NVIDIA Preparing Their Linux InfiniBand Driver For 800Gb/s XDR

  • NVIDIA Preparing Their Linux InfiniBand Driver For 800Gb/s XDR

    Phoronix: NVIDIA Preparing Their Linux InfiniBand Driver For 800Gb/s XDR

    NVIDIA's latest patches intended for the upstream Linux kernel are over on the networking side of the house with their Mellanox wares as they prepare 800Gb/s (XDR) support within the RDMA/InfiniBand code...
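
    For context on where the 800Gb/s figure comes from: InfiniBand link speed is the per-lane signalling rate multiplied by the lane count, and XDR runs roughly 200Gb/s per lane, so a standard 4x port reaches 800Gb/s. A minimal sketch of that arithmetic (the per-lane numbers are the commonly quoted effective data rates, not values taken from NVIDIA's patches):

        /* Per-lane effective data rates (Gb/s) commonly quoted for recent
         * InfiniBand generations; a standard port bonds 4 lanes. */
        #include <stdio.h>

        struct ib_gen { const char *name; double gbps_per_lane; };

        int main(void)
        {
            const struct ib_gen gens[] = {
                { "EDR",  25.0 }, { "HDR",  50.0 },
                { "NDR", 100.0 }, { "XDR", 200.0 },
            };
            const int lanes = 4; /* typical 4x InfiniBand port */

            for (size_t i = 0; i < sizeof(gens) / sizeof(gens[0]); i++)
                printf("%-3s 4x: %4.0f Gb/s\n", gens[i].name,
                       gens[i].gbps_per_lane * lanes);
            return 0;
        }

    The last line of output is the 800Gb/s that these patches target.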


  • #2
    JFC, how do you feed such a beast?! Even PCIe 5 x16 provides only ~500Gb/s! Do they somehow use an x32 link?

    • #3
      Originally posted by kobblestown View Post
      JFC, how do you feed such a beast?! Even PCIe 5 x16 provides only ~500Gb/s! Do they somehow use an x32 link?
      Yes, the OCP (Open Compute Project) NIC 3.0 specification allows for x32 devices. NVIDIA's (Mellanox) ConnectX-7 family supports PCIe 5.0 x32, but not the XDR mode.
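
      For the back-of-the-envelope math behind the question, a quick sketch assuming PCIe 5.0's 32GT/s per lane and 128b/130b line encoding (real throughput is a bit lower once protocol overhead is counted):

          /* Raw payload bandwidth of a PCIe 5.0 link: lanes x 32 GT/s,
           * scaled by the 128b/130b line encoding. TLP headers and flow
           * control shave off a few more percent in practice. */
          #include <stdio.h>

          static double pcie_gbps(int lanes, double gt_per_s)
          {
              return lanes * gt_per_s * (128.0 / 130.0);
          }

          int main(void)
          {
              printf("PCIe 5.0 x16: ~%.0f Gb/s\n", pcie_gbps(16, 32.0)); /* ~504 */
              printf("PCIe 5.0 x32: ~%.0f Gb/s\n", pcie_gbps(32, 32.0)); /* ~1008 */
              return 0;
          }

      So an x16 Gen5 slot tops out just above 500Gb/s, which is why an 800Gb/s XDR port needs either an x32 connection (as with OCP NIC 3.0) or, eventually, a PCIe 6.0 x16 host link.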

      • #4
        Originally posted by kobblestown View Post
        JFC, how do you feed such a beast?! Even PCIe 5 x16 provides only ~500Gb/s! Do they somehow use an x32 link?
        Datacenters often use technologies not available on regular computers you can sit on or under your desk. They're moving their I/O onto the modules themselves, right next to the processors or very close to them. Keep in mind that these systems aren't the cheap machines we have sitting under our desks; HPC GPUs cost tens of thousands of dollars for a reason. They have a different architecture and hardware to squeeze out as much performance as they can, including onboard supplementary buses that connect modules directly via NVLink (in NVIDIA's case), or InfiniBand transceivers to talk to other modules or an intermediary InfiniBand switch, and so on.

        The closest thing a normal desktop user will see to this is the NVLink connector on top of NVIDIA's cards, which allows GPU-to-GPU communication faster than the system bus can manage, or Apple's M-class systems where the CPU, system RAM, and long-term storage all apparently sit on the same bus (which is probably why it's all soldered together; the signal tolerances are likely too tight for the inherent signal loss in any mechanical connector).

        Look up "Host Channel Adapter" and InfiniBand HCA. There are various levels depending on the generation and the advertised throughput. Some use PCIe, and some undoubtedly use a special HCA slot on the modules themselves. Someone who's more familiar with current HPC system hardware will probably be able to correct and expand on what's going on; my personal knowledge is antiquated at this point.
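
        If you have an InfiniBand HCA in a Linux box, the RDMA core already exposes the negotiated link rate through sysfs. A minimal sketch for reading it (the device name mlx5_0 and port 1 are assumptions; adjust for your adapter):

            /* Print the active InfiniBand link rate reported by the RDMA core,
             * e.g. "400 Gb/sec (4X NDR)" -- roughly what `ibstat` also shows. */
            #include <stdio.h>

            int main(void)
            {
                /* Hypothetical device/port; list /sys/class/infiniband/ to find yours. */
                const char *path = "/sys/class/infiniband/mlx5_0/ports/1/rate";
                char buf[128];
                FILE *f = fopen(path, "r");

                if (!f) {
                    perror(path);
                    return 1;
                }
                if (fgets(buf, sizeof(buf), f))
                    fputs(buf, stdout);
                fclose(f);
                return 0;
            }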

        • #5
          Michael

          FYI: https://www.linuxjournal.com/content...ngterm-support

          • #6
            "Datacenters often use technologies not available on regular computers..."

            I'm thinking this tech is the domain of supercomputers rather than mere datacenters.
