Intel Provides Linux PCI Express NTB Support

  • Intel Provides Linux PCI Express NTB Support

    Phoronix: Intel Provides Linux PCI Express NTB Support

    Intel has provided Linux kernel support for PCI Express Non-Transparent Bridges (NTB). PCI-E NTB allows for interconnecting multiple systems using PCI Express...

    http://www.phoronix.com/vr.php?view=MTE0MDE

  • #2
    So is using Thunderbolt as a cluster interconnect now possible?

    • #3
      Sounds promising, although a lot of the wording went over my head...

      "I was writing something on my scratchpad when the doorbell rang. I looked out the window, but all I saw was a non-transparent bridge."

      • #4
        Looks like it...

        Originally posted by shiftyphil View Post
        So is using Thunderbolt as a cluster interconnect now possible?
        That would appear to be a possible application. This could be awesome!

        They may also be using this with their Xeon Phi HPC coprocessor (Knight's Corner). The Xeon Phi is a PCIe card that runs embedded Linux, so you would need a way for two Linux systems to interact over PCIe: http://www.anandtech.com/show/6017/i...c-goes-retail/

        See section 3.3.4: http://download.intel.com/design/int...ers/323328.pdf

        • #5
          Wicked cool!

          What is the API going to look like? It's basically RDMA, so maybe they can reuse that API? Really, this is not all that different from InfiniBand.

          • #6
            "Non-transparent bridge"

            The idea is that you have a super-high-bandwidth, super-low-latency connection. Your first instinct is to use a "transparent" bridge and just connect together the memory spaces of the two systems. Then you think some more and realize that this has no security and no robustness at all. With some more thinking you can solve these issues and lash the two systems together with hardware that is minimal and fast: the bridge arbitrates access so that the systems stay isolated but can still exchange data at mind-blowing rates and latencies.

            This is precisely the mission statement of InfiniBand.

            I've worked with InfiniBand RDMA, which is pretty similar. One system allocates a shared memory buffer and hands a reference to the other. The other system writes to the memory block and then signals that it's done. It looks a lot like the classic inter-process "shared memory". This stuff is all happening in nanosecond timeframes, so running a network protocol on top of this is just a sad loss of bandwidth and latency. If you really want to see the pure speed, then you have to program at the RDMA layer.

            I've set up NFS over InfiniBand RDMA using a fast RAID array. This is not your grandfather's NFS. It's not really possible to distinguish it from local disk, even a monster RAID array. The bonnie results on the client and the server are pretty similar.
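            For reference, an NFS-over-RDMA setup like the one described is a configuration exercise once the InfiniBand stack is up. A sketch of the relevant commands (the export path `server:/export` and mount point `/mnt` are placeholders; this requires RDMA-capable hardware and root):

```shell
# Server: load the NFS/RDMA transport and listen on the NFS-RDMA port (20049).
modprobe svcrdma
echo rdma 20049 > /proc/fs/nfsd/portlist

# Client: load the client-side RDMA transport and mount with the rdma option.
modprobe xprtrdma
mount -t nfs -o rdma,port=20049 server:/export /mnt
```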

            If you want to mess around with InfiniBand, you can get used gear on eBay for super cheap. It's probably going to lose out in the long run as an interconnect, but it's fun to play with and you can't argue with the prices.

            • #7
              Interesting. I can see that using a LAN to connect systems would be slower, but by how much? Also, what connectors would NTB use? I can see Thunderbolt as a possibility, but I'd imagine there would be something else as well.
