Thread: Intel Provides Linux PCI Express NTB Support

  1. #1
    Join Date
    Jan 2007
    Posts
    14,912

    Default Intel Provides Linux PCI Express NTB Support

    Phoronix: Intel Provides Linux PCI Express NTB Support

    Intel has provided Linux kernel support for PCI Express Non-Transparent Bridges (NTB). PCI-E NTB allows for interconnecting multiple systems using PCI Express...

    http://www.phoronix.com/vr.php?view=MTE0MDE

  2. #2
    Join Date
    Dec 2009
    Posts
    2

    Default

    So is using Thunderbolt as a cluster interconnect now possible?

  3. #3
    Join Date
    Sep 2008
    Location
    Vilnius, Lithuania
    Posts
    2,563

    Default

    Sounds promising, although a lot of the wording went over my head...

    "I was writing something on my scratchpad when the doorbell rang. I looked out the window, but all I saw was a non-transparent bridge."

  4. #4
    Join Date
    Apr 2012
    Location
    Riverside, California, USA
    Posts
    2

    Default Looks like it...

    Quote Originally Posted by shiftyphil
    So is using Thunderbolt as a cluster interconnect now possible?
    That would appear to be a possible application. This could be awesome!

    They may also be using this with their Xeon Phi HPC coprocessor (Knights Corner). The Xeon Phi is a PCIe card that runs embedded Linux, so you would need a way for two Linux systems to interact over PCIe: http://www.anandtech.com/show/6017/i...c-goes-retail/

    See section 3.3.4: http://download.intel.com/design/int...ers/323328.pdf

  5. #5
    Join Date
    Jul 2009
    Posts
    351

    Default Wicked cool!

    What is the API going to look like? It's basically RDMA, so maybe they can reuse that API? Really, this is not all that different from InfiniBand.

  6. #6
    Join Date
    Jul 2009
    Posts
    351

    Default "Non-transparent bridge"

    The idea is that you have a super-high-bandwidth, super-low-latency connection. Your first instinct is to use a "transparent" bridge and just join the memory spaces of the two systems together. Then you think some more and realize that this gives you no security and no robustness at all. With a bit more thinking you can solve those issues: lash the two systems together with hardware that is minimal and fast, while the bridge arbitrates access so the systems can still exchange data at mind-blowing rates and latencies.

    This is precisely the mission statement of InfiniBand.
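
    Just to make the idea concrete: the moving parts an NTB gives you are basically a handful of scratchpad registers, a doorbell, and a memory window that really points at the peer's RAM (so the joke above is more accurate than it sounds). Here's a toy model of that in plain C; none of the names below come from the actual kernel driver, it's only meant to show how the pieces fit together:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define SPAD_COUNT 4
    #define MW_SIZE    64

    /* One side of the (pretend) non-transparent bridge. */
    struct ntb_side {
        uint32_t scratchpad[SPAD_COUNT]; /* small registers the peer can write */
        uint32_t doorbell;               /* peer sets bits here to say "look now" */
        uint8_t  local_mem[MW_SIZE];     /* memory the peer's window maps onto */
        struct ntb_side *peer;           /* stands in for the PCIe link itself */
    };

    /* Writing through the memory window lands in the *peer's* memory. */
    static void mw_write(struct ntb_side *s, size_t off, const void *buf, size_t len)
    {
        memcpy(s->peer->local_mem + off, buf, len);
    }

    /* Post a small control value into one of the peer's scratchpad registers. */
    static void spad_write(struct ntb_side *s, int idx, uint32_t val)
    {
        s->peer->scratchpad[idx] = val;
    }

    /* Set a bit in the peer's doorbell to raise its "interrupt". */
    static void doorbell_ring(struct ntb_side *s, uint32_t bit)
    {
        s->peer->doorbell |= (1u << bit);
    }

    int main(void)
    {
        struct ntb_side a = {0}, b = {0};
        a.peer = &b;
        b.peer = &a;

        /* Side A: publish the message length, copy the payload through the
         * memory window, then ring the doorbell. */
        const char msg[] = "hello across the bridge";
        spad_write(&a, 0, sizeof msg);
        mw_write(&a, 0, msg, sizeof msg);
        doorbell_ring(&a, 0);

        /* Side B: sees its doorbell bit set and reads its own local memory. */
        if (b.doorbell & 1)
            printf("B got %u bytes: %s\n", (unsigned)b.scratchpad[0],
                   (const char *)b.local_mem);
        return 0;
    }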

    I've worked with InfiniBand RDMA, which is pretty similar. One system allocates a shared memory buffer and hands a reference to the other; the other system writes into that block and then signals that it's done. It looks a lot like classic inter-process "shared memory". This all happens on microsecond timescales, so running a network protocol on top of it is just a sad loss of bandwidth and latency. If you really want to see the pure speed, you have to program at the RDMA layer.
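
    For anyone curious what that pattern looks like in code, here's a rough sketch using the libibverbs API: an RDMA WRITE straight into the peer's buffer, chained to a zero-byte SEND that acts as the "done" signal. All the connection setup (device, protection domain, CQ, QP, exchanging the remote address and rkey) is assumed to have been done already, and the function name is just a placeholder:

    #include <infiniband/verbs.h>
    #include <stdint.h>

    int rdma_write_and_signal(struct ibv_qp *qp, struct ibv_mr *mr,
                              void *local_buf, uint32_t len,
                              uint64_t remote_addr, uint32_t remote_rkey)
    {
        /* Scatter/gather entry pointing at our registered local buffer. */
        struct ibv_sge sge = {
            .addr   = (uintptr_t)local_buf,
            .length = len,
            .lkey   = mr->lkey,
        };

        /* First work request: RDMA WRITE straight into the peer's buffer.
         * The remote CPU is not involved in this transfer at all. */
        struct ibv_send_wr write_wr = {
            .wr_id   = 1,
            .sg_list = &sge,
            .num_sge = 1,
            .opcode  = IBV_WR_RDMA_WRITE,
            .wr.rdma.remote_addr = remote_addr,
            .wr.rdma.rkey        = remote_rkey,
        };

        /* Second work request: a zero-byte SEND the peer polls for,
         * acting as the "I'm done" doorbell. */
        struct ibv_send_wr signal_wr = {
            .wr_id      = 2,
            .opcode     = IBV_WR_SEND,
            .send_flags = IBV_SEND_SIGNALED,
        };
        write_wr.next = &signal_wr;

        struct ibv_send_wr *bad = NULL;
        return ibv_post_send(qp, &write_wr, &bad);
    }

    Because both work requests go down the same reliable connection, the peer won't see the SEND before the written data has landed in its memory, which is what makes the signal trustworthy.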

    I've set up NFS over InfiniBand RDMA using a fast RAID array. This is not your grandfather's NFS. It's practically impossible to distinguish from local disk, even a monster RAID array; the bonnie results on the client and the server are pretty similar.

    If you want to mess around with InfiniBand, you can get used gear on eBay for super cheap. It's probably going to lose out in the long run as an interconnect, but it's fun to play with and you can't argue with the prices.

  7. #7
    Join Date
    Sep 2008
    Location
    Vilnius, Lithuania
    Posts
    2,563

    Default

    Interesting. I can see that using a LAN to connect things would be slower, but by how much? Also, what connectors would NTB use? I can see Thunderbolt as a possibility, but I'd imagine there should be something else as well.
