Mellanox Platform Support Coming In Linux 4.9


  • Mellanox Platform Support Coming In Linux 4.9

    Phoronix: Mellanox Platform Support Coming In Linux 4.9

    The x86/platform updates for the Linux 4.9 kernel, which entered development on Sunday, are bringing initial support for the Mellanox systems platform...


  • #2
    So what application does hardware this advanced have? I would imagine you are getting what you're paying for, so what kind of system would you use that would not bottleneck it?



    • #3
      Think supercomputers and high-end servers. So: hand-tuned nuclear weapon simulations, meteorological models, high-frequency trading (HFT) programs... think Goldman Sachs front-running everybody else's trades by a nanosecond, for a tenth of a penny of price difference, every millisecond of every second of every minute. Well, you get the picture.



      • #4
        Originally posted by Rubble Monkey:
        So what application does hardware this advanced have? I would imagine you are getting what you're paying for, so what kind of system would you use that would not bottleneck it?
        As Michael wrote, "Mellanox products are widely-used in the high-end HPC market and data centers." HPC stands for high performance computing.



        • #5
          Mellanox is a switch and ASIC manufacturer (and very open-source friendly).

          Note that Mellanox has supported Linux for a long time, and its latest x86 switches also run Cumulus Linux and other free network OSes.

          This is all about the switchdev infrastructure.


          I'm not sure what this new "Mellanox platform" is, though.



          • #6
            I'd love to get my hands on some of that kit.

            Infiniband has several major advantages over Ethernet:

            * Remote DMA (RDMA)! Meaning computer A can write directly into a memory region on computer B, bypassing B's CPU entirely (Direct Memory Access). This can be used for all kinds of cool things.
            * Very low latency. With Ethernet, it's difficult to get below 1 millisecond ping times; with InfiniBand you can get MICROsecond ping times, three orders of magnitude better.
            * Good throughput. QDR links from 2007 do 40 gigabits per second, and more recent generations (FDR, EDR) reach 56-100 gigabits.
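            To put a number on "ping time" yourself, here is a minimal round-trip probe over loopback TCP (my own sketch, not from the thread; loopback RTTs say nothing about a real Ethernet or InfiniBand fabric, where you would use tools like ib_send_lat instead, but the measurement technique is the same):

```python
# Minimal round-trip latency probe over loopback TCP.
# Illustrative only: real fabric latency must be measured on the fabric.
import socket
import threading
import time

def echo_server(sock: socket.socket) -> None:
    """Accept one connection and echo everything back."""
    conn, _ = sock.accept()
    with conn:
        while data := conn.recv(64):
            conn.sendall(data)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # port 0 = any free port
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.create_connection(srv.getsockname())
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no batching

samples = []
for _ in range(1000):
    t0 = time.perf_counter()
    cli.sendall(b"x")
    cli.recv(64)
    samples.append(time.perf_counter() - t0)
cli.close()

best_us = min(samples) * 1e6
print(f"best loopback RTT: {best_us:.1f} microseconds")
```

            Taking the minimum of many samples filters out scheduler noise, which is the usual convention for latency microbenchmarks.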

            Some things like distributed caches or distributed computing systems benefit hugely from things like that.

            And yes, normal clustering over Ethernet (if you can parallelize the workload) will give you scalability, but it will NOT give you low-latency response times. 10GbE or 100GbE can deliver some of InfiniBand's advantages, but that kit is also expensive.

            BTW, I have seen used InfiniBand kit on eBay for more acceptable prices. Maybe when I have some time (and space) I'll buy some and experiment with it.
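            One caveat on the throughput bullet: the headline numbers are signaling rates, and the usable data rate is lower because of line encoding. A quick back-of-the-envelope (my own sketch; per-lane rates and encodings taken from the published InfiniBand generations):

```python
# Raw signaling rate vs. usable throughput per InfiniBand generation
# on a standard 4-lane (4x) link. SDR/DDR/QDR use 8b/10b encoding
# (80% efficient); FDR and EDR use 64b/66b (~97% efficient).

GENERATIONS = {
    # name: (per-lane signaling rate in Gb/s, encoding efficiency)
    "SDR": (2.5, 8 / 10),
    "DDR": (5.0, 8 / 10),
    "QDR": (10.0, 8 / 10),
    "FDR": (14.0625, 64 / 66),
    "EDR": (25.78125, 64 / 66),
}

def effective_gbps(name: str, lanes: int = 4) -> float:
    """Usable data rate of a link after encoding overhead."""
    rate, eff = GENERATIONS[name]
    return rate * lanes * eff

for gen in GENERATIONS:
    print(f"{gen}: {effective_gbps(gen):.1f} Gb/s effective on a 4x link")
# QDR's "40 gigabit" link carries 32 Gb/s of actual data;
# EDR lands at an even 100 Gb/s.
```

            So a "40 Gb/s" QDR link moves 32 Gb/s of payload bits, which is why the later generations switched to the much cheaper 64b/66b encoding.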



            --Coder

            Originally posted by Rubble Monkey:
            So what application does hardware this advanced have? I would imagine you are getting what you're paying for, so what kind of system would you use that would not bottleneck it?





              • #8
                Originally posted by spirit:
                Mellanox is a switch && asics manufacturer. (and very opensource friendly)
                They are also a platinum founding member of the RISC-V Foundation.

