Windows Server 2019 vs. Linux vs. FreeBSD Gigabit & 10GbE Networking Performance

  • Windows Server 2019 vs. Linux vs. FreeBSD Gigabit & 10GbE Networking Performance

    Phoronix: Windows Server 2019 vs. Linux vs. FreeBSD Gigabit & 10GbE Networking Performance

    FreeBSD 12.0, Windows Server 2019, and five Linux distributions were tested to compare their Gigabit and 10GbE networking performance as part of our latest benchmarks. Additionally, the Mellanox 10GbE adapter's performance was examined when using the company's Linux tuning script, compared to the out-of-the-box performance of the enterprise Linux distribution releases.

    http://www.phoronix.com/vr.php?view=27451
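
    For readers curious what bulk-throughput tools of this kind measure at bottom: they time how many bytes a TCP stream can push between a sender and a receiver. A minimal loopback sketch of that idea (illustrative only; the buffer size and duration are arbitrary, and loopback numbers say nothing about real NICs):

```python
import socket
import threading
import time

PAYLOAD = b"\x00" * 65536   # 64 KiB per send() call, a typical bulk chunk
DURATION = 0.5              # seconds to keep sending

def receiver(srv, total):
    """Accept one connection and count every byte it delivers."""
    conn, _ = srv.accept()
    while True:
        data = conn.recv(1 << 20)
        if not data:            # sender closed: stream is done
            break
        total[0] += len(data)
    conn.close()

# Listen on an ephemeral loopback port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

total = [0]
t = threading.Thread(target=receiver, args=(srv, total))
t.start()

# Sender side: push bulk data for DURATION seconds, then close.
cli = socket.socket()
cli.connect(srv.getsockname())
start = time.monotonic()
while time.monotonic() - start < DURATION:
    cli.sendall(PAYLOAD)
cli.close()
t.join()
elapsed = time.monotonic() - start

gbits = total[0] * 8 / elapsed / 1e9
print(f"loopback throughput: {gbits:.2f} Gbit/s")
```

    Real tools such as iperf and Ethr add parallel streams, latency tests, and proper warm-up on top of this basic loop.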

  • #2
    Since you compare tuned and non-tuned Linux distributions, it would also be good to see tuned FreeBSD 12 results.

    Use settings from this guide:
    https://calomel.org/freebsd_network_tuning.html
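
    That guide revolves largely around socket-buffer sizing via sysctl. A typical /etc/sysctl.conf fragment in that spirit (values here are illustrative; the guide derives them from link speed and RTT):

```
# Allow larger socket buffers overall (bytes).
kern.ipc.maxsockbuf=16777216

# Raise the TCP send/receive buffer auto-tuning limits for 10GbE.
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216

# Larger initial send/receive windows.
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144
```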



    • #3
      I have to assume the frame size is at default, 1518 bytes.

      Probably not material, but the Mellanox ConnectX-2 cards are PCIe 2.0 x8 adapters, and they do support packet-processing offload.

      I use the same Mellanox cards at home and they are very well supported.

      Your performance can vary. I have run those same cards on some cheaper consumer-type motherboards with plenty of lanes available and would get bus stalls on some of them.

      But the 10GbE is very usable. I have run remote CUDA across it to increase graphics performance locally and it makes a huge difference.





      • #4
        I just upgraded my home network to 10GbE so the timing of this article is perfect!

        My recently retired dual-X5650 ESXi rig now has a new lease on life. I'm very interested to see how it performs as an Ubuntu rig equipped with a 10GbE NIC. A 10Gtek Mellanox ConnectX-2 is now on its way.



        • #5
          Was one side of each test always running Ubuntu 18.10, non-tuned?
          Was the tuning done with a latency profile or a throughput profile?

          I'm wondering: if you use a dual-Xeon machine as a server and network performance matters, wouldn't you spend more than $20 on a network card?
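
          For context, on the enterprise distributions that profile choice is normally made with tuned-adm. Assuming the stock tuned package, switching between the two looks like this (a sketch; it requires the tuned daemon to be installed and running):

```
# List available profiles (includes network-latency and network-throughput).
tuned-adm list

# Favor bulk transfer rates:
tuned-adm profile network-throughput

# Or favor low round-trip latency:
tuned-adm profile network-latency

# Confirm which profile is active.
tuned-adm active
```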



          • #6
            Originally posted by indepe View Post
            Was one side of each test always running Ubuntu 18.10, non-tuned?
            Was the tuning done with a latency profile or a throughput profile?

            I'm wondering: if you use a dual-Xeon machine as a server and network performance matters, wouldn't you spend more than $20 on a network card?
            Why would he spend $200+ when the only benefit is a fixed copper or fiber connection type?

            The largest cost of 10GbE is the physical connection. By buying an adapter that has an empty SFP+ slot, he maintains the flexibility of using different physical connection types (which can be expensive) and can still use low-cost passive direct-attach copper for nearby equipment.

            Buying a $300+ dual-port Intel- or Broadcom-based adapter is fine and dandy if you need 24x7 support, but for simple throughput testing with proven, well-supported devices, the Mellanox NIC is perfect.



            • #7
              Originally posted by edwaleni View Post

              Why would he spend $200+ when the only benefit is a fixed copper or fiber connection type?

              The largest cost of 10GbE is the physical connection. By buying an adapter that has an empty SFP+ slot, he maintains the flexibility of using different physical connection types (which can be expensive) and can still use low-cost passive direct-attach copper for nearby equipment.

              Buying a $300+ dual-port Intel- or Broadcom-based adapter is fine and dandy if you need 24x7 support, but for simple throughput testing with proven, well-supported devices, the Mellanox NIC is perfect.
              I'm not sure whether your response suggests that I have a problem with SFP+ or with Mellanox as a brand; neither is (of course) the case. From what I read, even a ~$200 single-port Mellanox card is able to fully saturate a 100 Gb line using a single, ordinary Xeon CPU, so I am definitely not questioning the brand being used.

              Perhaps a $20 network card will not be a bottleneck when testing latency and throughput on a 10 Gb network, but without that being tested by itself, I'd wonder whether the results depend too much on each OS's ability to deal with the specific shortcomings of one specific card.



              • #8
                Originally posted by indepe View Post

                I'm not sure whether your response suggests that I have a problem with SFP+ or with Mellanox as a brand; neither is (of course) the case. From what I read, even a ~$200 single-port Mellanox card is able to fully saturate a 100 Gb line using a single, ordinary Xeon CPU, so I am definitely not questioning the brand being used.

                Perhaps a $20 network card will not be a bottleneck when testing latency and throughput on a 10 Gb network, but without that being tested by itself, I'd wonder whether the results depend too much on each OS's ability to deal with the specific shortcomings of one specific card.
                I only brought up brand names because cards from those NIC chipset suppliers tend to run higher in price, especially when using a fixed physical connection type.

                Ultimately I think a good test would be to connect one server, using the same NIC, to a Xena 40 Gbps test platform, boot each OS, and run it through its paces.

                https://xenanetworks.com

                That would eliminate variance and questions about switch forwarding rates, and would isolate the traffic to one activity (instead of three).

                Since I suspect Warren Buffett is not an active reader of Phoronix, I doubt Michael can set aside the needed Benjamins to acquire said Xena. Therefore he has to test with the best combination available: Mellanox cards, a Ubiquiti switch, and another 10GbE-capable host. This is probably a good representation of what his readers would use as well.

                I finally got time to read through the Ethr documentation. There are still a lot of enhancements in its pipeline, but it's good to see that PTS has something to work with.
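
                For anyone wanting to try it, Ethr uses the usual server/client pattern. Assuming a released ethr binary on both hosts, a basic TCP bandwidth run looks roughly like this (flags per its documentation, so treat this as a sketch):

```
# On the receiving host: start Ethr in server mode.
ethr -s

# On the sending host: run a TCP bandwidth test against the server
# with 4 parallel sessions for 30 seconds.
ethr -c <server-ip> -t b -n 4 -d 30s
```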



                • #9
                  For those who run stuff like this at home... I know, I don't NEED it, I just WANT it :-)

                  What cables are used to connect Mellanox cards? IIRC, some kind of fiber/coax that gets really expensive for longer runs, like wiring up a house?



                  • #10
                    Originally posted by vw_fan17 View Post
                    For those who run stuff like this at home... I know, I don't NEED it, I just WANT it :-)

                    What cables are used to connect Mellanox cards? IIRC, some kind of fiber/coax that gets really expensive for longer runs, like wiring up a house?
                    The cables I used (and other parts) are outlined in: https://www.phoronix.com/scan.php?pa...-4distro&num=2

                    At least for the shorter runs, the cables weren't expensive. I haven't looked at the cost of any long runs yet; that can wait until there's a 10GbE backbone rather than just this one rack.
                    Michael Larabel
                    http://www.michaellarabel.com/

