10GbE Linux Networking Performance Between CentOS, Fedora, Clear Linux & Debian


  • #21
    Originally posted by cb88 View Post

    Deploy... do you even know what Haiku is? The nightlies are usually pretty stable, which is why nobody bothered to do a beta release for 7 years.
    Haiku? Heroku I've heard of; that one's pretty famous. Never used it, though, and I don't know a thing about their internals.



    • #22
      @Michael: Broken geometric mean ...

      So, as there is no mention of how the geometric mean is calculated, it is somewhat difficult to guess, but the values are most likely garbage!

      Compare e.g. the values for Fedora and Clear on Server 1:
      • Nuttcp and Iperf are clear wins for Fedora (4 cases)
      • Ethr is more or less tied, small advantage for Clear (3 cases)
      • Netperf is one win, one loss, but favorable for Fedora
      So that's 5 considerable wins for Fedora and 4 marginal wins for Clear; to me this sounds like it should be a win for Fedora, but the bar chart tells otherwise. LibreOffice's GEOMEAN comes to the same conclusion (same weight for all 9 test cases, using the reciprocal for the Ethr latency).

      Or is there some odd weighting going on?
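
      (For anyone wanting to reproduce the check, here is a minimal Python sketch of an equal-weight geometric mean that takes the reciprocal of "less is better" results before combining them, as described above. The sample values are hypothetical placeholders, not the actual benchmark numbers from the article.)

      Code:
      import math

      def geo_mean(values):
          # Equal-weight geometric mean, computed in log space to
          # avoid overflowing the intermediate product.
          return math.exp(sum(math.log(v) for v in values) / len(values))

      # (value, higher_is_better) pairs -- hypothetical placeholders,
      # NOT the actual Fedora/Clear Linux results.
      results = [
          (9400.0, True),   # e.g. a throughput result in Mbit/s
          (9100.0, True),
          (85.0, False),    # e.g. the Ethr latency in microseconds
      ]

      # Take the reciprocal of "less is better" results so that a
      # larger normalized value is always better, then combine.
      normalized = [v if higher else 1.0 / v for v, higher in results]
      print(geo_mean(normalized))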



      • #23
        Originally posted by StefanBruens View Post
        @Michael: Broken geometric mean ...

        Or is there some odd weighting going on?
        The full data set can be found at https://openbenchmarking.org/result/...SP-1901152SP62. It looks like the link originally got malformed in the article, but it has now been updated to make it clearer.

        And the geometric mean algorithm can be found at https://github.com/phoronix-test-sui...s_math.php#L25; there is no weighting.
        Michael Larabel
        https://www.michaellarabel.com/



        • #24
          Originally posted by Michael View Post

          And the geometric mean algorithm can be found at https://github.com/phoronix-test-sui...s_math.php#L25; there is no weighting.
          The algorithm is wrong for the "less is better" test cases; you have to take the reciprocal ...



          • #25
            Originally posted by StefanBruens View Post

            The algorithm is wrong for the "less is better" test cases; you have to take the reciprocal ...
            PTS is already doing that in a different section of the code prior to passing it to that function.
            Michael Larabel
            https://www.michaellarabel.com/
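
            (In other words, the reciprocal step and the mean itself live in two separate places in the code. A rough Python sketch of that split, with hypothetical function names rather than the actual PTS code:)

            Code:
            def normalize(raw_results):
                # Step 1 (done elsewhere in PTS, per the reply above):
                # take the reciprocal of "less is better" results so
                # every value is on a "higher is better" scale.
                return [v if higher else 1.0 / v for v, higher in raw_results]

            def geometric_mean(values):
                # Step 2 (the unweighted mean itself): assumes its
                # inputs have already been normalized.
                product = 1.0
                for v in values:
                    product *= v
                return product ** (1.0 / len(values))

            print(geometric_mean(normalize([(9400.0, True), (85.0, False)])))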



            • #26
              Originally posted by edwaleni View Post
              Good start here, but it seems some data is needed to get more context:

              - Packet or frame sizing (see jumbo request above).
              - Forwarding rate of the switch
              - Is TCP offload active?
              - What was the load on the server at measured connection value?

              Perhaps some of the reports that Kevin Tolly does on network benchmarking at http://reports.tolly.com/LatestReports.aspx might help. Most of the reports are free and only require an email registration.

              I am not asking Phoronix to be as complete as Tolly is, as that is not practical given how PTS is structured.

              SmallNetBuilder uses IxChariot by Ixia, but again that is probably beyond PTS since it is a commercial product and not open source.

              I just read about the Ethr tool last week for the first time, so I will have to examine it some more.
              It would also be helpful to know the physical path through the server from the SFP to the CPU, so links to vendor data would be useful here. It matters because motherboards are not all alike; examples of those differences include clocking differences and Infinity Fabric speed issues on AM4 boards with Zen CPUs. Supermicro, for one, includes a simple diagram showing the major chips on their boards and the datapaths/PCIe lane counts between them. Sometimes those diagrams show a datapath you don't expect, like a PLX chip (a PCIe multiplexer), or a dual-CPU server with all network IO handled through one of the two CPUs; Sun did that years ago, and some older Intel dual-CPU designs were like that.

              So that means checking which CPU cores are running the tests and which CPU cores are handling network IO. In a single-CPU server it should all be in the same chip, but in a dual-CPU server there might be an intermediary fabric to cross, which should be "transparent" at 10Gbps speeds.
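
              (One quick way to make that check on Linux is to see which cores are actually accumulating the NIC's interrupts. A rough sketch, assuming the usual /proc/interrupts layout and using "eth0" as a placeholder for the real interface or driver queue name:)

              Code:
              def nic_irq_cores(pattern="eth0"):
                  # Summarize which CPU cores service a NIC's interrupts
                  # by reading /proc/interrupts (Linux only).
                  with open("/proc/interrupts") as f:
                      cpus = f.readline().split()  # header row: CPU0 CPU1 ...
                      for line in f:
                          if pattern not in line:
                              continue
                          fields = line.split()
                          counts = fields[1:1 + len(cpus)]  # one counter per CPU
                          for cpu, count in zip(cpus, counts):
                              if int(count) > 0:
                                  print(f"IRQ {fields[0]} ({fields[-1]}): {count} on {cpu}")

              nic_irq_cores()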

              As for the throughput of the switch: if only 2 ports are being used, then the overall throughput of the ASIC(s) in the switch is meaningless. Any switch ASIC vendor worth their silicon at least gets port-to-port throughput at full "line speed" (or almost) right. Overall ASIC throughput is only a factor when all ports on the switch are pushed to full "line speed"; that's when you can find strange throttling behaviors in ASICs.

              Frame size also factors into ASIC throughput. Not all ASICs are capable of full "line speed" at the smallest frame sizes. It was not uncommon in the past to see switch ASIC throughput ramp up from 64-byte frames, level off around 256 to 512 bytes (in most cases), and stay flat ("line speed") from there to the largest frame sizes.
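
              (To put numbers on the small-frame case: every Ethernet frame also occupies 20 extra byte-times on the wire for the preamble, start-of-frame delimiter, and inter-frame gap, which is what makes the smallest frames so demanding. A quick back-of-envelope calculation in Python:)

              Code:
              LINE_RATE_BPS = 10_000_000_000  # 10GbE
              WIRE_OVERHEAD = 20  # preamble + SFD + inter-frame gap, in bytes

              for frame_bytes in (64, 256, 512, 1518):
                  fps = LINE_RATE_BPS / ((frame_bytes + WIRE_OVERHEAD) * 8)
                  print(f"{frame_bytes:>5}-byte frames: {fps:,.0f} frames/sec")

              # 64-byte frames work out to roughly 14.88 million frames per
              # second, which is where small-frame forwarding rates fall short.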



              • #27
                Originally posted by kallisti5 View Post
                You forgot to test Haiku (https://haiku-os.org). They have a 10Gbit network driver now in the latest nightlies :-)

                Emulex OCE cards (SFP+)
                Actually, most mainstream Linux distributions should have some degree of 10Gbps support by now, assuming the distro creators included those modules.

                As always, everyone has a favorite OS that they think Michael ought to test, but they fail to realize his time and resources are limited. If I remember his year-end stats right, he publishes multiple articles every day on average, at all hours of the day and night. By keeping the same OS choices in his testbed he learns all the little details of how they work and can compare OS version tests over a long period of time; that long baseline of tests is critical to understanding the evolution of Linux. Then there is the challenge of managing a fleet of test servers. Even with his automation that can be a time-consuming job, since automation does not replace physical labor (changing cards, cables, CPUs, building racks, and so on).

                So would it be nice if Michael could test "XYZ" OS? Sure it would. You just have to pay him for his time to do it, and I bet he will find the time.

                I, for one, am quite satisfied with the tests that Michael publishes.

                Now I just need to find a way to do that which does not involve one of those (IMHO) sketchy electronic payment schemes and preserves my privacy...



                • #28
                  Michael, the next time you run 10GbE tests, can you compare to Windows too? I have never been able to get good network speed on Windows.



                  • #29
                    Originally posted by cen1 View Post
                    I still don't understand why anyone would run Fedora Server...
                    Someone who wants to be protected from vulnerabilities by hardening?
                    I don't understand why anyone would use Clear Linux, which is clearly optimized for one brand of CPUs.



                    • #30
                      Originally posted by ThoreauHD View Post
                      Which is also why I don't understand why Ubuntu Server/RHEL/SLES aren't on there. Just some random-ass desktop OSes thrown together.
                      CentOS is RHEL with different branding. I don't understand why you post when you don't understand.

