
10GbE Linux Networking Performance Between CentOS, Fedora, Clear Linux & Debian

  • #31
    Originally posted by jbennett View Post
    Also, Fedora is essentially RHEL/CentOS-next.
    No. Fedora is upstream of RHEL/CentOS, the same way Debian is upstream of Ubuntu.



    • #32
      For proper utilization of 10GbE you need to do some work yourself:
      1. Increase the MTU to the hardware limit, although 9000 is a good bet
      2. Use the maximum supported ring parameters (ethtool -g/-G)
      3. Set the number of channels to the number of CPU cores on the NIC's NUMA node (ethtool -l/-L)
      4. Pin the channel IRQs to said CPU cores on the NUMA node the NIC is connected to
      5. Also pin the application that is transmitting data to CPU threads on said NUMA node
      When you do this right, I am pretty sure you will get great performance regardless of distribution or (recent) kernel version; a sketch of these steps as commands follows below.
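
      A minimal sketch of those five steps; eth0, NUMA node 0, eight local cores, IRQ 128, and ./sender are assumptions purely for illustration, so adjust for your hardware:

      # 1. Jumbo frames (every hop on the path must also accept MTU 9000)
      ip link set dev eth0 mtu 9000
      # 2. Check the supported maxima, then apply them (4096 is a common limit)
      ethtool -g eth0
      ethtool -G eth0 rx 4096 tx 4096
      # 3. One channel per CPU core on the NIC's NUMA node (eight assumed here)
      ethtool -L eth0 combined 8
      # 4. Pin one channel IRQ per local core (IRQ numbers are listed in /proc/interrupts)
      echo 0 > /proc/irq/128/smp_affinity_list   # repeat per IRQ/core pair
      # 5. Keep the transmitting application on the same node (./sender is a placeholder)
      numactl --cpunodebind=0 --membind=0 ./sender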



      • #33
        Originally posted by ypnos View Post
        For proper utilization of 10GbE you need to do some work yourself:
        1. Increase the MTU to the hardware limit, although 9000 is a good bet
        2. Use the maximum supported ring parameters (ethtool -g/-G)
        3. Set the number of channels to the number of CPU cores on the NIC's NUMA node (ethtool -l/-L)
        4. Pin the channel IRQs to said CPU cores on the NUMA node the NIC is connected to
        5. Also pin the application that is transmitting data to CPU threads on said NUMA node
        When you do this right, I am pretty sure you will get great performance regardless of distribution or (recent) kernel version.
        +100!

        On the RX side, pinning the application and the IRQs to the NIC-adjacent NUMA node can alone be the difference between barely reaching 5-6 Gbps and maxing out the machine at ~200 Gbps.
        ... And yes, a dual Xeon machine running Fedora Server 27 can passively monitor 16+ x 10Gbps (or 4 x 40Gbps) NICs with near zero packet loss.

        - Gilboa
        oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
        oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
        oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
        Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.



        • #34
          Are there ways to automate those settings? Otherwise it's useless for automated testing.



          • #35
            Originally posted by fuzz View Post
            Are there ways to automate those settings? Otherwise it's useless for automated testing.
            Yes, but it's fairly complex.
            You can locate the NUMA node of the PCI-E slot from lspci and compare that to the NUMA information from lscpu; this gives you the CPU cores closest to the NIC.
            Now use ethtool to reduce the number of RSS queues to the number of cores on that NUMA node, and use the IRQ affinity settings to assign one IRQ per CPU core.
            Add some additional ethtool magic to configure the ring parameters, etc., and you should be done; see the sketch below.
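
            A rough sketch of that flow under a few assumptions: the NIC is eth0, the script runs as root, and the NUMA node is read from sysfs rather than parsed out of lspci output (it exposes the same information):

            #!/bin/bash
            DEV=eth0

            # NUMA node the NIC's PCIe slot is attached to (-1 means no NUMA info)
            NODE=$(cat /sys/class/net/$DEV/device/numa_node)
            [ "$NODE" -lt 0 ] && NODE=0

            # Logical CPUs on that node, taken from lscpu's parseable output
            CORES=($(lscpu -p=CPU,NODE | awk -F, -v n=$NODE '!/^#/ && $2==n {print $1}'))

            # One RSS queue per local CPU, and maxed-out rings (check ethtool -g first)
            ethtool -L $DEV combined ${#CORES[@]}
            ethtool -G $DEV rx 4096 tx 4096

            # Spread the channel IRQs across the local CPUs, one per core
            i=0
            for irq in $(awk -v d=$DEV '$0 ~ d {sub(":","",$1); print $1}' /proc/interrupts); do
                echo ${CORES[$((i % ${#CORES[@]}))]} > /proc/irq/$irq/smp_affinity_list
                i=$((i + 1))
            done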

            - Gilboa
            oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
            oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
            oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
            Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.



            • #36
              Originally posted by Britoid View Post
              Interesting that the RHEL-based distro's are last. Some kernel configuration maybe?
              CentOS uses a very old kernel.
              This is done for more stability, but at the cost of performance.
              A new CentOS release should come out this year and balance this.



              • #37
                Originally posted by cen1 View Post
                I still don't understand why anyone would run Fedora Server..
                Testing out the new stuff, maybe.
                Actually running it? Not unless it's something non-critical and you like to live on the edge...



                • #38
                  Originally posted by ThoreauHD View Post

                  Which is also why I don't understand why Ubuntu Server/RHEL/SLES isn't on there. Just some random ass desktop OS's thrown together.
                  SLES and Ubuntu yes.
                  RHEL is basically CentOS.



                  • #39
                    Originally posted by pegasus View Post
                    Congrats for expanding into new benchmarking territory, but there are new dragons here. These numbers all seem way too low. I regularly max out 100Gbit on old ivy bridge storage nodes running centos 6 and that's with less than 30min spent on tuning them. 10Gbit today can be maxed out with a single core ...
                    Maybe you can post your suggested optimizations and why you chose them?



                    • #40
                      There are many guides online. Broadcom have theirs, Intel have theirs, Mellanox have theirs... They're mostly the same: tuning your TCP stack settings and congestion algorithms based on LAN or WAN scenarios, pinning NIC interrupt processing threads to specific cores, enlarging the NIC queue lengths, making sure that offloads are enabled, etc. Mellanox even has a script that does all of that for you.
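
                      A flavor of what those guides typically change, as a sketch; the exact values are workload- and RTT-dependent and eth0 is an assumption, so treat these as illustrative rather than recommendations:

                      # Bigger socket buffers for high bandwidth-delay products (WAN-leaning values)
                      sysctl -w net.core.rmem_max=67108864
                      sysctl -w net.core.wmem_max=67108864
                      sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
                      sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"
                      # Congestion control: cubic is the usual default, bbr/htcp are common WAN picks
                      sysctl -w net.ipv4.tcp_congestion_control=cubic
                      # Longer device queues so bursts are not dropped before the stack sees them
                      sysctl -w net.core.netdev_max_backlog=250000
                      ip link set dev eth0 txqueuelen 10000
                      # Make sure the usual offloads are actually on
                      ethtool -K eth0 tso on gso on gro on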

