Linux Distributions vs. BSDs With netperf & iperf3 Network Performance


  • #91
    Originally posted by F.Ultra View Post
    There must definitely be something strange going on between Linux and the particular NIC used in the tested machine.
    It would not be the first, or even the tenth, time there has been a regression in Intel's Linux NIC driver. Just googling "intel nic regression linux" returns nearly 200k results.



    • #92
      Originally posted by indepe View Post
      Unexpectedly, one of the computers in my local network responds to ping requests (UDP). The time I get with Fedora 25 is 0.4 ms, which I guess corresponds to a transaction rate of roughly 2,500 / sec, somewhere midway between your 19,000 / sec and Michael's ~150 / sec on F25. These differences are enormous.

      (EDIT: Fedora 25 Workstation, that is.)
      For normal ping I have at the moment:
      Code:
      --- lon.x.com ping statistics ---
      62 packets transmitted, 62 received, 0% packet loss, time 61000ms
      rtt min/avg/max/mdev = 0.039/0.090/0.128/0.021 ms
      These are ICMP pings, however, not UDP or TCP ones, and the switch used is an HP 2910al-24G.
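
      For a more direct comparison, the UDP round-trip transaction rate could be measured with netperf's UDP_RR test; a minimal sketch, reusing the hostname from the ping output above and single-byte payloads:
      Code:
      # 1-byte request / 1-byte response over UDP; reports transactions per second
      netperf -H lon.x.com -t UDP_RR -- -r 1,1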



      • #93
        Originally posted by F.Ultra View Post

        For normal ping I have at the moment:
        Code:
        --- lon.x.com ping statistics ---
        62 packets transmitted, 62 received, 0% packet loss, time 61000ms
        rtt min/avg/max/mdev = 0.039/0.090/0.128/0.021 ms
        These are ICMP pings, however, not UDP or TCP ones, and the switch used is an HP 2910al-24G.
        Well, both are SOCK_DGRAM, so I wasn't distinguishing between UDP and ICMP... however, that suggests ping may have quite some overhead compared to repeated single-byte UDP and TCP exchanges.

        So 0.4 ms might also correspond to 0.09 / 0.4 × 19,200/sec = 4,320/sec. Even better than the earlier 2,500/sec estimate, but still far from 19,200/sec.

        What's your network card, if I may ask? Something like an Intel I350?



        • #94
          Originally posted by indepe View Post

          Well, both are SOCK_DGRAM, so I wasn't distinguishing between UDP and ICMP... however, that suggests ping may have quite some overhead compared to repeated single-byte UDP and TCP exchanges.

          So 0.4 ms might also correspond to 0.09 / 0.4 × 19,200/sec = 4,320/sec. Even better than the earlier 2,500/sec estimate, but still far from 19,200/sec.

          What's your network card, if I may ask? Something like an Intel I350?
          It's an I350 chip, yes (SuperMicro X10RW-E motherboard with built-in NICs). It could be that the Linux kernel prioritizes TCP/UDP packets over ICMP, since good performance matters far more there than for ping. Also, since the TCP_RR test runs over an established connection, the data stream goes through fewer hoops in the kernel than a random ICMP ping would.
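
          For reference, a single-byte TCP_RR run over one established connection looks something like this (<server> is a placeholder for the target host):
          Code:
          # one persistent TCP connection, 1-byte request / 1-byte response
          netperf -H <server> -t TCP_RR -- -r 1,1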



          • #95
            Originally posted by Pawlerson View Post
            Two questions:

            1. Was a firewall enabled on the benchmarked systems? (In Linux distributions it's usually enabled, but it's probably not enabled by default in FreeBSD.)
            2. Does it matter?
            For the second question: yes, it does matter, as you and your retinue are always pouring so much hatred on *BSD and, to an even larger extent, Solaris. When it comes to other OSes (BSD and Solaris excluded), Linux is just one OS among others.



            • #96
              Sorry to resume an old thread, but this test is wrong on so many levels.
              • You may wonder why, in so many tests, the results are the same for every OS: this is because you're reaching line rate.
              • Network cards have multiple hardware queues and place received packets into a queue based on the L3/L4 hash. Every hardware queue is handled by a kernel thread, so if the traffic comes from only a few connections, you may use only a few cores. This is a standard technology named RSS (Receive Side Scaling); see the ethtool sketch just after this list.
              • Software like iperf3 and netperf can handle only a fraction of the traffic of a real traffic generator; they are tiny toys compared to, e.g., Anritsu or Xena.
              • In the networking industry, the unit of measure is always packets per second (and its multiples), not bits per second, because handling a packet takes the same effort whether it's a 64-byte packet or a 9000-byte jumbo frame. If you check the datasheet of any router, NIC, or software, the specs always say Mpps (millions of packets per second), not Gbit/s.
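              As a quick way to inspect the RSS setup on Linux (the interface name eth0 below is illustrative), ethtool can show how many hardware queues exist and how the hash spreads flows across them:
              Code:
              # number of RX/TX hardware queues (channels)
              ethtool -l eth0
              # RSS indirection table: which hash buckets map to which RX queues
              ethtool -x eth0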
              If you don't want to, or can't, get a hardware traffic generator, you can get much more meaningful results with iperf by limiting the UDP packet size or the card MTU to something really small, like 64 bytes, and posting the results in Mpps instead of Mbit/s. And be sure to use a very high connection count, at least 4x the number of CPUs, if you want the hardware queues to be evenly filled and all the CPUs working.
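
              A rough sketch of that approach with iperf3 (the server address, stream count, and duration are illustrative; divide the reported total datagrams by the duration to get pps):
              Code:
              # server side
              iperf3 -s
              # client side: 64-byte UDP datagrams, unthrottled (-b 0), 32 parallel streams, 30 s
              iperf3 -u -c <server> -l 64 -b 0 -P 32 -t 30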

              If you are just curious, this is an IP forwarding test made with 5,000 connections and a 10 Gbit card, with a real generator.


