
Linux Distributions vs. BSDs With netperf & iperf3 Network Performance


  • #71
    Btw, my original point was to invalidate your claim about new features not being added to ZFS. By now we have had two pages' worth of text about ZFS, so that claim should be properly disproved.

    Originally posted by starshipeleven View Post
    ZFS took a ton of shortcuts to get a product out, and its current limitations aren't fixable because they stem from design choices. ZFS isn't fit for "any" usage scenario and never will be. It was cut down to fit only a specific use case, with specific assumptions.
    Weird how you find it an utterly negative thing in ZFS while systemd's design philosophy is pretty much the same, yet you are ready to go through fire and water for systemd. No need to reply, just an idle thought. It's already off-topic enough.
    Last edited by aht0; 10 December 2016, 09:07 AM.



    • #72
      Originally posted by aht0 View Post
      Sadly, you are the very definition of the term "fanboy". When you have zero arguments left, you always resort to plain bullying, insults, or sneering.
      There's a huge difference between me and trolls like birdie (who's not even a BSD fanboy but a Windows one; he just hates Linux). I always have arguments; otherwise I wouldn't be trying to prove others wrong. However, I sometimes get depressed when I realize how stupid some people are. You seem sane, so don't feel offended. Some examples:

      What was tested in this benchmark was CLIENT-side performance, not server-side. Phoronix:
      The same netperf/iperf3 server was used for the duration of the testing, with this article looking primarily at the client performance.
      The server was running Ubuntu Linux, so mentioning Netflix or saying FreeBSD performs better on servers is simply idiotic. Furthermore, benchmarking system defaults isn't very informative in this case. I bet none of the above Linux distributions ship good defaults for such a test (client-side performance).

      It'd be pretty stupid to turn on a firewall while benchmarking network performance. Do you want untainted results or not? Or maybe stream 4K movies as well while the benchmark is running, just to kill the time? If both systems have it disabled, then it's fine.

      And I'll remind you that BY DEFAULT, FreeBSD's kernel variables are in turn quite un-optimized. Not to mention the slightly less-optimized binaries compared to binaries compiled with GCC.
      FreeBSD is quite well optimized out of the box for network performance. However, there are dozens of different tunables in both operating systems.
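
      For illustration, a few of the socket-buffer tunables people commonly adjust on each OS. The sysctl names are real; the values below are placeholders only, not recommendations:

```
# FreeBSD /etc/sysctl.conf (example values only, not recommendations)
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216

# Linux /etc/sysctl.conf (example values only, not recommendations)
# the tcp_rmem/tcp_wmem triples are min/default/max buffer sizes
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_wmem=4096 65536 16777216
```

      Either set can be applied at runtime with sysctl and made persistent in /etc/sysctl.conf; whether any of it actually helps depends on the NIC, the RTT, and the workload.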

      By simply switching the kernel to lowlatency, results are much better in some of the tests:

      netperf0
      Intel Core i5-4200M testing with a LENOVO Durian 7A1 and Intel HD 4600 on Ubuntu 16.04 via the Phoronix Test Suite.


      generic:

      Processor: Intel Core i5-4200M @ 3.10GHz (4 Cores), Motherboard: LENOVO Durian 7A1, Chipset: Intel Xeon E3-1200 v3/4th, Memory: 4096MB, Disk: 1000GB Seagate ST1000LM014-SSHD, Graphics: NVIDIA GeForce GT 745M 2048MB (135/405MHz), Audio: Intel Xeon E3-1200 v3/4th, Network: Qualcomm Atheros QCA8171 Gigabit + Intel Wireless 7260

      OS: Ubuntu 16.04, Kernel: 4.4.0-38-generic (x86_64), Desktop: KDE Frameworks 5, Display Server: X Server 1.18.3, Display Driver: NVIDIA 361.42, Compiler: GCC 5.4.0 20160609, File-System: ext4, Screen Resolution: 1600x900

      lowlatency:

      Processor: Intel Core i5-4200M @ 3.10GHz (4 Cores), Motherboard: LENOVO Durian 7A1, Chipset: Intel Xeon E3-1200 v3/4th, Memory: 4096MB, Disk: 1000GB Seagate ST1000LM014-SSHD, Graphics: Intel HD 4600 (1150MHz), Audio: Intel Xeon E3-1200 v3/4th, Network: Qualcomm Atheros QCA8171 Gigabit + Intel Wireless 7260

      OS: Ubuntu 16.04, Kernel: 4.4.0-38-lowlatency (x86_64), Desktop: KDE Frameworks 5, Display Server: X Server 1.18.3, Display Driver: intel 2.99.917, Compiler: GCC 5.4.0 20160609, File-System: ext4, Screen Resolution: 1600x900

      Netperf 2.7.0
      Server: 169.254.122.150 - Test: TCP Stream - Server To Client - Duration: 10 Seconds
      Megabits/sec Throughput
      generic ...... 932.76 |============================================================
      lowlatency ... 932.77 |============================================================

      Netperf 2.7.0
      Server: 169.254.122.150 - Test: TCP Stream - Client To Server - Duration: 10 Seconds
      Megabits/sec Throughput
      generic ...... 664.46 |============================================================
      lowlatency ... 663.40 |============================================================

      Netperf 2.7.0
      Server: 169.254.122.150 - Test: TCP Request Response - Duration: 10 Seconds
      Transaction Rate Per Second
      generic ...... 572.88 |========================================
      lowlatency ... 862.21 |============================================================

      Netperf 2.7.0
      Server: 169.254.122.150 - Test: UDP Request Response - Duration: 10 Seconds
      Transaction Rate Per Second
      generic ...... 836.44 |==========================================================
      lowlatency ... 872.82 |============================================================


      I'm not sure whether those results are comparable to the Phoronix ones, but it seems strange that my mid-range laptop performs better.
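
      For intuition: the request/response tests are latency-bound rather than bandwidth-bound, which is consistent with the lowlatency kernel lifting only the TCP request/response number above while the stream throughput stays flat. A minimal Python sketch of the same transaction pattern over loopback (an illustration of what one netperf "transaction" is, not netperf itself):

```python
import socket
import threading
import time

def echo_server(listener):
    # Server side of a TCP_RR-style test: echo each 1-byte request back.
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(1)
            if not data:
                break
            conn.sendall(data)

def tcp_rr(duration=1.0):
    # Count 1-byte request/response round trips per second over loopback,
    # mimicking what netperf's TCP_RR test measures. Each transaction is a
    # full round trip, so the rate is bounded by latency, not bandwidth.
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)
    threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

    client = socket.create_connection(listener.getsockname())
    client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    transactions = 0
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        client.sendall(b"x")   # request
        client.recv(1)         # wait for the response before the next request
        transactions += 1

    client.close()
    listener.close()
    return transactions / duration

if __name__ == "__main__":
    print(f"{tcp_rr(0.5):.0f} transactions/sec over loopback")
```

      Because every transaction waits for the previous reply, anything that shaves scheduling or wakeup latency (like a lowlatency/preemptible kernel) raises the rate, while raw link bandwidth barely matters.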



      • #73
        Originally posted by Pawlerson View Post
        I'm not sure whether those results are comparable to the Phoronix ones, but it seems strange that my mid-range laptop performs better.
        Some Linux driver issue with the Intel NIC? You seem to have markedly better results, but you also have a different NIC in your machine (Atheros gigabit).



        • #74
          Originally posted by aht0 View Post
          Btw, my original point was to invalidate your claim about new features not being added to ZFS. By now we have had two pages' worth of text about ZFS, so that claim should be properly disproved.
          As I said, I didn't explain my point well in that claim. I hope that is cleared up now.

          Weird how you find it an utterly negative thing in ZFS while systemd's design philosophy is pretty much the same, yet you are ready to go through fire and water for systemd. No need to reply, just an idle thought. It's already off-topic enough.
          1. I don't find it "utterly negative" in ZFS. I'm just pointing out why ZFS took less time to develop than btrfs. ZFS is fine within its specific use case.
          2. systemd's design philosophy is "concentrate all features in a well-tested binary init so that people on servers can call even very complex functions without needing to rewrite hundreds of lines of scripts every time, while also doing a ton of things scripts can't do, period". I don't see how that can be compared with ZFS, which is a filesystem doing a completely different job.

          Good trolling, btw.



          • #75
            Originally posted by starshipeleven View Post
            I don't see how that can be compared with ZFS, which is a filesystem doing a completely different job.
            Fits only a specific use case, with specific assumptions. A different job was not my point.

            Originally posted by starshipeleven View Post
            Good trolling, btw.
            Call it trolling then. You were pressing on two weaknesses in ZFS while utterly ignoring weaknesses in BTRFS; one could call that trolling too. Fact remains, one is usable and reliable, and has been so for a fairly long time, while the other is like a time-bomb ticking in your machines unless you apply "if"s and "but"s and take precautions. For me, reliability wins over features.



            • #76
              It could be interesting to have benchmarks at 10 or 40 Gbit, and also to benchmark with different packet sizes.

              XDP has been included in kernel 4.8 and some NIC drivers; it should improve packet rates about 5x, to around 20 Mpps per core.

              Introduction to XDP: XDP, or eXpress Data Path, provides a high-performance, programmable network data path in the Linux kernel as part of the IO Visor Project. XDP provides bare metal...



              • #77
                Originally posted by spirit View Post
                It could be interesting to have benchmarks at 10 or 40 Gbit, and also to benchmark with different packet sizes.

                XDP has been included in kernel 4.8 and some NIC drivers; it should improve packet rates about 5x, to around 20 Mpps per core.

                Introduction to XDP: XDP, or eXpress Data Path, provides a high-performance, programmable network data path in the Linux kernel as part of the IO Visor Project. XDP provides bare metal...
                if only I had 10/40 Gbit equipment...
                Michael Larabel
                https://www.michaellarabel.com/



                • #78
                  Originally posted by aht0 View Post
                  Fits only a specific use case, with specific assumptions.
                  No, systemd doesn't fit only a specific use case. It can deal with everything other init systems deal with, and with many more situations they couldn't.

                  Call it trolling then
                  No, I'm calling the "let's pull systemd in too" maneuver trolling. We went off-topic with filesystems, so why not pull in systemd too, right?

                  You were pressing on two weaknesses in ZFS while utterly ignoring weaknesses in BTRFS.
                  No, I only used them to show why btrfs (which aims to avoid ZFS's weaknesses) is taking longer to develop, because you seem to think btrfs developers are amateurs or idiots just because it is taking longer than ZFS did.

                  Fact remains, one is usable and reliable, and has been so for a fairly long time, while the other is like a time-bomb ticking in your machines unless you apply "if"s and "but"s and take precautions.
                  And I just explained why.
                  Btrfs aims to be better than ZFS; being better than ZFS is a pain in the ass and requires significantly more work from a ton of good developers. You should take this as a compliment to ZFS, but no, you assume they are all stupid and take any explanation as an attack on ZFS.

                  For me, reliability wins over features.
                  For me, it's not set in stone like that. I use whatever is best now (Linux + btrfs where RAID1 is OK, FreeNAS + ZFS where I need RAID5/6), but I don't piss on projects that aim to build something better for the future just because.



                  • #79
                    Originally posted by starshipeleven View Post
                    No, systemd doesn't fit only a specific use case. It can deal with everything other init systems deal with, and with many more situations they couldn't.

                    No, I'm calling the "let's pull systemd in too" maneuver trolling. We went off-topic with filesystems, so why not pull in systemd too, right?

                    No, I only used them to show why btrfs (which aims to avoid ZFS's weaknesses) is taking longer to develop, because you seem to think btrfs developers are amateurs or idiots just because it is taking longer than ZFS did.

                    And I just explained why.
                    Btrfs aims to be better than ZFS; being better than ZFS is a pain in the ass and requires significantly more work from a ton of good developers. You should take this as a compliment to ZFS, but no, you assume they are all stupid and take any explanation as an attack on ZFS.

                    For me, it's not set in stone like that. I use whatever is best now (Linux + btrfs where RAID1 is OK, FreeNAS + ZFS where I need RAID5/6), but I don't piss on projects that aim to build something better for the future just because.
                    Specific use cases. Go outside them and it's inflexible.

                    I did tell you specifically not to reply. You could not resist.

                    Aims. A cat may chase a bunny and also aim to get it. I checked the mailing list, and it seems they are at least putting more or less serious effort into patching the existing bugs now.



                    • #80
                      Originally posted by aht0 View Post
                      Specific use cases. Go outside them and it's inflexible.
                      Bullshit, you can swap sysvinit for systemd and it will work 100% the same with init scripts. You can even run OpenRC on top of systemd (acting as sysvinit), just as OpenRC runs on top of sysvinit.
                      It's kind of absurd, as you'd only be using systemd's sysvinit compatibility mode and not the features that make up 99% of its code, but you can do it.

                      Now tell me how that's "inflexible" again.

                      I did tell you specifically not to reply. You could not resist.
                      No, it was a choice. In this specific case I chose to reply; there aren't juicier trolls to bash in the forums I follow.

                      Aims. A cat may chase a bunny and also aim to get it.
                      And that isn't the case for btrfs, as I explained.

                      I checked the mailing list, and it seems they are at least putting more or less serious effort into patching the existing bugs now.
                      Thanks, your opinion matters.

