Linux 5.0 I/O Scheduler Benchmarks On Laptop & Desktop Hardware

  • #1

    Phoronix: Linux 5.0 I/O Scheduler Benchmarks On Laptop & Desktop Hardware

    Our past tests have shown that while most Linux distributions default to "none" for their I/O scheduler on NVMe solid-state storage, that isn't necessarily the best scheduler decision in all cases. Here are tests of the Linux 5.0 Git kernel on laptop and desktop hardware, evaluating the "none", mq-deadline, Kyber, and BFQ I/O scheduler options.
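
    For anyone who wants to check or change this on their own box, the active scheduler is exposed through sysfs. Here is a minimal Python sketch (the device name nvme0n1 is only a placeholder, and switching requires root):

    [CODE]
    # Sketch: inspect and switch a block device's I/O scheduler via sysfs.
    from pathlib import Path

    def scheduler_file(dev: str) -> Path:
        return Path(f"/sys/block/{dev}/queue/scheduler")

    def current_scheduler(dev: str) -> str:
        # The file lists all available schedulers with the active one
        # in brackets, e.g. "[none] mq-deadline kyber bfq".
        text = scheduler_file(dev).read_text()
        return text.split("[")[1].split("]")[0]

    def set_scheduler(dev: str, name: str) -> None:
        # Writing a scheduler's name activates it immediately (needs root).
        scheduler_file(dev).write_text(name)

    dev = "nvme0n1"  # placeholder device; substitute your own
    print(f"{dev}: {current_scheduler(dev)}")
    # set_scheduler(dev, "bfq")  # uncomment to switch
    [/CODE]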

  • #2
    Thanks Michael. Great job as usual; time to revisit my defaults.
    I wonder whether it's possible to probe the CPU utilisation of each scheduler to see how costly they are.
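
    One rough way to approximate it: sample kernel ("system") CPU time from /proc/stat around a fixed I/O workload and compare across schedulers. Note this captures all kernel time, not the scheduler in isolation, so treat differences as indicative only. A sketch (the fio invocation is just an example workload, not from the article):

    [CODE]
    # Sketch: measure kernel CPU time consumed while a fixed I/O workload
    # runs, as a rough proxy for scheduler cost. Re-run with different
    # schedulers selected and compare the numbers.
    import subprocess

    def system_jiffies() -> int:
        # First line of /proc/stat: "cpu user nice system idle iowait ..."
        fields = open("/proc/stat").readline().split()
        return int(fields[3])  # aggregate "system" time, in USER_HZ ticks

    def kernel_cost(cmd: list[str]) -> int:
        before = system_jiffies()
        subprocess.run(cmd, check=True)
        return system_jiffies() - before

    # Example workload (placeholder parameters):
    cost = kernel_cost(["fio", "--name=probe", "--rw=randread",
                        "--size=256m", "--filename=/tmp/probe.dat"])
    print(f"kernel CPU time during workload: {cost} jiffies")
    [/CODE]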

  • #3
    It should be noted that BFQ with low_latency=1 is actually the default; it's a bit misleading to label a non-standard configuration of BFQ as just "BFQ" and the default configuration as "BFQ low-latency".
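
    For reference, the tunable can be checked per device through sysfs; a small sketch (the iosched/ path only exists while BFQ is the selected scheduler on that device):

    [CODE]
    # Sketch: check BFQ's low_latency tunable for a device. The iosched/
    # directory is only present while BFQ is the active scheduler there.
    from pathlib import Path

    def bfq_low_latency(dev: str) -> bool:
        path = Path(f"/sys/block/{dev}/queue/iosched/low_latency")
        return path.read_text().strip() == "1"

    print(bfq_low_latency("sda"))  # expected True on a stock BFQ setup
    [/CODE]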

  • #4
    Was the laptop fitted with a SATA drive?

  • #5
    Could the tests be done on some spinning-rust drives? Since Linux 5.0 does away with the SQ schedulers, that would be a boon for those of us still using spinners.

  • #6
    Originally posted by DoMiNeLa10:
    Was the laptop fitted with a SATA drive?
    Both NVMe; Corsair Force MP500 for the desktop and Samsung PM961 for the laptop. Not relevant for me.

  • #7
    Originally posted by Rallos Zek:
    Could the tests be done on some spinning-rust drives? Since Linux 5.0 does away with the SQ schedulers, that would be a boon for those of us still using spinners.
    Yes please. Some of us still use spinners (data and backup drives in my case). 4.20 and newer defaults to "none" for both my SATA SSDs and HDDs, and I have to manually configure the proper schedulers. Right now I'm falling back to the SQ schedulers, since the new MQ ones have rather bad performance for me when my spinners get heavy use; and forget about my USB 3 drives, where performance is simply horrible.

  • #8
    Scheduler tests (I/O or preemptive) always give me anxiety: there's so much variance, and so many regressions, that you can't pick a best one; there is no best. So I keep it simple: "none" for solid-state disks and mq-deadline for spinning disks (sketched below). I also stay away from Btrfs, since for some reason it introduces latency and lag, contributing to audio skipping or mouse-cursor lag under background I/O.
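
    That policy is easy to script against sysfs. A sketch (run as root; a udev rule is the more conventional way to make it persistent):

    [CODE]
    # Sketch of the policy above: "none" for solid-state devices,
    # mq-deadline for rotational ones, applied to every queue under
    # /sys/block. Run as root; a udev rule is the persistent equivalent.
    from pathlib import Path

    POLICY = {True: "mq-deadline", False: "none"}  # rotational -> scheduler

    for dev in sorted(Path("/sys/block").iterdir()):
        rot = dev / "queue" / "rotational"
        sched = dev / "queue" / "scheduler"
        if not rot.exists() or not sched.exists():
            continue  # skip devices without a request queue
        target = POLICY[rot.read_text().strip() == "1"]
        if target in sched.read_text():  # only if the kernel offers it
            sched.write_text(target)
            print(f"{dev.name}: set {target}")
    [/CODE]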

  • #9
    53 seconds to start xterm? That surely has to be a bug!

  • #10
    Isn't the performance of the OS I/O schedulers heavily influenced by the particular firmware algorithms used in the SSD controller? And doesn't that make these benchmark measurements only relevant for the particular SSDs used in the testing?

    If so, wouldn't it be better to perform these benchmarks on a bare eMMC chip or an SD card, where there is no sophisticated firmware between the OS I/O scheduler and the flash memory?
