Linux 5.0 HDD I/O Scheduler Benchmarks - BFQ Takes The Cake


  • #21
    Originally posted by sa666666 View Post
    Can anyone tell me, with some certainty, which is the best scheduler to use for a development machine with either a SATA SSD or an NVMe SSD? It seems to change every time a new set of benchmarks comes out. Right now I just set it to 'none'. Is there something better?
    No one can say with certainty because, as you note, the benchmark results keep changing. Different people also use their software in different ways.

    I would use "none" unless you have problems with very high disk IO usage. Then I would probably use Kyber on an SSD. But that's just me.

    It is pretty hard to imagine a workload on a personal machine that would cause significant problems for an NVMe drive. Maybe you have to run some kind of test series that involves copying gigabytes of files or creating large test databases. If so, I hope you are running a pro-grade NVMe drive (like a Samsung 970 Pro or an Intel datacenter drive) and not one of the new Intel QLC drives. You'd kill a consumer-grade drive pretty quickly if it has to commit 20+ GB to disk for every test run.
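
    If you want to experiment, switching schedulers at runtime is cheap and non-destructive. A quick sketch, assuming the drive shows up as nvme0n1 (adjust to match your device):
    Code:
    # list the schedulers this kernel offers; the active one is in brackets
    cat /sys/block/nvme0n1/queue/scheduler
    # switch to kyber for this boot only (not persistent across reboots)
    echo kyber > /sys/block/nvme0n1/queue/scheduler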



    • #22
      You used very old drive models (I guess no new spindles were purchased).

      However, thank you for doing this benchmark.

      The reason I am here and looked at the benchmarks is that I recently upgraded a server that was using the previous default of CFQ, and noticed it only had mq-deadline and none as options post-upgrade, with mq-deadline being the default. The reason I even checked the scheduler is that I observed io-wait had skyrocketed.

      The server in question hosts game mod files varying in size from a few megabytes up to multiple gigabytes. A lot of the data resides in the file cache, but about 20% of it has to be read from disk, as there is not enough RAM in the VM.

      The server is also slightly less responsive to commands in the CLI.

      So, interestingly, as has been pointed out, the Ubuntu page seems to indicate BFQ is only useful for desktops. Devuan seems to relegate it to the point that you have to manually load the module; it's not even listed as an available scheduler on the distributed kernel. I am guessing there is a huge emphasis on NAND storage now, and perhaps barely any testing was done by the kernel devs?
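
      For what it's worth, checking whether the distributed kernel ships BFQ at all only takes a moment. A sketch, assuming an sd? style device name:
      Code:
      # does this kernel include bfq as a loadable module?
      modinfo bfq
      # if so, load it manually and it should appear as selectable
      modprobe bfq
      cat /sys/block/sda/queue/scheduler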



      • #23
        Originally posted by chrcoluk View Post
        So, interestingly, as has been pointed out, the Ubuntu page seems to indicate BFQ is only useful for desktops. Devuan seems to relegate it to the point that you have to manually load the module; it's not even listed as an available scheduler on the distributed kernel. I am guessing there is a huge emphasis on NAND storage now, and perhaps barely any testing was done by the kernel devs?
        I believe the difference in schedulers you see is because multiqueue (MQ) became the default in more recent kernels. CFQ was a single-queue scheduler only. I believe BFQ is its multiqueue spiritual successor.

        Here's the udev rule I use. If the module is available, it will autoload. You can also test it by hand with:
        Code:
        echo bfq > /sys/block/sda/queue/scheduler
        Code:
        # cat /etc/udev/rules.d/iosched.rules
        ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd?", ATTR{queue/scheduler}="bfq"
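
        To apply the rule without rebooting, reloading udev and re-triggering the block devices should do it (a sketch, untested here):
        Code:
        # reload the rules and re-run them against existing block devices
        udevadm control --reload-rules
        udevadm trigger --subsystem-match=block --action=change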



        • #24
          Yep, that looks handy, thank you.

          Devuan doesn't have that in place; it just loads mq-deadline and none, with mq-deadline picked by default.

          Code:
          # cat /etc/udev/rules.d/iosched.rules
          cat: /etc/udev/rules.d/iosched.rules: No such file or directory
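
          If the module ever ships, creating the rule file from the previous post should be all that's needed. A sketch:
          Code:
          # create the rule from the previous post, then reload udev
          echo 'ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd?", ATTR{queue/scheduler}="bfq"' > /etc/udev/rules.d/iosched.rules
          udevadm control --reload-rules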

