Linux 5.0 HDD I/O Scheduler Benchmarks - BFQ Takes The Cake


  • chrcoluk
    replied
    Yep, that looks handy, thank you.

    Devuan doesn't have that in place; it just loads up mq-deadline and none, with mq-deadline picked by default.

    Code:
    # cat /etc/udev/rules.d/iosched.rules
    cat: /etc/udev/rules.d/iosched.rules: No such file or directory



  • Zan Lynx
    replied
    Originally posted by chrcoluk View Post
    So, interestingly, as has been pointed out, the Ubuntu page seems to indicate BFQ is only useful for desktops, while Devuan relegates it to the point that you have to load the module manually; it's not even listed as an available scheduler on the distributed kernel. I am guessing there is a huge emphasis on NAND storage now, and perhaps barely any HDD testing was done by the kernel devs?
    I believe the difference in schedulers you see is because multiqueue (MQ) became the default in more recent kernels. CFQ was a single-queue scheduler only. I believe BFQ is its multiqueue spiritual successor.

    Here's the udev rule I use. If the module is available, it will autoload. You can also test it by doing:
    Code:
    echo bfq > /sys/block/sda/queue/scheduler
    Code:
    # cat /etc/udev/rules.d/iosched.rules
    ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd?", ATTR{queue/scheduler}="bfq"
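    Once the file is in place, something like this should pick it up without a reboot and let you confirm the result (a rough sketch; sda is just an example device):
    Code:
    sudo udevadm control --reload-rules
    sudo udevadm trigger --subsystem-match=block --action=change
    cat /sys/block/sda/queue/scheduler
    The active scheduler is the one shown in brackets, e.g. "mq-deadline kyber [bfq] none"; the exact list depends on your kernel.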



  • chrcoluk
    replied
    You used very old drive models (I guess no new spindles purchased).

    However thank you for doing this benchmark.

    The reason I am here, and looked at the benchmarks, is that I recently upgraded a server which was using the previous default of CFQ, and noticed that post-upgrade it only had mq-deadline and none as options, with mq-deadline being the default. The reason I even checked the scheduler is that I observed io-wait had skyrocketed.
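    For what it's worth, this is roughly how the symptom showed up (a sketch; sda is just whatever your data disk is called, and iostat comes from the sysstat package):
    Code:
    cat /sys/block/sda/queue/scheduler
    iostat -x 5
    vmstat 5
    The scheduler file only listed mq-deadline and none, and the %iowait / wa columns sat far higher than before the upgrade.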

    The server in question hosts game mod files varying in size from a few megabytes up to multiple gigabytes; a lot of the data resides in the file cache, but about 20% of it has to be read from disk, as there isn't enough RAM in the VM.

    Also the server is slightly less responsive to commands in the CLI.

    So, interestingly, as has been pointed out, the Ubuntu page seems to indicate BFQ is only useful for desktops, while Devuan relegates it to the point that you have to load the module manually; it's not even listed as an available scheduler on the distributed kernel. I am guessing there is a huge emphasis on NAND storage now, and perhaps barely any HDD testing was done by the kernel devs?



  • Zan Lynx
    replied
    Originally posted by sa666666 View Post
    Can anyone tell me, with some certainty, which is the best scheduler to use for a development machine with either a SATA SSD or an NVMe SSD? It seems to change every time a new set of benchmarks comes out. Right now I just set it to 'none'. Is there something better?
    No one can say with certainty because, as you say, the benchmarks keep changing. Also, different people use software in different ways.

    I would use "none" unless you have problems with very high disk IO usage. Then I would probably use Kyber on an SSD. But that's just me.
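    If you want to try Kyber, switching at runtime is harmless and easy to undo; roughly (nvme0n1 is just an example device name):
    Code:
    cat /sys/block/nvme0n1/queue/scheduler
    echo kyber | sudo tee /sys/block/nvme0n1/queue/scheduler
    Switch back to "none" the same way if you don't notice a difference; the setting doesn't survive a reboot unless you persist it with a udev rule.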

    It is pretty hard to imagine a workload on a personal machine that would cause significant problems for an NVMe drive. Maybe you have to run some kind of test series that involves copying gigabytes of files or creating large test databases. If so, I hope you are running a pro-grade NVMe drive (like a Samsung 970 Pro or an Intel datacenter drive) and not one of those new Intel QLC drives. You'd kill a consumer-grade drive pretty quickly if it has to commit 20+ GB to disk for every test run.



  • sa666666
    replied
    Can anyone tell me, with some certainty, which is the best scheduler to use for a development machine with either a SATA SSD or an NVMe SSD? It seems to change every time a new set of benchmarks comes out. Right now I just set it to 'none'. Is there something better?



  • M@yeulC
    replied
    Originally posted by nuetzel View Post

    Sorry, but who uses swap these days (maybe apart from _big_ iron)? Ten years ago, or even around 1993 when I started using and _developing_ Linux, sure, but today? Use SSDs as much as you can. And starting with Linux kernel 2.4 (devel), then 2.5/2.6, we got GREAT preemption work which made desktops mostly smooth.
    I think that everyone should use swap, even if only compressed memory. In my experience, performance under memory pressure is even worse if there is none (I believe file-backed pages such as program code get evicted and have to be re-read from disk instead). And with a well-sized computer you might use ~50% of the RAM on average, but there's always the outlier task, or the odd program with a memory leak, and that shouldn't bring your system down.
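    For the compressed-memory case, a zram swap device is quick to set up for testing; a rough sketch with util-linux's zramctl (the 4G size and the priority are arbitrary, and /dev/zram0 assumes that's the device --find picked):
    Code:
    sudo modprobe zram
    sudo zramctl --find --size 4G
    sudo mkswap /dev/zram0
    sudo swapon --priority 100 /dev/zram0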

    Unfortunately, as much as I would like to use SSDs, they are still impractical in many cases. I don't use one in my main computer, but that is also to keep me aware of the shortcomings of the software I use. Many embedded platforms cannot afford the luxury of SSDs, or a lot of memory, so I have had some swapping issues there too. And lastly, that doesn't help with my live USB stick example. That said, I am planning to upgrade to one at some point, just like I did with my memory (8 -> 16 GB when prices started to decrease, yet I still need some swap from time to time).

    Please don't assume that just because you haven't experienced the need for it, it isn't needed. It is, and I have found a couple of issues with the current implementation. I have also experienced the same problem on multiple servers, as well as company- and university-issued computers, friends' laptops, and multiple desktops. I don't think it is just me.

    But really, I think these two issues are interlinked. It is as if, when writing/reading, big buffers are allocated and fill up all the available memory, so that when I/O speed is low there is no more memory left to read data from disk into, because the buffers have to be written back first. There might also be an issue with timestamps and cache eviction: data has to wait a fair bit before being put into memory, which might make it more likely to be evicted. I'd have to dig into the code to find answers; this is just a possible explanation I came up with based on the symptoms.

    Unfortunately, it might be quite difficult to come up with a nice benchmark. I'll see if I can come up with one in a VM where I throttle I/O and memory.
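    As a starting point, cgroups can probably fake similar conditions without a full VM; a rough sketch, assuming a cgroup v2 system with systemd (the device, the limits and the fio job are all placeholders for whatever workload reproduces the problem):
    Code:
    systemd-run --scope -p MemoryMax=512M \
        -p "IOReadBandwidthMax=/dev/sda 10M" \
        -p "IOWriteBandwidthMax=/dev/sda 10M" \
        fio --name=pressure --filename=/var/tmp/testfile --size=2G --rw=randread --bs=4k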



  • Zan Lynx
    replied
    Originally posted by braulio_holtz View Post
    How could I test BFQ low latency? "sudo echo bfq > /sys/block/sda/queue/scheduler" is not working for me.
    That command will never work, because sudo runs "echo" as root but the redirection into the scheduler file is done by your own shell, under your own user ID, which doesn't have permission.

    What you want is something like "echo bfq | sudo tee /sys/block/sda/queue/scheduler"
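    In copy-pasteable form (sda is just an example; check the file afterwards, the active scheduler is shown in brackets):
    Code:
    echo bfq | sudo tee /sys/block/sda/queue/scheduler
    sudo sh -c 'echo bfq > /sys/block/sda/queue/scheduler'
    cat /sys/block/sda/queue/scheduler
    Either of the first two lines does the job; bfq will only be accepted once the module is actually available (see the replies below about enabling it).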



  • nuetzel
    replied
    Originally posted by starshipeleven View Post
    You need to enable the blk-mq schedulers first. See this: https://unix.stackexchange.com/quest...heduler/376136
    And make sure you have the right module (bfq: CONFIG_IOSCHED_BFQ=m / CONFIG_BFQ_GROUP_IOSCHED=y) available or compiled in.
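    A quick way to check, roughly (the config file path varies by distro; sda is just an example device):
    Code:
    grep -i bfq /boot/config-$(uname -r)
    sudo modprobe bfq
    cat /sys/block/sda/queue/scheduler
    If the config says =m, modprobe should load it, and bfq will then show up in the scheduler list.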



  • starshipeleven
    replied
    Originally posted by braulio_holtz View Post
    How could I test BFQ low latency? "sudo echo bfq > /sys/block/sda/queue/scheduler" is not working for me.
    You need to enable the blk-mq schedulers first. See this: https://unix.stackexchange.com/quest...heduler/376136
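    If I remember the linked answer right, on pre-5.0 kernels this boils down to booting with the multiqueue SCSI path enabled, roughly (Debian/Devuan-style GRUB shown; from 5.0 onward the legacy path is gone, so this is no longer needed):
    Code:
    # /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet scsi_mod.use_blk_mq=1"
    Then run "sudo update-grub" and reboot.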



  • braulio_holtz
    replied
    How could I test BFQ low latency? "sudo echo bfq > /sys/block/sda/queue/scheduler" is not working for me.

