Linux 5.6 I/O Scheduler Benchmarks: None, Kyber, BFQ, MQ-Deadline

Arch recommends using none with NVMe, deadline with SATA SSDs, and BFQ with hard drives. It would be interesting to see whether this is optimal performance-wise.
-
Originally posted by thelongdivider:
Arch recommends using none with NVMe, deadline with SATA SSDs, and BFQ with hard drives. It would be interesting to see whether this is optimal performance-wise.
https://wiki.archlinux.org/index.php..._I/O_scheduler
I'm a loyal CentOS/Fedora user, but the Arch documentation is really amazing!
- Likes 3
-
Originally posted by thelongdivider:
Arch recommends using none with NVMe, deadline with SATA SSDs, and BFQ with hard drives. It would be interesting to see whether this is optimal performance-wise.
https://wiki.archlinux.org/index.php..._I/O_scheduler
There is no flat recommendation. When using a hard disk with ext4 as your workstation drive, BFQ low-latency might be the best choice. Hard disks used for bulk storage with ext4 might prefer the single-queue CFQ. A SATA SSD with XFS might prefer none, but perhaps you might still want BFQ low-latency. For any flash storage I use none, because it has no overhead and always works at least well enough, but you might have an entirely different philosophy.
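For anyone who wants to experiment per device: the current and available schedulers are exposed through sysfs. A minimal sketch in Python (needs root to switch; `sda` is just an example device name, and the change is not persistent across reboots):

```python
from pathlib import Path

def schedulers(dev: str) -> str:
    """Return the scheduler line for a block device, e.g.
    'mq-deadline kyber bfq [none]' (brackets mark the active one)."""
    return Path(f"/sys/block/{dev}/queue/scheduler").read_text().strip()

def set_scheduler(dev: str, name: str) -> None:
    """Activate a scheduler by writing its name to sysfs (requires root)."""
    Path(f"/sys/block/{dev}/queue/scheduler").write_text(name)

print(schedulers("sda"))      # example device name
set_scheduler("sda", "bfq")   # takes effect immediately
print(schedulers("sda"))
```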
- Likes 2
-
With faster storage, the time spent on I/O scheduling becomes a larger percentage of the time spent on the whole operation. In that sense, it doesn't really surprise me that none is doing better. I'm sure there is a point where the benefit of scheduling goes away: as random reads and writes get faster and latency drops, the cost of scheduling becomes worse than just doing the operations in order.
-
Originally posted by polarathene:
Does NVMe not suffer responsiveness issues with `none` like SATA disks? This benchmark shows it's great for performance, but has no indication of the impact on responsiveness under load when trying to do anything else via GUI.
- Likes 1
-
Originally posted by J.G.:
Note that Arch as a project does not recommend this; they tend to go with upstream defaults whenever possible, so they "recommend" what Linux does. This is an Archwiki contributor showing examples of how to set udev rules for different block devices.
There is no flat recommendation. When using a hard disk with ext4 as your workstation drive, BFQ low-latency might be the best choice. Hard disks used for bulk storage with ext4 might prefer the single-queue CFQ. A SATA SSD with XFS might prefer none, but perhaps you might still want BFQ low-latency. For any flash storage I use none, because it has no overhead and always works at least well enough, but you might have an entirely different philosophy.
Anyways, all I was saying is that none is the best choice for NVMe, and that I'd like to see benchmarks on other media to validate the wiki.
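For reference, the udev approach mentioned above boils down to dropping a rules file that matches on the device type. A sketch that writes such a file, modeled on the wiki's examples (the file name and the exact rules are illustrative; check the Archwiki page for the current recommendations):

```python
from pathlib import Path

# Illustrative rules in the spirit of the Archwiki examples:
# none for NVMe, mq-deadline for non-rotational SATA/SAS, bfq for spinning disks.
RULES = """\
ACTION=="add|change", KERNEL=="nvme[0-9]*", ATTR{queue/scheduler}="none"
ACTION=="add|change", KERNEL=="sd[a-z]*", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"
ACTION=="add|change", KERNEL=="sd[a-z]*", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
"""

# Example path; any file under /etc/udev/rules.d/ works (requires root).
Path("/etc/udev/rules.d/60-ioschedulers.rules").write_text(RULES)
```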
-
Originally posted by piotrj3:
Actually, you would have to construct a really extreme test case to see such a problem. The NVMe specification supports up to 65535 queues and 65535 commands per queue; AHCI supports only one queue. Now good luck making a test case where NVMe has a responsiveness problem, especially since synchronization locks do not happen on NVMe.
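Out of curiosity, you can check how many hardware queues a device actually got (as opposed to the spec maximum): blk-mq exposes one directory per hardware context under sysfs. A small sketch (`nvme0n1` is an example device name; the `mq` directory layout is my assumption of how blk-mq presents it):

```python
from pathlib import Path

def hw_queue_count(dev: str) -> int:
    """Count blk-mq hardware contexts: /sys/block/<dev>/mq/ holds one
    numbered subdirectory per hardware queue."""
    return sum(1 for p in Path(f"/sys/block/{dev}/mq").iterdir() if p.is_dir())

# NVMe drivers typically allocate roughly one queue per CPU,
# far below the specification's 65535 maximum.
print(hw_queue_count("nvme0n1"))
```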
-
Just finished testing on my system, out of curiosity. On my Haswell 4700MQ-based laptop with a SATA SSD, it turns out that BFQ gives the best performance, followed closely by none, with mq-deadline a little further behind. This is just from timing ordinary tasks and repeating those tasks exactly, via scripting, for each scheduler.
BFQ also keeps the user interface more responsive, but this is not really measurable; it is mostly a personal impression.
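That kind of loop is easy to reproduce. A rough sketch of the methodology (run as root; the device name and the workload command are placeholders, and a serious test would also drop the page cache between runs):

```python
import subprocess
import time
from pathlib import Path

DEV = "sda"                                # example device name
WORKLOAD = ["tar", "xf", "linux.tar.xz"]   # placeholder task to time

sched_file = Path(f"/sys/block/{DEV}/queue/scheduler")
# The file reads like 'mq-deadline kyber bfq [none]'; strip the brackets.
available = sched_file.read_text().replace("[", "").replace("]", "").split()

for sched in available:
    sched_file.write_text(sched)          # switch scheduler (requires root)
    subprocess.run(["sync"], check=True)  # flush dirty pages between runs
    start = time.perf_counter()
    subprocess.run(WORKLOAD, check=True)
    print(f"{sched}: {time.perf_counter() - start:.2f}s")
```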
- Likes 1
-
Originally posted by Spam:
Yes, that would be interesting. HDDs still offer far more economical bulk storage.
- Likes 1