Linux 5.6 I/O Scheduler Benchmarks: None, Kyber, BFQ, MQ-Deadline


  • #11
    Arch recommends using none with NVMe, deadline with SATA SSDs, and BFQ with hard drives. It would be interesting to see whether this is optimal performance-wise.

    Comment


    • #12
      Originally posted by thelongdivider View Post
      Arch recommends using none with NVMe, deadline with SATA SSDs, and BFQ with hard drives. It would be interesting to see whether this is optimal performance-wise.

      https://wiki.archlinux.org/index.php..._I/O_scheduler
      nice, thanks!

      I'm a loyal CentOS / Fedora user but Arch documentation is really amazing!

      Comment


      • #13
        Originally posted by thelongdivider View Post
        Arch recommends using none with NVMe, deadline with SATA SSDs, and BFQ with hard drives. It would be interesting to see whether this is optimal performance-wise.

        https://wiki.archlinux.org/index.php..._I/O_scheduler
        Note that Arch as a project does not recommend this; they tend to go with upstream defaults whenever possible, so they "recommend" whatever Linux does. This is an Archwiki contributor showing examples of how to set udev rules for different block devices.

        There is no flat recommendation. If you use a hard disk with ext4 as your workstation drive, BFQ with its low_latency mode might be the best choice. Hard disks used for storage with ext4 might prefer the old single-queue CFQ. A SATA SSD with XFS might prefer none, though you may still want BFQ low latency. For any flash storage I use none because it has no overhead and always works at least well enough, but you might have an entirely different philosophy.
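
        For reference, the active scheduler is exposed through sysfs, so checking or switching it at runtime is easy. A minimal, untested Python sketch (the device name "sda" and the scheduler choice are just examples; writing requires root):

```python
#!/usr/bin/env python3
# Minimal sketch: list the available and active I/O scheduler for each block
# device, and optionally switch one at runtime via sysfs (needs root).
# The device name "sda" below is only an example.
from pathlib import Path

def schedulers(dev: str) -> str:
    # /sys/block/<dev>/queue/scheduler lists all schedulers; the active one
    # is shown in square brackets, e.g. "[none] mq-deadline kyber bfq".
    return Path(f"/sys/block/{dev}/queue/scheduler").read_text().strip()

def set_scheduler(dev: str, name: str) -> None:
    # Writing the scheduler name selects it until the next boot; a udev rule
    # is needed to make the choice persistent.
    Path(f"/sys/block/{dev}/queue/scheduler").write_text(name)

if __name__ == "__main__":
    for dev in sorted(p.name for p in Path("/sys/block").iterdir()):
        print(f"{dev}: {schedulers(dev)}")
    # Example (as root): set_scheduler("sda", "bfq")
```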

        Comment


        • #14
          With faster storage, the time spent on I/O scheduling becomes a larger percentage of the time spent on the operation itself. In that sense, it doesn't really surprise me that none is doing better. As random reads and writes get faster and their latency drops, there is presumably a point where the benefit of scheduling goes away and its cost becomes worse than just doing the operations in order.

          Comment


          • #15
            Originally posted by polarathene View Post

            Does NVMe not suffer responsiveness issues with `none` like SATA disks? This benchmark shows it's great for performance, but has no indication of the impact on responsiveness under load when trying to do anything else via GUI.
            Actually, you would have to construct a really extreme test case to see such a problem. The NVMe specification supports up to 65535 queues with 65535 commands per queue, while AHCI supports only a single queue. Good luck building a test case where NVMe runs into a responsiveness problem, especially since synchronization lock contention does not happen on NVMe.
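
            As a rough illustration (assuming the per-hardware-context directories that blk-mq exposes under /sys/block/<dev>/mq/), you can count how many submission queues the kernel actually set up for a device. The device name below is just an example, and the real count depends on the controller and CPU count, not on the spec maximum:

```python
#!/usr/bin/env python3
# Rough sketch: count the hardware submission queues the kernel allocated
# for a blk-mq device. Each hardware context gets its own directory under
# /sys/block/<dev>/mq/. "nvme0n1" is only an example device name.
from pathlib import Path

def hw_queue_count(dev: str) -> int:
    mq = Path(f"/sys/block/{dev}/mq")
    return sum(1 for p in mq.iterdir() if p.is_dir())

if __name__ == "__main__":
    dev = "nvme0n1"
    print(f"{dev}: {hw_queue_count(dev)} hardware queue(s)")
```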

            Comment


            • #16
              Originally posted by J.G. View Post

              Note that Arch as a project does not recommend this; they tend to go with upstream defaults whenever possible, so they "recommend" whatever Linux does. This is an Archwiki contributor showing examples of how to set udev rules for different block devices.

              There is no flat recommendation. If you use a hard disk with ext4 as your workstation drive, BFQ with its low_latency mode might be the best choice. Hard disks used for storage with ext4 might prefer the old single-queue CFQ. A SATA SSD with XFS might prefer none, though you may still want BFQ low latency. For any flash storage I use none because it has no overhead and always works at least well enough, but you might have an entirely different philosophy.
              Indeed. It is on the wiki page anyway. I have used that setup for years, as it makes a lot of sense: NVMe has an inherent multi-queue structure, so scheduling is only overhead unless you have an edge use case. SATA is much more limited, so deadline can improve performance (as in much older benchmarks against CFQ). BFQ makes sense on hard drives, whose native latency is spectacularly poor.

              Anyway, all I was saying is that none is the best choice for NVMe, and that I'd like to see benchmarks on other media to validate the wiki.
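
              For what it's worth, a persistent version of that mapping is just a udev rules file. Here is a hedged Python sketch that writes one, modelled on the Archwiki-style examples; the rules path and the exact match patterns are assumptions, so adjust them before using anything like this:

```python
#!/usr/bin/env python3
# Sketch: generate a per-device-type scheduler policy as a udev rules file,
# modelled on the kind of examples shown on the Archwiki. The rules file
# path and match patterns below are assumptions, not a recommendation.
RULES = """\
# NVMe: skip scheduling entirely
ACTION=="add|change", KERNEL=="nvme[0-9]*", ATTR{queue/scheduler}="none"
# SATA/SAS SSDs (non-rotational): mq-deadline
ACTION=="add|change", KERNEL=="sd[a-z]*", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"
# Spinning disks: BFQ
ACTION=="add|change", KERNEL=="sd[a-z]*", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
"""

if __name__ == "__main__":
    path = "/etc/udev/rules.d/60-ioschedulers.rules"  # example location
    with open(path, "w") as f:
        f.write(RULES)
    print(f"wrote {path}; reload udev rules or reboot to apply")
```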

              Comment


              • #17
                Originally posted by piotrj3 View Post

                Actually, you would have to make really extreme test case to see such problem. NVMe Up to specification supports 65535 queues and 65535 orders per queue. AHCI supports only one queue. Now good luck making a test case where all that NVMe has responsive problem especially that synchronization locks do not happen on NVMe.
                Why would you have to max out the queues? Saturating throughput is easy enough, and enough random I/O can probably do it too. I'm not talking about the responsiveness of the disk itself, btw, but about whether the I/O activity affects GUI usage or multi-tasking. If I'm mistaken and it really only comes down to a lack of I/O queues, all good.

                Comment


                • #18
                  Just finished testing on my system, out of curiosity. On my Haswell 4700MQ-based laptop with a SATA SSD, it turns out that BFQ gives the best performance, followed closely by none, with mq-deadline a little further behind. This is just from timing ordinary tasks and repeating the exact same tasks via scripting for each scheduler.
                  BFQ also keeps the user interface more responsive, but that is not really measurable; it's mostly a personal impression.
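
                  For anyone who wants to repeat this, a rough Python sketch of the approach; the device, scheduler list and workload command are placeholders, and switching schedulers requires root:

```python
#!/usr/bin/env python3
# Rough sketch of "time the same scripted task under each scheduler".
# DEV, SCHEDULERS and WORKLOAD are placeholders; run as root.
import subprocess
import time
from pathlib import Path

DEV = "sda"                                   # example device
SCHEDULERS = ["none", "mq-deadline", "bfq"]   # whatever the kernel offers
WORKLOAD = ["tar", "-cf", "/tmp/test.tar", "/usr/share"]  # placeholder task

def set_scheduler(name: str) -> None:
    Path(f"/sys/block/{DEV}/queue/scheduler").write_text(name)

for sched in SCHEDULERS:
    set_scheduler(sched)
    subprocess.run(["sync"], check=True)                   # flush dirty pages
    Path("/proc/sys/vm/drop_caches").write_text("3")       # drop page cache
    start = time.monotonic()
    subprocess.run(WORKLOAD, check=True)
    print(f"{sched}: {time.monotonic() - start:.2f} s")
```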

                  Comment


                  • #19
                    Originally posted by Spam View Post
                    Yes, that would be interesting. HDDs still offer far more economical bulk storage.
                    The price/performance for SSDs certainly favours the smaller capacities. My home dir is ~3 TB on my workstation. Do I spend $900 on a 4 TB Samsung 960 Pro, or $65 on a factory-refurbished 4 TB HGST 7K6000 enterprise HDD? $900 vs. $65. Or in my case, since I'm using mdadm to RAID-1 mirror my home dir, it's actually $1800 vs. $130. For my money, HDDs are a far better value right now, at least at the 4 TB capacity point I need.

                    Comment


                    • #20
                      It would be nice to have a way to benchmark schedulers for real-world I/O tasks on NVMe instead of raw throughput...
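
                      In the meantime, wrapping fio in a loop over schedulers is easy enough (still synthetic, admittedly, but the same pattern works around any workload you care about). A sketch, with the device, test file and fio parameters as assumptions:

```python
#!/usr/bin/env python3
# Sketch: run the same fio job under each scheduler on an NVMe device and
# report read IOPS. Device name, test file and fio parameters are examples;
# run as root so the scheduler can be switched.
import json
import subprocess
from pathlib import Path

DEV = "nvme0n1"                              # example device
SCHEDULERS = ["none", "kyber", "mq-deadline", "bfq"]
FIO = [
    "fio", "--name=randread", "--filename=/mnt/test/fio.dat",
    "--rw=randread", "--bs=4k", "--iodepth=32", "--size=2G",
    "--runtime=30", "--time_based", "--direct=1",
    "--ioengine=libaio", "--output-format=json",
]

def set_scheduler(name: str) -> None:
    Path(f"/sys/block/{DEV}/queue/scheduler").write_text(name)

for sched in SCHEDULERS:
    set_scheduler(sched)
    out = subprocess.run(FIO, check=True, capture_output=True, text=True)
    iops = json.loads(out.stdout)["jobs"][0]["read"]["iops"]
    print(f"{sched}: {iops:.0f} read IOPS")
```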

                      Comment
