
Linux 3.16: Deadline I/O Scheduler Generally Leads With A SSD


  • Linux 3.16: Deadline I/O Scheduler Generally Leads With A SSD

    Phoronix: Linux 3.16: Deadline I/O Scheduler Generally Leads With A SSD

    There have been numerous requests lately for more disk I/O scheduler benchmarks on Phoronix of the Linux kernel and its various scheduler options. Given that there's routinely just speculation and miscommunication by individuals over the best scheduler for HDDs/SSDs, here are some fresh benchmarks for reference using the Linux 3.16 kernel.

    http://www.phoronix.com/vr.php?view=20638

  • #2
    Originally posted by phoronix View Post
    Phoronix: Linux 3.16: Deadline I/O Scheduler Generally Leads With A SSD

    There have been numerous requests lately for more disk I/O scheduler benchmarks on Phoronix of the Linux kernel and its various scheduler options. Given that there's routinely just speculation and miscommunication by individuals over the best scheduler for HDDs/SSDs, here are some fresh benchmarks for reference using the Linux 3.16 kernel.

    http://www.phoronix.com/vr.php?view=20638
    This is interesting; I seem to recall Noop being the fastest on SSDs? I might need to change my udev rule now.

    Comment


    • #3
      I don't understand how you can conclude that it's the SSD that makes Deadline faster, since you didn't test any HDD. Maybe it's the same story with an HDD and the difference between the schedulers lies elsewhere.

      Comment


      • #4
        Is Deadline the default I/O scheduler in Linux?

        Comment


        • #5
          Originally posted by peppercats View Post
          Is Deadline the default I/O scheduler in Linux?
          No, it's CFQ. He has done benchmarks before that show CFQ is the fastest in most scenarios on HDDs.

          Comment
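
          For anyone who wants to check for themselves: the kernel exposes the available schedulers for each block device in sysfs, with the active one shown in brackets (e.g. "noop [deadline] cfq"). A minimal sketch — the device name sda and the helper `active_sched` are assumptions for illustration:

          ```shell
          # active_sched: extract the currently active scheduler (the
          # bracketed entry) from a /sys/block/<dev>/queue/scheduler line.
          active_sched() {
              sed -n 's/.*\[\(.*\)\].*/\1/p'
          }

          # On a real system (device name sda is an assumption):
          #   cat /sys/block/sda/queue/scheduler | active_sched
          # Switch at runtime (not persistent across reboots):
          #   echo deadline | sudo tee /sys/block/sda/queue/scheduler

          # Demo against a sample sysfs line:
          echo "noop [deadline] cfq" | active_sched   # prints: deadline
          ```

          A runtime switch via `tee` is handy for quick A/B testing before committing to a udev rule.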


          • #6
            Originally posted by peppercats View Post
            Is Deadline the default I/O scheduler in Linux?
            On Ubuntu I think it is; the other distros and the kernel itself default to CFQ, as far as I know.

            Comment


            • #7
              Originally posted by xeekei View Post
              No it's CFQ. He has done benchmarks before that shows that CFQ is the fastest in most scenarios on HDDs.
              So if I only use an SSD, I should switch to Deadline?

              Comment


              • #8
                Originally posted by peppercats View Post
                So if I only use an SSD, I should switch to Deadline?
                It seems so. It's possible to write a udev rule that detects whether a device is a rotational medium or solid state and sets the scheduler accordingly. I have Noop for SSDs and CFQ for HDDs. I think it will choose Noop for USB thumb drives and the like too.

                Comment
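
                The rule described above (Noop for solid-state, CFQ for rotational) can be sketched as a udev rules file. The file name below is hypothetical, and this assumes devices named sd*:

                ```
                # /etc/udev/rules.d/60-schedulers.rules (hypothetical name)
                # Non-rotational devices (SSDs, USB flash): use noop
                ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"
                # Rotational devices (HDDs): use cfq
                ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="cfq"
                ```

                The `queue/rotational` attribute is how the kernel itself flags SSDs, so the rule matches whatever the kernel detected rather than hard-coding device names.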


                • #9
                  I have an Eee PC notebook with an extremely slow SSD. What I've noted over the years is that I don't really care how fast the disk is, but whether or not the desktop remains responsive when a background process is pounding the disk. Typically: can I continue web browsing while synaptic/apt is updating packages?

                  In the above scenario I've noted that changing scheduling and priority for synaptic and its children had a large effect. Also note that on the same machine under Windows XP, starting Firefox/Thunderbird freezes the machine, even the mouse pointer, for 10-30 sec. (due to a long stall caused by the Windows fsync equivalent), while doing the same on Linux is workable.

                  I wonder if such a scenario is sufficiently covered by the benchmarks.

                  Comment


                  • #10
                    CFQ is supposedly already tweaked for SSDs.

                    From https://www.kernel.org/doc/Documenta...fq-iosched.txt

                    CFQ has some optimizations for SSDs and if it detects a non-rotational
                    media which can support higher queue depth (multiple requests in
                    flight at a time), then it cuts down on idling of individual queues and
                    all the queues move to sync-noidle tree and only tree idle remains. This
                    tree idling provides isolation with buffered write queues on async tree.
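
                    The non-rotational detection CFQ relies on is visible in sysfs: `queue/rotational` is 0 for SSDs/flash and 1 for rotating disks. A small sketch to list what the kernel detected (the sd* naming is an assumption; the loop guard keeps it safe on systems with no such devices):

                    ```shell
                    # Print the rotational flag for every sd* block device:
                    # 0 = non-rotational (SSD/flash), 1 = rotating disk.
                    for dev in /sys/block/sd*/queue/rotational; do
                        [ -e "$dev" ] || continue   # skip if glob matched nothing
                        printf '%s: %s\n' "$dev" "$(cat "$dev")"
                    done
                    ```

                    If this prints 1 for an SSD (it can happen behind some RAID controllers), CFQ's SSD path won't kick in, and a udev rule keyed on this attribute won't match either.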

                    Comment


                    • #11
                      Originally posted by xeekei View Post
                      It seems so. It's possible to write a udev rule that detects whether a device is a rotational medium or solid state and sets the scheduler accordingly. I have Noop for SSDs and CFQ for HDDs. I think it will choose Noop for USB thumb drives and the like too.
                      Code:
                      [root@hydragiros ~]# cat /etc/udev/rules.d/60-io_schedulers.rules 
                      # Set deadline scheduler for non-rotating disks
                      ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"
                      ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/iosched/fifo_batch}="1"
                      # Set cfq scheduler for rotating disks
                      ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="cfq"
                      That's better. Pay attention to the line after the "deadline" setting; that line improves performance even more.

                      Comment


                      • #12
                        These benchmarks measure throughput, but not responsiveness under load. Someone only interested in throughput will find the benchmark results helpful. If responsiveness is required, more research would be needed — from their implementations, I'd expect CFQ to be better at that, but I have no data to back it up.

                        Comment


                        • #13
                          Originally posted by rohcQaH View Post
                          These benchmarks measure throughput, but not responsiveness under load. Someone only interested in throughput will find the benchmark results helpful. If responsiveness is required, more research would be needed — from their implementations, I'd expect CFQ to be better at that, but I have no data to back it up.
                          No scientific data to back my claim, but with three good ol' 5-year-old SATA HDDs, I've tried CFQ, BFQ (with and without BFS), Noop and Deadline; Deadline is by far the most responsive under load.

                          Comment


                          • #14
                            benchmark

                            What about some other benchmarks, like endurance-testing an SSD or HDD to destruction or something?

                            Comment


                            • #15
                              Originally posted by xeekei View Post
                              This is interesting; I seem to recall Noop being the fastest on SSDs? I might need to change my udev rule now.
                              Deadline always was the better I/O scheduler, at least for SSDs.

                              Comment
