Linux 3.16: Deadline I/O Scheduler Generally Leads With A SSD


  • Linux 3.16: Deadline I/O Scheduler Generally Leads With A SSD

    Phoronix: Linux 3.16: Deadline I/O Scheduler Generally Leads With A SSD

    There have been numerous requests lately for more disk I/O scheduler benchmarks on Phoronix covering the Linux kernel's various scheduler options. Given that there is routinely just speculation and miscommunication over the best scheduler for HDDs/SSDs, here are some fresh benchmarks for reference using the Linux 3.16 kernel.

    http://www.phoronix.com/vr.php?view=20638

  • #2
    Originally posted by phoronix View Post
    Phoronix: Linux 3.16: Deadline I/O Scheduler Generally Leads With A SSD

    There have been numerous requests lately for more disk I/O scheduler benchmarks on Phoronix covering the Linux kernel's various scheduler options. Given that there is routinely just speculation and miscommunication over the best scheduler for HDDs/SSDs, here are some fresh benchmarks for reference using the Linux 3.16 kernel.

    http://www.phoronix.com/vr.php?view=20638
    This is interesting; I seem to recall Noop being the fastest on SSDs. I might need to change my udev rule now.



    • #3
      I don't understand how you can conclude that it's the SSD that makes deadline faster, since you didn't test any HDD. Maybe it's the same story with an HDD and the difference between the schedulers lies elsewhere.



      • #4
        Is Deadline the default I/O scheduler in Linux?



        • #5
          Originally posted by peppercats View Post
          Is Deadline the default I/O scheduler in Linux?
          No, it's CFQ. He has done benchmarks before that show that CFQ is the fastest in most scenarios on HDDs.



          • #6
            Originally posted by peppercats View Post
            Is Deadline the default I/O scheduler in Linux?
            On Ubuntu I think it is. The other distros and the mainline kernel default to CFQ, as far as I know.
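
            For anyone wanting to check what their own system actually defaults to, the active scheduler for each block device can be read from sysfs; the bracketed entry is the one in use (sda here is just an example device name):

            ```shell
            # Print the schedulers available for sda; the bracketed one is active,
            # e.g. "noop deadline [cfq]" on a stock CFQ kernel.
            cat /sys/block/sda/queue/scheduler
            ```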



            • #7
              Originally posted by xeekei View Post
              No it's CFQ. He has done benchmarks before that shows that CFQ is the fastest in most scenarios on HDDs.
              So if I only use an SSD, I should switch to deadline?



              • #8
                Originally posted by peppercats View Post
                so if I only use an SSD I should switch to deadline?
                It seems so. It's possible to write a udev rule that detects whether a device is a rotational medium or solid state and sets the scheduler accordingly. I have Noop for SSDs and CFQ for HDDs. I think it will choose Noop for USB thumb sticks and the like too.
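
                A minimal sketch of such a rule, assuming a 3.16-era single-queue kernel (the file name is arbitrary, and deadline/cfq are just the choices discussed in this thread):

                ```
                # /etc/udev/rules.d/60-iosched.rules (illustrative name)
                # Non-rotational devices (SSDs): use deadline
                ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"
                # Rotational devices (HDDs): keep cfq
                ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="cfq"
                ```

                After dropping the file in place, `udevadm trigger` (or a reboot) applies it; swapping "deadline" for "noop" in the first rule gives the Noop-on-SSD setup described above.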



                • #9
                  I have an Eee PC netbook with an extremely slow SSD. What I've noticed over the years is that I don't really care how fast the disk is, but whether or not the desktop remains responsive while a background process is pounding the disk. Typically: can I continue web browsing while synaptic/apt is updating packages?

                  In the above scenario I've noticed that changing the scheduling and priority for synaptic and its children had a large effect. Also note that on the same machine with Windows XP, starting Firefox/Thunderbird freezes the machine, even the mouse pointer, for 10-30 seconds (due to a long stall caused by the Windows fsync equivalent), while doing the same on Linux remains workable.

                  I wonder whether such a scenario is sufficiently covered by the benchmarks.



                  • #10
                    CFQ is supposedly already tweaked for SSDs.

                    From https://www.kernel.org/doc/Documenta...fq-iosched.txt

                    CFQ has some optimizations for SSDs and if it detects a non-rotational
                    media which can support higher queue depth (multiple requests in
                    flight at a time), then it cuts down on idling of individual queues and
                    all the queues move to sync-noidle tree and only tree idle remains. This
                    tree idling provides isolation with buffered write queues on async tree.
