Linux 3.16: Deadline I/O Scheduler Generally Leads With A SSD


  • #11
    Originally posted by xeekei View Post
    It seems so. It's possible to write a udev rule that detects whether it's a rotational medium or solid state, and sets the scheduler accordingly. I have Noop for SSDs and CFQ for HDDs. I think it will choose Noop for USB thumbsticks and the like too.
    Code:
    [root@hydragiros ~]# cat /etc/udev/rules.d/60-io_schedulers.rules 
    # Set deadline scheduler for non-rotating disks
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/iosched/fifo_batch}="1"
    # Set cfq scheduler for rotating disks
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="cfq"
    That's better. Pay attention to the line after the "deadline" setting; it improves performance even more.
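    To check that rules like these actually took effect, one way (just a minimal sketch; sda is an example device and the commands assume root) is to reload udev, re-trigger the block devices, and read back the active scheduler:
    Code:
    # reload the udev rules and re-trigger block device events
    udevadm control --reload
    udevadm trigger --subsystem-match=block
    # the scheduler shown in square brackets is the active one
    cat /sys/block/sda/queue/scheduler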

    Comment


    • #12
      These benchmarks measure throughput, but not responsiveness under load. Someone only interested in throughput will find the benchmark results helpful. If responsiveness is required, more research would be needed - from their implementations, I'd expect CFQ to be better at that, but I have no data to back it up.

      Comment


      • #13
        Originally posted by rohcQaH View Post
        These benchmarks measure throughput, but not responsiveness under load. Someone only interested in throughput will find the benchmark results helpful. If responsiveness is required, more research would be needed - from their implementations, I'd expect CFQ to be better at that, but I have no data to back it up.
        No scientific data to back my claim, but with three good ol' five-year-old SATA HDDs I've tried cfq, bfq (with and without BFS), and noop; deadline is by far the most responsive under load.
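        For anyone who wants to repeat that kind of comparison, the scheduler can be switched at runtime without a reboot (a sketch; /dev/sda is just an example device, and the change only lasts until the next boot):
        Code:
        # list the available schedulers; the active one is in brackets
        cat /sys/block/sda/queue/scheduler
        # switch to deadline for this session (run as root)
        echo deadline > /sys/block/sda/queue/scheduler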

        Comment


        • #14
          benchmark

          What about some other benchmark, like one that destroys the SSD or HDD or something?

          Comment


          • #15
            Originally posted by xeekei View Post
            This is interesting; I seem to recall Noop being the fastest on SSDs? I might need to change my udev rule now.
            Deadline always was the better I/O scheduler, at least for SSDs.

            Comment


            • #16
              Can you also test BFQ in the future? After all, work is being done right now to integrate it into the kernel, and it's also supposed to replace CFQ when that happens.

              Comment


              • #17
                Would adding the discard option change the results? I think that's how most people mount their SSDs under Linux.

                Besides, it looks like deadline only wins in synthetic benchmarks.
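                For reference, discard is a per-filesystem mount option, e.g. in /etc/fstab (a sketch; the device, mount point, and filesystem here are only assumptions for illustration):
                Code:
                # enable online TRIM (discard) for an ext4 filesystem on an SSD
                /dev/sda1   /   ext4   defaults,discard   0   1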

                Comment
