
Thread: Linux 3.16: Deadline I/O Scheduler Generally Leads With A SSD

  1. #11
    Join Date
    Feb 2008
    Location
    Santiago, Chile
    Posts
    253

    Default

    Quote Originally Posted by xeekei View Post
    It seems so. It's possible to write a udev rule that detects whether it's a rotational medium or a solid-state one and sets the scheduler accordingly. I have Noop for SSDs and CFQ for HDDs. I think it will choose Noop for USB thumbsticks and the like too.
    Code:
    [root@hydragiros ~]# cat /etc/udev/rules.d/60-io_schedulers.rules 
    # Set deadline scheduler for non-rotating disks
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/iosched/fifo_batch}="1"
    # Set cfq scheduler for rotating disks
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="cfq"
    That's better. Pay attention to the line coming after the "deadline" setting; that line improves performance even more.
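    To check that the rule actually took effect, something like the following should work (a sketch only; it assumes the SSD shows up as sda and that the rules file is already in place):
    Code:
    # Reload udev rules and re-trigger "change" events for block devices
    udevadm control --reload-rules
    udevadm trigger --subsystem-match=block --action=change
    # The active scheduler is shown in square brackets
    cat /sys/block/sda/queue/scheduler
    cat /sys/block/sda/queue/iosched/fifo_batch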

  2. #12
    Join Date
    Nov 2008
    Posts
    776

    Default

    These benchmarks measure throughput, but not responsiveness under load. Someone only interested in throughput will find the benchmark results helpful. If responsiveness is required, more research would be needed - from their implementations, I'd expect CFQ to be better at that, but I have no data to back it up.
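    A rough way to probe responsiveness rather than throughput would be an fio run along these lines (a sketch only; the job names and parameters are made up): one job generates heavy sequential write load while a second issues single-depth 4k random reads, and the reader's completion-latency percentiles stand in for "responsiveness".
    Code:
    # Global options before the first --name apply to both jobs
    fio --direct=1 --ioengine=libaio --time_based --runtime=60 \
        --name=bg_writer --rw=write --bs=1M --size=4G --iodepth=32 \
        --name=lat_reader --rw=randread --bs=4k --size=1G --iodepth=1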

  3. #13
    Join Date
    Jun 2014
    Posts
    15

    Default

    Quote Originally Posted by rohcQaH View Post
    These benchmarks measure throughput, but not responsiveness under load. Someone only interested in throughput will find the benchmark results helpful. If responsiveness is required, more research would be needed - from their implementations, I'd expect CFQ to be better at that, but I have no data to back it up.
    No scientific data to back my claim, but on 3 good ol' 5-year-old SATA HDDs I've tried cfq, bfq (with and without BFS) and noop; deadline is by far the most responsive under load.

  4. #14
    Join Date
    Jun 2014
    Posts
    1

    Default benchmark

    What about some other benchmark, like destroying an SSD or HDD or something?

  5. #15
    Join Date
    Oct 2012
    Posts
    293

    Default

    Quote Originally Posted by xeekei View Post
    This is interesting. I seem to recall Noop being the fastest on SSDs? I might need to change my udev rule now.
    Deadline always was the better I/O scheduler, at least for SSDs.
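    For anyone who wants to compare schedulers on a running system before committing to a udev rule, the sysfs interface lets you switch on the fly (assuming the SSD is sda; the change is not persistent across reboots):
    Code:
    # List available schedulers; the active one is shown in square brackets
    cat /sys/block/sda/queue/scheduler
    # Switch to deadline for this boot only (run as root)
    echo deadline > /sys/block/sda/queue/scheduler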

  6. #16
    Join Date
    Apr 2014
    Posts
    8

    Default

    Can you also test BFQ in the future? After all, work is being done right now to integrate it into the kernel, and it's also supposed to replace CFQ when that happens.

  7. #17
    Join Date
    Jan 2011
    Posts
    394

    Default

    Would adding the discard mount option change the results? I think that's how most people mount their SSDs under Linux…

    Besides, it looks like deadline only wins in synthetic benchmarks.
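    For reference, enabling discard usually means either the mount option or a periodic trim; a rough sketch (the UUID and mount point are placeholders):
    Code:
    # /etc/fstab entry with online discard enabled
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,discard  0  1

    # Alternative: leave discard off and trim the filesystem periodically instead
    fstrim -v /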
