Linux 4.17 I/O Scheduler Tests On An NVMe SSD Yield Surprising Results

  • tux9656
    replied
    I use the deadline scheduler for my SSDs and adjust the tunables to make it more SSD-friendly.

    echo deadline > /sys/block/sdb/queue/scheduler         # select the deadline scheduler for this disk
    echo 50 > /sys/block/sdb/queue/iosched/read_expire     # read deadline: 50 ms (default 500)
    echo 500 > /sys/block/sdb/queue/iosched/write_expire   # write deadline: 500 ms (default 5000)
    echo 1 > /sys/block/sdb/queue/iosched/fifo_batch       # dispatch one request per batch for lower latency (default 16)
    echo 8 > /sys/block/sdb/queue/iosched/writes_starved   # let reads starve writes for up to 8 batches (default 2)
    echo 0 > /sys/block/sdb/queue/iosched/front_merges     # disable front merges; they rarely matter on SSDs
    echo 512 > /sys/block/sdb/queue/read_ahead_kb          # 512 KB readahead
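
    To make these settings survive a reboot, a udev rule along the following lines should do it (the file name and the sdb match are just from my setup, adjust as needed):

    Code:
    # /etc/udev/rules.d/60-iosched-ssd.rules (example path)
    ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sdb", ATTR{queue/scheduler}="deadline", ATTR{queue/iosched/read_expire}="50", ATTR{queue/iosched/write_expire}="500", ATTR{queue/iosched/fifo_batch}="1", ATTR{queue/iosched/writes_starved}="8", ATTR{queue/iosched/front_merges}="0", ATTR{queue/read_ahead_kb}="512"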

  • paolo
    replied
    Originally posted by ermo View Post
    paolo

    How difficult is it to quantify perceived latency/smoothness in benchmarking terms?

    Would it make sense to create a set of artificial constraints on the benchmarks, akin to what is done with certain network modules that insert latency and/or bandwidth shaping between endpoints, in order to make it clearer where I/O scheduling algorithms exhibit pathological/undesirable behaviour, or to amplify the cases in which they do?
    Very interesting point. So far I have not analyzed this point, or its psychological implications, in depth. I have addressed these issues indirectly, with the following approach: in the full version of the start-up-time test (in the S benchmark suite), results for a given application are compared with the lowest possible start-up time for that application, namely its start-up time when there is no additional I/O in the background. And bfq starts applications in about this ideal minimum time, even while additional I/O is being served, and regardless of how much of it there is. So the message is simply: "the system is always as responsive as it could ever be".
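
    (If someone wants to get a rough feel for this by hand, outside the suite, the idea boils down to the sketch below; the application, the background workload and the sizes are only placeholders, and S itself is of course much more careful about caches and repetitions.)

    Code:
    # as root; xterm is just a stand-in for whatever application you care about
    sync; echo 3 > /proc/sys/vm/drop_caches                # cold cache
    /usr/bin/time -f "%e s" xterm -e true                  # baseline start-up time, idle disk

    while true; do dd if=/dev/zero of=bigfile bs=1M count=4096 conv=fdatasync status=none; done &
    WRITER=$!
    sync; echo 3 > /proc/sys/vm/drop_caches                # cold cache again
    /usr/bin/time -f "%e s" xterm -e true                  # start-up time under background I/O
    kill $WRITER; rm -f bigfile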

  • timofonic
    replied
    Originally posted by ermo View Post
    paolo

    How difficult is it to quantify perceived latency/smoothness in benchmarking terms?

    Would it make sense to create a set of artificial constraints on the benchmarks, akin to what is done with certain network modules that insert latency and/or bandwidth shaping between endpoints, in order to make it clearer where I/O scheduling algorithms exhibit pathological/undesirable behaviour, or to amplify the cases in which they do?
    I'm not an expert at all, but this sounds interesting

  • ermo
    replied
    paolo

    How difficult is it to quantify perceived latency/smoothness in benchmarking terms?

    Would it make sense to create a set of artificial constraints on the benchmarks, akin to what is done with certain network modules that insert latency and/or bandwidth shaping between endpoints, in order to make it clearer where I/O scheduling algorithms exhibit pathological/undesirable behaviour, or to amplify the cases in which they do? A sketch of what I have in mind follows below.
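
    (To make the analogy concrete: on the block layer, one way to impose such an artificial constraint might be the device-mapper delay target, roughly the counterpart of netem; /dev/sdX and the 50 ms figure are purely illustrative.)

    Code:
    # wrap /dev/sdX in a dm-delay device that adds ~50 ms to every I/O,
    # then point the I/O scheduler benchmarks at /dev/mapper/slowdisk
    SECTORS=$(blockdev --getsz /dev/sdX)
    echo "0 $SECTORS delay /dev/sdX 0 50" | dmsetup create slowdisk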

  • paolo
    replied
    Originally posted by andreano View Post
    I also wonder how this translates to less-than-ideal storage devices like SD cards. In my experience, SD cards are much worse than even hard disks when it comes to writing anything. I had to put Firefox on tmpfs to keep it from locking up my system all the time.
    The latest results on flash storage are here: https://schd.ws/hosted_files/elciotn...at-Valente.pdf

  • paolo
    replied
    Originally posted by Zan Lynx View Post

    And if you (enihcam) want to make permanent changes you can use udev rules. Here's what a server of mine has:

    Code:
    # cat /etc/udev/rules.d/iosched.rules
    ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd?", ATTR{queue/scheduler}="bfq"
    ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="nvme?n?", ATTR{queue/scheduler}="kyber"
    You would want ATTR{queue/iosched/low_latency}="1", obviously. And put it after setting the scheduler to "bfq".
    BTW, low_latency is currently set by default.

  • andreano
    replied
    I also wonder how this translates to less-than-ideal storage devices like SD cards. In my experience, SD cards are much worse than even hard disks when it comes to writing anything. I had to put Firefox on tmpfs to keep it from locking up my system all the time.
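
    (For anyone who wants to copy the trick, a minimal variant is a tmpfs mounted over the Firefox cache directory; the size, uid and path below are just placeholders for your own.)

    Code:
    # keep heavy cache writes off the SD card; the profile itself stays on disk
    mount -t tmpfs -o size=256M,mode=0700,uid=1000,gid=1000 tmpfs /home/user/.cache/mozilla
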
    Last edited by andreano; 30 May 2018, 03:18 PM.

  • AsuMagic
    replied
    Now I'm curious how bfq with low_latency behaves on a very slow storage device (e.g. USB flash drives, which have very poor random-access performance) rather than an oversized SSD, in terms of app start-up performance. I've been dealing with Linux setups on USB flash drives, and under I/O pressure they became very unresponsive.

  • Zan Lynx
    replied
    Originally posted by Michael View Post

    /sys/block/DEVICE/queue/iosched/low_latency
    And if you (enihcam) want to make permanent changes you can use udev rules. Here's what a server of mine has:

    Code:
    # cat /etc/udev/rules.d/iosched.rules
    ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd?", ATTR{queue/scheduler}="bfq"
    ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="nvme?n?", ATTR{queue/scheduler}="kyber"
    You would want ATTR{queue/iosched/low_latency}="1", obviously. And put it after setting the scheduler to "bfq".
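
    I.e. the sd rule would then end up looking something like this (untested in exactly this form):

    Code:
    ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd?", ATTR{queue/scheduler}="bfq", ATTR{queue/iosched/low_latency}="1"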

  • Michael
    replied
    Originally posted by enihcam View Post
    how to enable the low_latency property of BFQ? module parameter? sysfs? sysctl? where?
    /sys/block/DEVICE/queue/iosched/low_latency
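
    E.g., for an NVMe drive with bfq active on it (device name just an example):

    Code:
    cat /sys/block/nvme0n1/queue/iosched/low_latency       # 1 = enabled, 0 = disabled
    echo 1 > /sys/block/nvme0n1/queue/iosched/low_latency  # as root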
