I use the deadline scheduler for my SSDs and adjust the tunables to make it more SSD-friendly:
echo deadline > /sys/block/sdb/queue/scheduler
echo 50 > /sys/block/sdb/queue/iosched/read_expire
echo 500 > /sys/block/sdb/queue/iosched/write_expire
echo 1 > /sys/block/sdb/queue/iosched/fifo_batch
echo 8 > /sys/block/sdb/queue/iosched/writes_starved
echo 0 > /sys/block/sdb/queue/iosched/front_merges
echo 512 > /sys/block/sdb/queue/read_ahead_kb
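If you want those settings to survive a reboot, one option is a small boot-time script. This is only a sketch under a couple of assumptions: it targets /dev/sd* devices, uses the sysfs "rotational" flag to pick out SSDs, and assumes the scheduler is exposed as "deadline" (on blk-mq kernels the name is "mq-deadline", though the tunables are the same).
Code:
#!/bin/sh
# Sketch: apply the tunables from the post above to every non-rotational
# /dev/sd* disk. Run as root, e.g. from a boot-time unit or rc.local.
for disk in /sys/block/sd*; do
    [ "$(cat "$disk/queue/rotational")" = "0" ] || continue
    echo deadline > "$disk/queue/scheduler"
    echo 50 > "$disk/queue/iosched/read_expire"
    echo 500 > "$disk/queue/iosched/write_expire"
    echo 1 > "$disk/queue/iosched/fifo_batch"
    echo 8 > "$disk/queue/iosched/writes_starved"
    echo 0 > "$disk/queue/iosched/front_merges"
    echo 512 > "$disk/queue/read_ahead_kb"
done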
Linux 4.17 I/O Scheduler Tests On An NVMe SSD Yield Surprising Results
-
paolo
How difficult is it to quantify perceived latency/smoothness in benchmarking terms?
Would it make sense to create a set of artificial constraints on the benchmarks, akin to what is done with certain network modules that insert latency and/or bandwidth shaping between endpoints, in order to make it clearer where, or to amplify the cases in which, I/O scheduling algorithms exhibit pathological/undesirable behaviour?
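One way to do something like that for block devices (purely a sketch, not something from the article; /dev/sdX and the 200 ms figure are placeholders) would be the device-mapper "delay" target, which plays roughly the role netem plays for network interfaces:
Code:
#!/bin/sh
# Sketch: wrap a real device in a dm-delay mapping so every request picks up
# an artificial latency, then point the benchmark at the mapped device.
modprobe dm-delay
SECTORS=$(blockdev --getsz /dev/sdX)
dmsetup create slowdisk --table "0 ${SECTORS} delay /dev/sdX 0 200"
# Run the I/O scheduler benchmark against /dev/mapper/slowdisk ...
# ... and tear the mapping down afterwards:
dmsetup remove slowdisk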
-
I also wonder how this translates to less ideal storage devices like SD cards. In my experience, SD cards are much worse than even hard disks when it comes to writing anything. I had to put Firefox on tmpfs for it to not lock up my system all the time.
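For anyone in the same situation, here is a rough sketch of the tmpfs trick (the mount point and the 256M size are my assumptions, not details from the post):
Code:
# Sketch: keep Firefox's disk cache on tmpfs so it never touches the SD card.
mkdir -p ~/.cache/mozilla
sudo mount -t tmpfs -o size=256M,uid=$(id -u),gid=$(id -g),mode=0700 tmpfs ~/.cache/mozilla
# Or make it permanent with an /etc/fstab entry along these lines:
# tmpfs  /home/USER/.cache/mozilla  tmpfs  size=256M,uid=1000,gid=1000,mode=0700  0  0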
-
Now I'm curious how BFQ with low_latency behaves on a very slow storage device (e.g. USB flash drives, which have very poor random access performance) rather than an oversized SSD used for app startup performance tests. I've been dealing with Linux setups on USB flash drives, and under I/O pressure they became very unresponsive.
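If someone wants to try it on such a device, a minimal sketch (assuming the stick shows up as /dev/sdc and the kernel was built with BFQ):
Code:
# Sketch: switch a USB stick to BFQ and enable its low_latency heuristic.
# /dev/sdc is a placeholder; check lsblk for the actual device name.
echo bfq > /sys/block/sdc/queue/scheduler
echo 1 > /sys/block/sdc/queue/iosched/low_latency
cat /sys/block/sdc/queue/scheduler    # the active scheduler is shown in brackets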
-
Originally posted by Michael:
/sys/block/DEVICE/queue/iosched/low_latency
And if you (enihcam) want to make permanent changes you can use udev rules. Here's what a server of mine has:
Code:
# cat /etc/udev/rules.d/iosched.rules
ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd?", ATTR{queue/scheduler}="bfq"
ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="nvme?n?", ATTR{queue/scheduler}="kyber"
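To pick up rules like these without rebooting, the usual udev workflow (not mentioned in the post, so take it as a sketch) is:
Code:
# Sketch: reload udev rules and re-run them for existing block devices.
sudo udevadm control --reload
sudo udevadm trigger --subsystem-match=block --action=change
# Check the result, e.g. for an NVMe drive:
cat /sys/block/nvme0n1/queue/scheduler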