Linux 4.17 I/O Scheduler Tests On An NVMe SSD Yield Surprising Results

Written by Michael Larabel in Storage on 30 May 2018 at 10:46 AM EDT.

With the Linux 4.17 kernel soon to be released, I've been running some fresh file-system and I/O scheduler tests -- among other benchmarks -- of this late-stage kernel code. For your viewing pleasure today are tests of a high-performance Intel Optane 900p NVMe SSD with the different I/O scheduler options available in Linux 4.17.

With this high-performance solid-state drive, the schedulers tested were none, mq-deadline, BFQ, BFQ with low_latency enabled, and Kyber.

You may be asking why these I/O scheduler tests were done with a speedy SSD when most Linux distributions default to using "none" for NVMe-based storage. Well, it was out of curiosity and to see the impact, since some Linux distributions, such as Intel's performance-oriented Clear Linux, prefer the newer Facebook-backed Kyber scheduler over "none" for NVMe storage.
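
For those wanting to check which scheduler a given drive is using, or to replicate these configurations, the blk-mq scheduler is exposed through sysfs. Below is a minimal Python sketch of that interface; the device name nvme0n1 is an assumption for illustration, and writing to these files requires root.

    #!/usr/bin/env python3
    # Minimal sketch: inspect and switch the blk-mq I/O scheduler via sysfs.
    # The device name "nvme0n1" is an assumption; adjust for your system.
    # Writing these files requires root privileges.
    from pathlib import Path

    DEV = "nvme0n1"  # hypothetical device name
    QUEUE = Path(f"/sys/block/{DEV}/queue")

    def available_schedulers() -> list[str]:
        # Reads like "[none] mq-deadline kyber bfq"; the bracketed
        # entry is the currently active scheduler.
        return (QUEUE / "scheduler").read_text().split()

    def set_scheduler(name: str) -> None:
        # Writing a scheduler name into the file activates it (root only).
        (QUEUE / "scheduler").write_text(name)

    def enable_bfq_low_latency() -> None:
        # BFQ exposes its tunables under queue/iosched/ once it is the
        # active scheduler; low_latency is one such knob.
        (QUEUE / "iosched" / "low_latency").write_text("1")

    if __name__ == "__main__":
        print("Schedulers:", available_schedulers())

Writing one of the listed names into the scheduler file switches it on the fly, which is one way of selecting each of these configurations between benchmark runs, with BFQ's low_latency tunable appearing under queue/iosched/ once BFQ is active.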

So this article today is looking at I/O scheduler performance on Linux 4.17 when using an AMD EPYC 7601 server with the Intel Optane 900p 280GB SSD. An additional article in the days ahead will look at these scheduler results on Linux 4.17 when testing with a conventional SATA 3.0 SSD and a consumer-grade HDD, additional data points where the I/O scheduler can matter a great deal.

These Linux I/O benchmarks were carried out in a fully-automated and reproducible manner using the open-source Phoronix Test Suite benchmarking software.

