Linux 4.17 I/O Scheduler Tests On An NVMe SSD Yield Surprising Results


  • Michael
    replied
    Originally posted by Shnatsel View Post
    What happened to CFQ? Last time I checked that was the default for HDDs and one of the most common schedulers out there. It's very strange to see it omitted from the comparison.
    It was just an oversight; I forgot to add it to the test queue. It will be in the upcoming SSD/HDD tests.



  • Michael
    replied
    Originally posted by pegasus View Post
    And the queue depths were?
    Defaults



  • Zan Lynx
    replied
    Originally posted by Shnatsel View Post
    What happened to CFQ? Last time I checked that was the default for HDDs and one of the most common schedulers out there. It's very strange to see it omitted from the comparison.
    CFQ is essentially irrelevant to NVMe drives. CFQ isn't one of the multi-queue schedulers, and NVMe is optimized to work best with multiple IO queues, generally one per CPU. That means a CPU does not have to send its IO to another CPU for queuing; it simply submits it straight to the NVMe device.

    BFQ is the multi-queue replacement for CFQ and is good for almost all devices. Kyber keeps NVMe responsive under heavy IO load by keeping the queue depth manageable, so high-priority IO doesn't have to wait too long. That matters because NVMe queues can, in theory (there are usually hardware limits), grow to 65535 entries under the "none" scheduler.
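    A minimal Python sketch of checking this from userspace, assuming the device is nvme0n1 (the device name is illustrative, not from the article): the active scheduler, the number of hardware dispatch queues, and the block-layer request limit are all exposed under /sys/block/<dev>/.

    from pathlib import Path

    DEV = "nvme0n1"  # assumption: adjust to your NVMe block device
    queue = Path("/sys/block") / DEV / "queue"

    # Active scheduler is shown in brackets, e.g. "[none] mq-deadline kyber bfq"
    print("schedulers:", (queue / "scheduler").read_text().strip())

    # blk-mq exposes one directory per hardware dispatch queue
    # (ideally one per CPU for NVMe, as described above)
    hw_queues = list((Path("/sys/block") / DEV / "mq").iterdir())
    print("hardware queues:", len(hw_queues))

    # Block-layer request limit per queue
    print("nr_requests:", (queue / "nr_requests").read_text().strip())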



  • darkbasic
    replied
    Originally posted by edwaleni View Post
    What was the block size used on the Optane? If you published it, I didn't see it (sorry).
    If you are interested, I did several benchmarks of Optane with different sector sizes: http://www.linuxsystems.it/2018/05/o...t4-benchmarks/



  • edwaleni
    replied
    What was the block size used on the Optane? If you published it, I didn't see it (sorry).



  • Shnatsel
    replied
    What happened to CFQ? Last time I checked that was the default for HDDs and one of the most common schedulers out there. It's very strange to see it omitted from the comparison.



  • enihcam
    replied
    How do you enable the low_latency property of BFQ? Module parameter? sysfs? sysctl? Where?
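    As far as I know it is a per-device sysfs attribute rather than a module parameter or sysctl: BFQ's tunables live under /sys/block/<dev>/queue/iosched/ and only appear while BFQ is the selected scheduler. A minimal Python sketch (assuming an nvme0n1 device; writing sysfs attributes needs root):

    from pathlib import Path

    DEV = "nvme0n1"  # assumption: adjust to your block device
    queue = Path("/sys/block") / DEV / "queue"

    # The iosched/ directory only exists for the currently selected scheduler,
    # so switch to BFQ first (requires root).
    (queue / "scheduler").write_text("bfq")

    low_latency = queue / "iosched" / "low_latency"
    print("low_latency was:", low_latency.read_text().strip())
    low_latency.write_text("1")  # 1 = enable BFQ's low-latency heuristics (the default)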



  • pegasus
    replied
    And the queue depths were?



  • shmerl
    replied
    Intel Optane is a really niche NVMe drive, though. Maybe tests with a recent Samsung 970 Evo or WD Black NVMe would be more relevant to most users.



  • Linux 4.17 I/O Scheduler Tests On An NVMe SSD Yield Surprising Results


    With the Linux 4.17 kernel soon to be released, I've been running some fresh file-system and I/O scheduler tests -- among other benchmarks -- of this late stage kernel code. For your viewing pleasure today are tests of a high performance Intel Optane 900p NVMe SSD with different I/O scheduler options available with Linux 4.17.
