Linux 4.19 I/O Scheduler SSD Benchmarks With Kyber, BFQ, Deadline, CFQ

  • #1
    Phoronix: Linux 4.19 I/O Scheduler SSD Benchmarks With Kyber, BFQ, Deadline, CFQ

    As it has been a while since last running some Linux I/O scheduler benchmarks, here are some fresh results while using the new Linux 4.19 stable kernel and tests carried out from a 500GB Samsung 860 EVO SATA 3.0 SSD within a 2P EPYC Dell PowerEdge R7425 Linux server...

    http://www.phoronix.com/scan.php?pag...-IO-Schedulers

  • #2
    Great. Did you experience filesystem corruption during the benchmarks? If so, which schedulers yielded corruption?

    Oh wait, you weren't using 4.19.x... I have a test request. This exact same benchmark but on an "unstable" kernel like 4.19.x...

    • #3
      I'd be interested in the logic behind schedulers designed for rotational media making an impact on solid-state devices.
      Even deadline is mostly geared towards rotational media with its C-SCAN ordering.
      By what logic can it be better than noop?
      The SSD's internal I/O scheduler being rubbish?
      The "unrandomization" of requests (by building request queues), thus incurring request latency?

      Sure, you can always trade latency or fairness for bulk sequential throughput (and vice versa), much like in any scheduling, but other than that?

      Edit: The only reason I can think of right now is that SATA is a stupid non-bidirectional protocol, i.e. you occupy the bus while doing bulk transfers, thereby increasing the importance of proper scheduling. NVMe doesn't have the same design, right?
      Last edited by milkylainen; 11-29-2018, 03:27 PM.
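      For anyone who wants to check this on their own hardware: the kernel exposes the scheduler choice per device through sysfs. A minimal sketch for inspecting and switching it, assuming a SATA device named sda (the device name and the demo file below are illustrative, not from the article):

      ```shell
      #!/bin/sh
      # The active scheduler is the bracketed entry in the device's sysfs
      # "scheduler" file; this helper parses that entry out.
      active_sched() {
          grep -o '\[[^]]*\]' "$1" | tr -d '[]'
      }

      # On a real system you would point it at:
      #   active_sched /sys/block/sda/queue/scheduler
      # Demo against a sample file in the exact format the kernel uses:
      printf '[mq-deadline] kyber bfq none\n' > /tmp/sched_demo
      active_sched /tmp/sched_demo    # prints: mq-deadline

      # Switching at runtime (needs root), e.g.:
      #   echo kyber > /sys/block/sda/queue/scheduler
      ```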

      • #4
        Originally posted by tildearrow:
        Great. Did you experience filesystem corruption during the benchmarks? If so, which schedulers yielded corruption?

        Oh wait, you weren't using 4.19.x... I have a test request. This exact same benchmark but on an "unstable" kernel like 4.19.x...
        What? The 4.19.x series is "stable". Do you mean the 4.20 series release candidates?

        • #5
          Originally posted by bosjc:

          What? The 4.19.x series is "stable". Do you mean the 4.20 series release candidates ?
          I call them "unstable" because of the EXT4 corruption issues, and because some people were reporting AMD troubles with 4.19 (yes, I know they are officially called "stable", but it doesn't feel like it).
          Last edited by tildearrow; 11-29-2018, 06:34 PM.

          • #6
            Originally posted by tildearrow:
            Great. Did you experience filesystem corruption during the benchmarks? If so, which schedulers yielded corruption?

            Oh wait, you weren't using 4.19.x... I have a test request. This exact same benchmark but on an "unstable" kernel like 4.19.x...
            This was on 4.19, but I didn't encounter corruption on that box.
            Michael Larabel
            http://www.michaellarabel.com/

            • #7
              Wow, deadline is killing it. I've been using mq-none on my 860 EVO RAID0 for a couple of months and my system has been flying. I'm gonna revisit deadline and see how it translates on a daily basis. Thanks for these benchmarks, Michael.
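              If mq-deadline holds up day to day, it can be made the default without echoing into sysfs on every boot; a sketch of a udev rule (the file name and match pattern are my own example, not from the article):

              ```
              # /etc/udev/rules.d/60-iosched.rules  (example path)
              # Pick mq-deadline for non-rotational sd* devices (SATA SSDs):
              ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"
              ```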

              As for the kernel, it's also unfortunate because 4.18.20 has now reached EOL, and 4.19.x is buggy in many ways, especially with amdgpu. So I feel there's no "go-to" kernel at the moment, with 4.20 being in RC and still too early to adopt full-time.

              I hate to do this to GKH but I keep being reminded of his bullish comments when the kernel came out.

              "...things settled down on the code side and it looks like stuff came nicely together to make a solid kernel for everyone to use for a while. And given that this is going to be one of the "Long Term" kernels I end up maintaining for a few years, that's good news for everyone." [Source]

              • #8
                Seems that nothing has changed since a few years ago.

                Back then we did some testing on a KVM host, with guests running Apache web server. For every possible combination of host and guest I/O scheduler, we measured the time from boot until 90% of guests were ready to serve the first webpage. (We found that 90% threshold gave us stable results, because you have random hiccups, or fsck suddenly deciding it is time to check the guest filesystem, etc.)

                noop+deadline combo won hands down.
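                On the pre-blk-mq kernels of that era, such a combo could be pinned at boot with the legacy elevator= kernel parameter (since removed in Linux 5.0); a sketch of the guest side, assuming GRUB:

                ```
                # /etc/default/grub inside each guest (legacy, non-blk-mq kernels only):
                GRUB_CMDLINE_LINUX="elevator=noop"
                # On the host: elevator=deadline; then run update-grub and reboot.
                ```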

                • #9
                  Michael, any chance you will have the Liquorix kernel (https://liquorix.net/) tested alongside in upcoming benchmark articles? It is really easy to install on top of Ubuntu/Debian and is a drop-in replacement for the stock Ubuntu kernel.

                  • #10
                    Unless I'm mistaken, this test does not tell us much about latency, which for many of us is more important than throughput.
                    I think the most important test is this: under heavy load, is the system still responsive enough? And lately that has not been the case on my system using BFQ, which was supposed to be its whole point :/
