Linux 4.12 I/O Scheduler Benchmarks: BFQ, Kyber, Etc


  • #21
    Originally posted by DrYak View Post
CPU cycles, I/O throughput, etc. are scarce resources.
    Resources are unlimited! everyone knows that by now, no?
    -- the web

Seriously though, choosing the right algorithm for the task at hand is more important than people believe...

    Comment


    • #22
      Originally posted by enihcam View Post
      I thought BFQ was for HDD only, right?
I have a Samsung SSD (850 EVO), and I can tell first-hand that my system is unusable while copying large files under CFQ (the default). With "deadline" it's good enough, and with BFQ it's way better.

      Comment


      • #23
        Originally posted by enihcam View Post
        I thought BFQ was for HDD only, right?
It should be nice for SATA(/AHCI) SSDs, I think. blk-mq should be enough for NVMe SSDs.
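For anyone wondering which scheduler their own devices are actually using, sysfs exposes it per queue. A small sketch (device names like sda or nvme0n1 vary per machine; the scheduler shown in [brackets] is the active one):

```shell
#!/bin/sh
# List the available I/O schedulers for every visible block device.
# The entry in [brackets] is the one currently active for that queue.
found=0
for f in /sys/block/*/queue/scheduler; do
    [ -e "$f" ] || continue
    printf '%s -> %s\n' "${f%/queue/scheduler}" "$(cat "$f")"
    found=1
done
[ "$found" -eq 1 ] || echo "no block devices visible"
```

Note that on 4.12-era kernels the blk-mq schedulers (Kyber, BFQ-mq) only show up for a device if its driver goes through the multiqueue path (e.g. booting with scsi_mod.use_blk_mq=1 for SATA).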

        Comment


        • #24
          Originally posted by Serafean View Post

          Resources are unlimited! everyone knows that by now, no?
          -- the web

Seriously though, choosing the right algorithm for the task at hand is more important than people believe...
          What? Bubble sort ain't good enough for ya?

          :P

          Comment


          • #25
            Originally posted by geearf View Post

blk-mq should be enough for NVMe SSDs.
I am wondering if that applies to an NVMe SSD installed on PCIe 2.0 rather than 3.0. I recently set up a Clover UEFI bootloader for my old Intel Xeon X5650 six-core system, which runs at 4 GHz under water at all times. It is of course a legacy BIOS system, so I disabled all SATA ports, installed Fedora 26 on the GPT NVMe SSD and had me some fun. Fedora 26 runs great as is, but I kept the defaults of ext4 and CFQ for now. Since I have no other storage devices connected, except for USB storage and NAS, I felt like I might be missing something by sticking with these older defaults. I know I am not yet ready to move on from ext4, but what about the scheduler? I am torn over whether I should even be using it still. It seems like I should be using something light, such as Kyber, but with so many options I am just waiting until the questions/answers are clearer.

So, I keep wondering: when will ext4 no longer be good enough for super fast solid state? And when will it be time to move on from CFQ? I keep hearing all this talk about responsiveness, and while yes, I want everything to just work instantly, I also rely heavily on data movement to make me feel whole. Watching Windows move data from this NVMe disk to a USB 3.1 device at mere MB/s was disheartening to say the least. I am hopeful that in the Linux world, working in my Fedora or Arch installs, I won't get the same sad experience. I want the best of both worlds: fast enough throughput and super responsiveness. Is the "none" scheduler for me, or should I try BFQ or blk-mq? 4.12 is mere days away, so I keep wondering. lol

EDIT: After further research and testing I now realize that "none" was set as the default. I had forgotten to change the "sd*" to the actual NVMe device, so I was reading the default scheduler of whatever other I/O devices were connected. Noobish, but this is the first NVMe device I have had, so I simply forgot the name would change to something like "nvme01" or whatever it is.
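A side note for anyone else landing here: an `echo` into sysfs does not survive a reboot, so if you settle on a scheduler the usual trick is a udev rule. Purely illustrative (the file name and device matches are assumptions; adjust for your distro, and the named schedulers must be built into your kernel):

```ini
# /etc/udev/rules.d/60-ioscheduler.rules  (example path)
# Rotational/SATA disks -> bfq, NVMe -> kyber (or "none")
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="bfq"
ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="kyber"
```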
            Last edited by SkOrPn; 11 August 2017, 03:00 PM.

            Comment


            • #26
              I'd really like to see BFQ vs Kyber vs CFQ benchmarks for latency / responsiveness. Kyber seems smaller and has good throughput, but I wonder which one is better for responsiveness (regular desktop usage and media).

              Comment


              • #27
We could measure latency/responsiveness with something like ping while running other workloads, like compiling GCC or the Linux kernel.
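In that spirit, here is a very crude sketch of the idea with stock tools. A dedicated tool like ioping (which uses O_DIRECT and reports percentiles) would do this properly; this only times a single small read, and page-cache hits will make warm reads look unrealistically fast:

```shell
#!/bin/sh
# Crude single-read latency probe: create a small file, then time one
# 4k read of it while your background workload (compile, copy, ...) runs.
dd if=/dev/urandom of=/tmp/latprobe bs=4k count=1 2>/dev/null
start=$(date +%s%N)
dd if=/tmp/latprobe of=/dev/null bs=4k count=1 2>/dev/null
end=$(date +%s%N)
echo "one 4k read took $(( (end - start) / 1000000 )) ms"
rm -f /tmp/latprobe
```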

                Comment


                • #28
System responsiveness does not just matter on the desktop. Servers that run lots of virtual machines, serve network drives, diskless clients, that type of stuff, care a lot about resource sharing. If a single virtual machine can freeze the whole host under heavy disk I/O, that's a serious problem!

I have had the problem where large file copies on the server caused diskless clients to drop out and corrupted their filesystems. Very bad! I am currently looking at the latest schedulers to see whether I can mitigate the problem.
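One complementary angle, beyond picking a scheduler: cap the noisy service's share of the disk via cgroup resource controls instead of relying on fairness alone. Assuming the host runs the VM (or copy job) as a systemd service and the kernel's proportional I/O weighting is in effect (CFQ, or BFQ on blk-mq), a drop-in like this would deprioritize it; the unit name is made up:

```ini
# /etc/systemd/system/heavy-vm.service.d/io.conf  (hypothetical unit)
[Service]
# Default weight is 100 on a 1-10000 scale; lower it relative to other units.
IOWeight=50
# Or hard-cap throughput on a specific device:
# IOReadBandwidthMax=/dev/sda 50M
```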

                  Comment
