BFQ & CFQ Improvements Land In Linux 4.14


  • BFQ & CFQ Improvements Land In Linux 4.14

    Phoronix: BFQ & CFQ Improvements Land In Linux 4.14

    Linus Torvalds has pulled in the block layer updates for the Linux 4.14 kernel merge window...

    http://www.phoronix.com/scan.php?pag...-Block-Updates

  • #2
    When are we getting Ming Lei's patches for blk_mq?

    Comment


    • #3
      Can the BFQ/CFQ update be clarified as to whether it applies to the legacy (is that the right term?) or the blk-mq I/O schedulers? I know BFQ is only mainlined as a blk-mq scheduler; I'm not sure if the non-blk-mq scheduler is still being developed.

      Comment


      • #4
        Originally posted by polarathene View Post
        Can the BFQ/CFQ update be clarified as to whether it applies to the legacy (is that the right term?) or the blk-mq I/O schedulers? I know BFQ is only mainlined as a blk-mq scheduler; I'm not sure if the non-blk-mq scheduler is still being developed.
        Yes, BFQ for single-queue is actively developed; it's called bfq-sq, and Manjaro kernels include it.
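As a quick sanity check, the schedulers a running kernel offers for each device can be read out of sysfs, with the active one shown in brackets. This is a sketch: the sysfs layout is standard, but bfq/bfq-sq will only appear on kernels that actually ship them.

```shell
# Print each block device's available I/O schedulers; the kernel marks
# the active one in [brackets], e.g. "noop deadline [cfq]".
show_schedulers() {
  for f in /sys/block/*/queue/scheduler; do
    [ -e "$f" ] || continue   # no block devices visible (e.g. in a container)
    printf '%s: %s\n' "${f%/queue/scheduler}" "$(cat "$f")"
  done
}

# Extract the active scheduler name from one such line,
# e.g. "noop deadline [cfq]" -> "cfq".
active_sched() {
  printf '%s\n' "$1" | sed 's/.*\[\([^]]*\)\].*/\1/'
}

show_schedulers
```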

        Comment


        • #5
          Has anyone seen a website/analysis that definitively concludes which of these schedulers is best for NVMe drives? Right now I just use deadline, but only for lack of looking for something better. Or maybe with an NVMe drive, the scheduler doesn't really matter anymore?

          Comment


          • #6
            Originally posted by sa666666 View Post
            Has anyone seen a website/analysis that definitively concludes which of these schedulers is best for NVMe drives? Right now I just use deadline, but only for lack of looking for something better. Or maybe with an NVMe drive, the scheduler doesn't really matter anymore?
            If your NVMe is fast enough (there is a very wide spread in performance and price for NVMe, from "like a USB stick" to "blazing fast"), then for very high-speed/low-latency NVMe the scheduler needs to get out of the way more than anything: do basic grouping and fairness, but any cycle spent is a cost you don't really win back. For slower storage, cycles spent on the CPU optimizing can improve the behavior of the NVMe device enough to be a net win.
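The choice is runtime-switchable per device, so it's cheap to experiment. A minimal sketch, assuming the standard sysfs layout (nvme0n1 is an example device name, and writing the file needs root):

```shell
# Set the active I/O scheduler for a block device by writing its name
# into the sysfs "scheduler" file.
# $1 = sysfs block root (normally /sys/block), $2 = device, $3 = scheduler
set_sched() {
  echo "$3" > "$1/$2/queue/scheduler"
}

# On a real system (commented out; needs root):
#   set_sched /sys/block nvme0n1 none         # "get out of the way" for fast NVMe
#   set_sched /sys/block nvme0n1 mq-deadline
```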

            Comment


            • #7
              Originally posted by arjan_intel View Post
              If your NVMe is fast enough (there is a very wide spread in performance and price for NVMe, from "like a USB stick" to "blazing fast"), then for very high-speed/low-latency NVMe the scheduler needs to get out of the way more than anything: do basic grouping and fairness, but any cycle spent is a cost you don't really win back. For slower storage, cycles spent on the CPU optimizing can improve the behavior of the NVMe device enough to be a net win.
              Yes, I should have mentioned the device I have; it's a Samsung 950 Pro, so pretty near the top of the line currently. Do you suggest just sticking with deadline? Note that I haven't noticed any issues; I just want to make sure I'm using the best possible option for my hardware.

              Comment


              • #8
                With NVMe, wouldn't no scheduler ("none") be better?

                Comment


                • #9
                  Originally posted by arjan_intel View Post

                  If your NVMe is fast enough (there is a very wide spread in performance and price for NVMe, from "like a USB stick" to "blazing fast"), then for very high-speed/low-latency NVMe the scheduler needs to get out of the way more than anything: do basic grouping and fairness, but any cycle spent is a cost you don't really win back. For slower storage, cycles spent on the CPU optimizing can improve the behavior of the NVMe device enough to be a net win.
                  I've been told there are some issues even with the very fastest NVMe drives when you're running a database with a lot of sync writes going on; they're supposed to cause latency issues on other reads and writes. The schedulers can help with that by giving more room in the queue to the non-sync reads and writes. My informant said that's why Facebook uses Kyber.

                  Maybe Phoronix could come up with a benchmark that shows what happens if you run a database and a web server on the same system, putting it under heavy transaction load while requesting random small files over HTTP from a dataset bigger than RAM.
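For reference, Kyber's whole tuning interface is two latency targets it tries to honor by throttling other request classes; once the scheduler is active they show up under the device's iosched directory. A sketch, assuming the standard sysfs paths (nvme0n1 is an example device name; the 2 ms / 10 ms figures are the kernel defaults as I understand them):

```shell
# Read kyber's read/write latency targets (nanoseconds) for a device.
# $1 = sysfs block root (normally /sys/block), $2 = device name
kyber_targets() {
  printf '%s %s\n' \
    "$(cat "$1/$2/queue/iosched/read_lat_nsec")" \
    "$(cat "$1/$2/queue/iosched/write_lat_nsec")"
}

# On a real system (commented out; needs root):
#   echo kyber | sudo tee /sys/block/nvme0n1/queue/scheduler
#   kyber_targets /sys/block nvme0n1    # e.g. "2000000 10000000"
```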

                  Comment


                  • #10
                    My main use is C++ software development. All write-heavy directories already use tmpfs (/tmp, /var/log, browser cache, etc.).

                    Comment
