Fedora Switching To The BFQ I/O Scheduler For Better Responsiveness & Throughput


  • #31
    Originally posted by hax0r View Post
    Now we need MuQSS as the default CPU scheduler in desktop distros, preferably after Valve takes a good look at it. CFS is for server loads; even Huawei moved away from CFS and reported a boost in UI responsiveness and latency.
    MuQSS lacks support for cgroups, and the author has no intention of supporting them (I think because it conflicted with the scheduler's design?). So I doubt it would become the default, even for desktop distros. It would need to be accepted into mainline as an alternative option before a default is even worth discussing; not many distro maintainers would want the burden of maintaining patches for it. Prior to BFQ becoming available in mainline it wasn't common to see either; I think only Manjaro provided a patched kernel with it enabled by default?

    Besides mainline adoption, isn't MuQSS best suited to the full ck patch set, which IIRC involved certain compile-time kernel settings? So if you switched to another scheduler when it wasn't suitable, it wouldn't be a smooth process?

    Comment


    • #32
      Originally posted by paolo View Post

      Full results with a Samsung SSD 970 PRO here:
      https://algo.ing.unimo.it/people/paolo/BFQ/results.php
      ? I think you misunderstood me.

      The Samsung device is a different beast; it's a great top-tier product for its area of storage. The Intel Optane 900P or 905P is not a traditional NVMe SSD product; it's known for its low latency, and Samsung has a different product to compete with it. Max throughput isn't the important metric for these alternative storage products.

      Perhaps these results/benchmarks will indicate that better?

      https://www.tomshardware.com/news/in...ssd,38987.html

      Phoronix also had some coverage of their own (but I still suggest the Tom's Hardware one):
      https://www.phoronix.com/scan.php?pa...ane-900p&num=3

      Yes, you don't see the Optane 900P in there because it ran too fast! Carrying out all the queries was so quick that the program returned before the Phoronix Test Suite could take an accurate automated measurement; the whole run was less than one second.

      Comment


      • #33
        Originally posted by polarathene View Post

        ? I think you misunderstood me.

        The Samsung device is a different beast; it's a great top-tier product for its area of storage. The Intel Optane 900P or 905P is not a traditional NVMe SSD product; it's known for its low latency, and Samsung has a different product to compete with it. Max throughput isn't the important metric for these alternative storage products.

        Perhaps these results/benchmarks will indicate that better?

        https://www.tomshardware.com/news/in...ssd,38987.html

        Phoronix also had some coverage of their own (but I still suggest the Tom's Hardware one):
        https://www.phoronix.com/scan.php?pa...ane-900p&num=3
        No, sorry, I haven't bought that one yet either. If you have one, would you be willing to run a batch of latency tests on it? It boils down to running the Phoronix version of the start-up time benchmark for each scheduler, or to executing
        git clone https://github.com/Algodev-github/S && S/run_multiple_benchmarks/test_responsiveness.sh
        (the latter option will automatically try all available schedulers)

        Comment


        • #34
          Originally posted by RussianNeuroMancer View Post

          Kernel version is 5.1.19, and I did enable multiqueue schedulers this way.
          Sorry for the late response. As another user has pointed out, bfq is bfq-mq. Kernel > 5.0 removed single-queue anyway, and your kernel parameters are probably not necessary (although if it's working now, no need to mess with it).

          mq-deadline, bfq and none are all mq. none is like an mq-noop: it does no re-ordering and has low overhead, but allows merges. And in my experience it's the best scheduler there is, for my use anyway.
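          For anyone who wants to inspect or switch schedulers themselves, a minimal sketch (the device name nvme0n1 is an assumption; writing the sysfs file requires root, and the change lasts only until reboot):

          ```shell
          # List the schedulers the kernel offers for a device; the active one is in brackets,
          # e.g.: [none] mq-deadline kyber bfq
          cat /sys/block/nvme0n1/queue/scheduler

          # If bfq was built as a module, load it first (requires root).
          sudo modprobe bfq

          # Switch the active scheduler for this boot only.
          echo bfq | sudo tee /sys/block/nvme0n1/queue/scheduler
          ```

          Re-reading the scheduler file afterwards should show bfq in brackets; a udev rule is the usual way to make the choice persistent.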

          Comment


          • #35
            Originally posted by paolo View Post

            No, sorry, I haven't bought that one yet either. If you have one, would you be willing to run a batch of latency tests on it? It boils down to running the Phoronix version of the start-up time benchmark for each scheduler, or to executing
            git clone https://github.com/Algodev-github/S && S/run_multiple_benchmarks/test_responsiveness.sh
            (the latter option will automatically try all available schedulers)
            I don't have one myself yet. Michael does, perhaps he could do a new disk I/O scheduler article, I think it's been a while since the last one.

            Comment


            • #36
              Originally posted by geearf View Post
              bfq in this case is bfq-mq so you're good.
              Originally posted by AndyChow View Post
              As another user has pointed out, bfq is bfq-mq.
              Thanks for the clarification! I guess the mq-deadline name messes things up a bit, since there is no single-queue deadline anymore, right?

              Originally posted by AndyChow View Post
              Kernel > 5.0 removed single-queue anyway, and your kernel parameters are probably not necessary (although if it's working now, no need to mess with it).
              The bfq kernel module does not load unless I add these parameters (though I guess not all three were necessary; maybe just one or two).

              Comment


              • #37
                Originally posted by AndyChow View Post

                Sorry for the late response. As another user has pointed out, bfq is bfq-mq. Kernel > 5.0 removed single-queue anyway, and your kernel parameters are probably not necessary (although if it's working now, no need to mess with it).

                mq-deadline, bfq and none are all mq. none is like an mq-noop: it does no re-ordering and has low overhead, but allows merges. And in my experience it's the best scheduler there is, for my use anyway.
                Are you on an NVMe drive? And, out of nothing but curiosity, what is your usage scenario?

                Comment


                • #38
                  Originally posted by paolo View Post
                  If you have one, would you be willing to run a batch of latency tests on it? It boils down to running the Phoronix version of the start-up time benchmark, for each scheduler, or to executing
                  git clone https://github.com/Algodev-github/S && S/run_multiple_benchmarks/test_responsiveness.sh
                  (the latter option will automatically try all available schedulers)
                  I have one, but your test is scary. Do you have a test which can be run without sudo?

                  Comment


                  • #39
                    Originally posted by pal666 View Post
                    I have one, but your test is scary. Do you have a test which can be run without sudo?
                    What is scary about it or is this a mindless "sudo bad" response? Do you honestly believe that you can change IO schedulers without having elevated privileges?

                    Comment


                    • #40
                      Originally posted by Space Heater View Post
                      What is scary about it or is this a mindless "sudo bad" response?
                      This is a "sudo to run a script that's too complex to quickly verify is bad" response.
                      Originally posted by Space Heater View Post
                      Do you honestly believe that you can change IO schedulers without having elevated privileges?
                      I honestly believe that I can change I/O schedulers myself and then run the script without sudo, or run a script which calls sudo only to change I/O schedulers.
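                      A rough sketch of that split-privilege approach (the device name nvme0n1 and the ./my_benchmark command are hypothetical placeholders; only the scheduler switch goes through sudo):

                      ```shell
                      #!/bin/sh
                      # Hypothetical wrapper: sudo is used only for the privileged
                      # scheduler switch; the benchmark runs as the regular user.
                      DEV=nvme0n1
                      for sched in none mq-deadline bfq; do
                          echo "$sched" | sudo tee "/sys/block/$DEV/queue/scheduler" > /dev/null
                          ./my_benchmark "$sched"   # hypothetical unprivileged workload
                      done
                      ```

                      That way the part running under sudo is a one-line echo anyone can audit, rather than the whole benchmark script.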

                      Comment
