Fedora Switching To The BFQ I/O Scheduler For Better Responsiveness & Throughput


  • #21
    Originally posted by Rallos Zek View Post
    BFQ is still, after many years, prone to slow performance and system lockups. It's even worse on HDDs. Stick with Deadline or Noop for your health.
    You'll need some evidence for this. BFQ has been mainline for a while now and nobody is reporting lockups. Maybe that's because BFQ is stable?

    Comment


    • #22
      This comes after a systemd proposal to switch to BFQ as the default scheduler. But systemd developers decided to leave this as a downstream decision.
      How generous of them.

      Comment


      • #23
        I think that, for now, the none scheduler is a better solution... but since you can switch it on the fly, we could offer that as an option, with one default for HDDs and another for SSDs/flash, as sketched below. https://www.netext73.pl/
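
        For illustration only, here is a sketch of what such a split could look like as a udev rule (the file name and match patterns are assumptions on my part, not something from this thread):

            # /etc/udev/rules.d/60-ioschedulers.rules (hypothetical file name)
            # Rotational drives (HDDs): use bfq
            ACTION=="add|change", KERNEL=="sd[a-z]*", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
            # Non-rotational drives (SSD/flash): use none
            ACTION=="add|change", KERNEL=="sd[a-z]*|nvme[0-9]*n[0-9]*", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"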

        Comment


        • #24
          Originally posted by AndyChow View Post

          https://wiki.archlinux.org/index.php..._I/O_scheduler

          Just make a udev rule and specify that the device has to have a name of the "mmcblk" type.
          The problem is that /sys/block/mmcblk1/queue/scheduler lists just mq-deadline, bfq and none; bfq-mq is nowhere to be seen. I guess this udev rule can't enable a scheduler that isn't available for a given device.
          Last edited by RussianNeuroMancer; 08-23-2019, 06:01 AM.
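
          For reference, a minimal sketch of such a rule, assuming bfq is the scheduler you want and using the mmcblk1 device named above (the rules file name is hypothetical):

              # /etc/udev/rules.d/61-mmc-bfq.rules (hypothetical file name)
              # Match eMMC/SD block devices and set their scheduler to bfq.
              # Only schedulers listed in /sys/block/<dev>/queue/scheduler can be selected.
              ACTION=="add|change", KERNEL=="mmcblk[0-9]*", ATTR{queue/scheduler}="bfq"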

          Comment


          • #25
            Originally posted by Templar82 View Post
            How generous of them.
            I don't know if you've read the relevant systemd GitHub pull request with comments from the systemd and kernel I/O maintainers, but it seemed like quite a reasonable and rational discussion to me?

            Comment


            • #26
              Originally posted by RussianNeuroMancer View Post

              The problem is that /sys/block/mmcblk1/queue/scheduler lists just mq-deadline, bfq and none; bfq-mq is nowhere to be seen. I guess this udev rule can't enable a scheduler that isn't available for a given device.
              Depending on your kernel version, you might need to enable the multiqueue schedulers via the scsi_mod.use_blk_mq=1 kernel command line argument.
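
              On a GRUB-based system that usually means appending the argument to the kernel command line and regenerating the config. A sketch, assuming Fedora's grub2 tooling (the grub.cfg path differs on EFI systems, and on recent kernels the parameter may be a no-op since blk-mq is the only remaining path):

                  # /etc/default/grub: append to the existing line
                  GRUB_CMDLINE_LINUX="... scsi_mod.use_blk_mq=1"

                  # Regenerate the config (BIOS path shown)
                  sudo grub2-mkconfig -o /boot/grub2/grub.cfg

                  # After a reboot, confirm the argument was passed
                  cat /proc/cmdline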

              Comment


              • #27
                Originally posted by ermo View Post
                Depending on your kernel version, you might need to enable the multiqueue schedulers via the scsi_mod.use_blk_mq=1 kernel command line argument.
                The kernel version is 5.1.19, and I did enable the multiqueue schedulers that way.

                Comment


                • #28
                  Originally posted by RussianNeuroMancer View Post

                  The problem is that /sys/block/mmcblk1/queue/scheduler lists just mq-deadline, bfq and none; bfq-mq is nowhere to be seen. I guess this udev rule can't enable a scheduler that isn't available for a given device.
                  bfq in this case is bfq-mq, so you're good.
                  The fact that you have mq-deadline proves that you're already on blk-mq, and since you can't be on the multiqueue and single-queue paths at the same time, you know you're good.
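
                  A quick way to double-check from the shell; the scheduler in brackets is the active one, and you can also switch it on the fly (not persistent across reboots), e.g. for the mmcblk1 device mentioned above:

                      cat /sys/block/mmcblk1/queue/scheduler
                      #   [mq-deadline] bfq none
                      echo bfq | sudo tee /sys/block/mmcblk1/queue/scheduler
                      cat /sys/block/mmcblk1/queue/scheduler
                      #   mq-deadline [bfq] none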

                  Comment


                  • #29
                    Originally posted by paolo View Post

                    I have proposed leaving NVMe out for the moment, as a precaution. The idea is to add NVMe as well if everything goes well with this preliminary step.
                    Have you tried BFQ with an Intel 900P/905P SSD? Do you know if there is any benefit from using BFQ on that kind of device? Or since the latency is so low already, is there no point?

                    Comment


                    • #30
                      Originally posted by polarathene View Post

                      Have you tried BFQ with an Intel 900P/905P SSD? Do you know if there is any benefit from using BFQ on that kind of device? Or since the latency is so low already, is there no point?
                      Full results with a Samsung SSD 970 PRO here:
                      https://algo.ing.unimo.it/people/paolo/BFQ/results.php

                      Comment
