Fedora Developers Are Trying To Figure Out The Best Linux I/O Scheduler

  • Fedora Developers Are Trying To Figure Out The Best Linux I/O Scheduler

    Phoronix: Fedora Developers Are Trying To Figure Out The Best Linux I/O Scheduler

    Fedora developers are working on trying to figure out the best default behavior moving forward for their I/O scheduler selection...

    http://www.phoronix.com/scan.php?pag...e-IO-Scheduler

  • #2
    In the end, they'll come up with their own implementation made by the systemd guys.

    • #3
      There was a debate on the kernel mailing list about who should actually decide on the default and what it should be. The kernel developers should, at least in theory, have an informational advantage, since they are the ones developing and testing these schedulers in the first place. But apparently there is no single best solution for all workloads and use cases, and I wonder how this could best be handled.
      Last edited by ms178; 12-14-2018, 01:29 PM.

      • #4
        You can choose that with a udev rule; see: https://wiki.archlinux.org/index.php..._I/O_scheduler
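        For illustration, a minimal rule of the kind that wiki page describes might look like this (the file name 60-ioschedulers.rules and the scheduler choices are my own assumptions; use whichever schedulers your kernel actually ships):

        Code:
        # /etc/udev/rules.d/60-ioschedulers.rules
        # non-rotational devices (SSDs): use a deadline-style scheduler
        ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"
        # rotational devices (classic hard disks): keep cfq
        ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="cfq"
        udev applies the rule whenever the device appears, so the choice persists across reboots without touching the kernel command line.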

        • #5
          It is best to have two default schedulers: one for classic hard disks and one for SSDs.
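          For what it's worth, the kernel already exposes the distinction such a split would key on; whether a device is rotational can be read from sysfs (sda here is just an example device):

          Code:
          cat /sys/block/sda/queue/rotational
          It prints 1 for a classic spinning disk and 0 for an SSD.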

          • #6
            Originally posted by Candy View Post
            In the end, they'll come up with their own implementation made by the systemd guys.
            It will be great.

            • #7
              Or for server/database and desktop workloads.

              • #8
                What are the commands for examining the current configuration, and changing the selection?

                • #9
                  Originally posted by xorbe View Post
                  What are the commands for examining the current configuration, and changing the selection?
                  Code:
                  dmesg | grep sched
                  io scheduler noop registered
                  io scheduler deadline registered
                  io scheduler cfq registered (default)
                  That shows you the ones available (I don't build them all), and the "default" one is the one in use unless overridden.

                  To change I/O schedulers globally, you do it on the kernel command line with the elevator=schedulername parameter.
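                  As a sketch, assuming a GRUB-based setup where the default command line lives in /etc/default/grub, that could look like:

                  Code:
                  # /etc/default/grub
                  GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=deadline"
                  Then regenerate the GRUB configuration (e.g. grub-mkconfig -o /boot/grub/grub.cfg, or update-grub on Debian-family systems) and reboot.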

                  It can also be done per disk; for example, for /dev/sda:

                  Code:
                  echo 'schedulername' > /sys/block/sda/queue/scheduler
                  Where schedulername is one of the available i/o schedulers.

                  P.S. You can query that with:

                  Code:
                  cat /sys/block/sda/queue/scheduler
                  It will return something like:

                  Code:
                  noop deadline (cfq)
                  With the one in brackets being the one in use.
                  Last edited by Grogan; 12-14-2018, 04:50 PM. Reason: Damned bbcode stripping stuff out

                  • #10
                    Originally posted by reavertm View Post
                    Or for server/database and desktop workloads.
                    The thing is not that easy, though...

                    We have...

                    A) one sequential read
                    B) several sequential reads
                    C) one non-sequential read
                    D) many non-sequential reads (that might turn out to be some As and Bs)
                    E) one sequential write (that might invalidate an A, B, C or D)
                    F) several sequential writes
                    G) one non-sequential write
                    H) many non-sequential writes (that might turn out to trigger an E)

                    It's all not that easy...

                    Given what we have today, in both a) the choice of schedulers and b) the tunables provided for each of them, it's already a good experience that we are "complaining" about...
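                    To make the tunables point concrete, each scheduler exposes its knobs under sysfs; sda and the read_expire example below are assumptions, and the directory contents depend on which scheduler is active:

                    Code:
                    # list the tunables of the scheduler currently attached to sda
                    ls /sys/block/sda/queue/iosched/
                    # e.g. for the deadline scheduler: how long a read may wait (in ms) before it expires
                    cat /sys/block/sda/queue/iosched/read_expire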
