Linux 4.12 I/O Scheduler Tests With A HDD & SSD

  • #21
    Originally posted by xrysf03 View Post
    @torsionbar28:
    >"a storage media will deliver its peak throughput with a single large sequential operation."
    >
    Yup - I've measured this myself on a Barracuda 7200.11 a few years ago.


    > "Multi-queue, by definition, does i/o streams in parallel, so by definition, it will deliver markedly lower peak throughput."
    >
    The inclusion of MQ schedulers in this benchmark sounds odd, even uninformed. Multiqueue hardware is rare; maybe some higher-end NVMe SSDs support it. Without HW support, the blk-mq stack probably runs against a single HW queue. Makes me wonder how efficient that is.

    [...]
    I noticed that a Linux guest running on a qemu/kvm virtual machine boots up visibly faster when I have blk-mq enabled on the host. The host has no special hardware. The virtual machine's disk image is saved on a normal SATA SSD.
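    For reference, on kernels of this era blk-mq for SATA devices is typically switched on with the scsi_mod.use_blk_mq boot parameter. A quick way to check the host (sda here is just a placeholder device):

    ```sh
    # Report whether the SCSI layer is routing I/O through blk-mq:
    cat /sys/module/scsi_mod/parameters/use_blk_mq

    # With blk-mq active, the queue lists the MQ schedulers instead
    # of the classic single-queue ones (cfq/deadline/noop):
    cat /sys/block/sda/queue/scheduler

    # To enable it persistently, boot with the kernel parameter
    # scsi_mod.use_blk_mq=1 (e.g. appended to GRUB_CMDLINE_LINUX).
    ```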

    Comment


    • #22
      Another test with BFQ configured the wrong way, i.e., for maximum responsiveness; at the expense, of course, of throughput ...

      Comment


      • #23
        Originally posted by paolo View Post
        Another test with BFQ configured the wrong way, i.e., for maximum responsiveness; at the expense, of course, of throughput ...
        We've both complained about this a couple of times. I think Michael isn't really into changing the default configuration of the software he's testing, although he's done that before, e.g. for video drivers. It's also possible that he doesn't see these posts, of course.

        Comment


        • #24
          Originally posted by GrayShade View Post

          We've both complained about this a couple of times. I think Michael isn't really into changing the default configuration of the software he's testing, although he's done that before, e.g. for video drivers. It's also possible that he doesn't see these posts, of course.
          How does Michael go about it when he does change configuration? Does he have an automated process to make this easy for him, or is he doing it manually, which takes up his time and discourages him from doing it?

          Comment


          • #25
            Originally posted by polarathene View Post
            How does Michael go about it when he does change configuration? Does he have an automated process to make this easy for him, or is he doing it manually, which takes up his time and discourages him from doing it?
            The BFQ tunable is a file under sysfs, so changing it is simply a matter of writing to that file. For a video driver he probably needs to unload the driver and load it again with different parameters, or maybe reboot the box.
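            Concretely, it's just this (a sketch; sda is a placeholder, and depending on the kernel or patch set the scheduler may be registered as bfq or bfq-mq):

            ```sh
            # Make BFQ the active scheduler on the drive under test:
            echo bfq > /sys/block/sda/queue/scheduler

            # Turn off its low-latency heuristics so it optimizes for
            # throughput instead of responsiveness (1 is the default):
            echo 0 > /sys/block/sda/queue/iosched/low_latency
            ```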

            Comment


            • #26
              Originally posted by Ropid View Post

              I noticed that a Linux guest running on a qemu/kvm virtual machine boots up visibly faster when I have blk-mq enabled on the host. The host has no special hardware. The virtual machine's disk image is saved on a normal SATA SSD.
              Interesting, thanks for that note.
              This raises a question on my part: does blk-mq use a different I/O buffering/caching/prefetch approach or amount of memory (the standard block and VM layers have tunable knobs in /proc/sys/vm and elsewhere), or does it possibly disrespect barrier operations :-) i.e., do aggressive seek reordering across barriers, ignore sync(), or some such?

              One thing I noticed a couple of years ago in the standard block+VM layer was that, even with the deadline scheduler, there was a definitive per-transaction timeout on writeback, which under heavier load made the elevator collapse into "first come, first served" after whatever finite timeout value you set in /proc/sys/vm. Once an I/O transaction timed out of the dirty cache, it was shoved into a FIFO queue of "transactions that have timed out and need to be written ASAP", and that was the end of any write-combining and reordering on writeback :-)
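              The timeout I mean should be this pair of sysctls (the values shown are the usual defaults, in centiseconds, not a recommendation):

              ```sh
              # Age after which dirty pages are considered expired and get
              # queued for mandatory writeback (default 3000 = 30 s):
              sysctl vm.dirty_expire_centisecs

              # How often the flusher threads wake up to write out expired
              # dirty data (default 500 = 5 s):
              sysctl vm.dirty_writeback_centisecs

              # Example: push the expiry out to 60 s to give the elevator
              # more room to combine and reorder writes:
              sysctl -w vm.dirty_expire_centisecs=6000
              ```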

              Comment


              • #27
                Originally posted by GrayShade View Post

                The BFQ tunable is a file under sysfs, so it's simply writing to that. For a video driver he probably needs to unload the driver and load it again with different parameters or maybe reboot the box.
                Yeah, but if there are more little tweaks like this, it grows into a list of things to do, and not just for BFQ or I/O schedulers and video drivers. Some benchmarks might suit certain tweaks, other benchmarks different tweaks again, and then you have user requests/suggestions, etc. You see similar requests with filesystem tests, for example. Or if certain distros ship different settings, Michael might want to leave those as-is (defaults), or also do a benchmark pass on the distros with settings tweaked, to attempt a comparison with more similar parameters.

                Maintaining and applying that manually can be annoying, right? If some automation were used for common benchmark tunables and distros, that barrier would be removed for Michael, and he might be happier testing a wider variety of configurations, with no risk of making a mistake applying configs or missing one.

                I've not used PTS yet; I'd assume it doesn't mess with OS configuration like that. Images could be built with selected configs and tested with PTS in an automated way, though? Packer and Ansible, for example, might do it. Or what was Poettering's new tool, casync? It was meant to be able to do something similar, wasn't it?
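                Even without fancy tooling, a wrapper along these lines could do the boring part (purely illustrative; the device name, the profiles, and the PTS invocation are assumptions, not an actual PTS feature):

                ```sh
                #!/bin/sh
                # Hypothetical wrapper: apply a named tunable profile, then
                # run a PTS benchmark. Needs root for the sysfs writes.
                set -e

                DEV=sda          # drive under test (placeholder)
                PROFILE="$1"     # "throughput" or "responsiveness"

                case "$PROFILE" in
                  throughput)
                    echo bfq > "/sys/block/$DEV/queue/scheduler"
                    echo 0   > "/sys/block/$DEV/queue/iosched/low_latency"
                    ;;
                  responsiveness)
                    echo bfq > "/sys/block/$DEV/queue/scheduler"
                    echo 1   > "/sys/block/$DEV/queue/iosched/low_latency"
                    ;;
                  *)
                    echo "unknown profile: $PROFILE" >&2
                    exit 1
                    ;;
                esac

                phoronix-test-suite batch-benchmark pts/fio
                ```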

                Comment


                • #28
                  Originally posted by polarathene View Post
                  Maintaining and applying that manually can be annoying, right? If some automation were used for common benchmark tunables and distros, that barrier would be removed for Michael, and he might be happier testing a wider variety of configurations, with no risk of making a mistake applying configs or missing one.

                  I'm pretty sure that automatically writing a file isn't out of the reach of the current PTS. The test profiles are just a bunch of shell scripts: https://github.com/phoronix-test-sui...9.0/install.sh.
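                  So conceptually, a profile's setup could do something like this (a hypothetical fragment, not taken from any actual profile, and it would need root):

                  ```sh
                  # Hypothetical fragment of a test profile's setup script:
                  # pin down the scheduler configuration before the run starts.
                  if [ -w /sys/block/sda/queue/iosched/low_latency ]; then
                      echo 0 > /sys/block/sda/queue/iosched/low_latency
                  fi
                  ```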

                  Comment


                  • #29
                    Originally posted by paolo View Post
                    Another test with BFQ configured the wrong way, i.e., for maximum responsiveness; at the expense, of course, of throughput ...
                    Why doesn't BFQ have a sane default that isn't max responsiveness? Michael is known for testing with default settings.

                    Comment


                    • #30
                      Originally posted by starshipeleven View Post
                      Why doesn't BFQ have a sane default that isn't max responsiveness? Michael is known for testing with default settings.
                      BFQ is written with interactive usage in mind. Anyone who goes out of their way to apply the BFQ patches and enable it is probably interested in responsiveness, so it's perfectly sane to bias towards that by default.

                      Comment
