
The Next Linux Kernel Will Bring More Drivers Converted To Use BLK-MQ I/O


    Phoronix: The Next Linux Kernel Will Bring More Drivers Converted To Use BLK-MQ I/O

    More Linux storage drivers have been converted to the "blk-mq" interfaces for the multi-queue block I/O queuing mechanism for the 4.20~5.0 kernel cycle...

    http://www.phoronix.com/scan.php?pag...BLK-MQ-Drivers

  • #2
So Michael, we're going to see a 10-page featured article on floppy performance, right?


    Though now that I think of it... that would actually make for a pretty funny April 1st article.



    • #3
      Will this improve the performance of mechanical SATA disks (HDDs)? Also, in response to schmidtbag, I would say: if it improves floppy performance, that's all well and good; maybe someone somewhere has a 486 or Pentium II and still uses floppies for something. Going off-topic, that is actually something I would be interested in: building a Linux distro light enough to run and be usable on a Pentium II. Most kernel features would need to be disabled or built as modules to reduce RAM and CPU usage. The desktop environment would probably need to be Openbox with very few extras, maybe Dillo for web browsing.
      The entire distro would probably comprise no more than about 120 packages for a fully functioning system, with maybe 96 MB of RAM usage...



      • #4
        I/O was described as a problem several months ago. Has anything been done about it since?



        • #5
          What is the current default for eMMC storage?



          • #6
            Looks like this is a first step toward making BFQ the default scheduler, because it's only available for multi-queue (MQ) devices. Currently, using BFQ requires CONFIG_SCSI_MQ_DEFAULT=y or scsi_mod.use_blk_mq=1 in the boot parameters.
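            For reference, you can check and switch the active I/O scheduler per device via sysfs (device name sda below is just an example; the schedulers listed depend on your kernel config):

            ```shell
            # Show available schedulers for a device; the active one is in brackets,
            # e.g. "mq-deadline kyber [bfq] none" on a blk-mq device
            cat /sys/block/sda/queue/scheduler

            # Switch to BFQ at runtime (only works if the device uses blk-mq)
            echo bfq | sudo tee /sys/block/sda/queue/scheduler

            # Or enable blk-mq for SCSI/SATA globally at boot, by adding this
            # to the kernel command line (e.g. in GRUB_CMDLINE_LINUX):
            #   scsi_mod.use_blk_mq=1
            ```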



            • #7
              Originally posted by mzs.112000
              Will this improve performance of SATA mechanical disks(HDDs)?
              No. blk-mq starts to be relevant for performance when you have multi-queue devices capable of more than a million IOPS. For comparison, a mechanical SATA HD is capable of about 100 IOPS.

              Though on large multi-socket systems you might see a slight decrease in kernel time, due to less lock contention. But it won't make the device itself go faster.

              Also, in response to schmidtbag , I would say, if it improves floppy performance
              Maybe you missed the smiley in the previous post?

              that's all well and good, maybe someone somewhere has a 486 or Pentium II and still uses floppies for something.
              The good thing is that blk-mq shouldn't per se add any additional overhead for low-end hardware.

              Look, the explanation for switching floppy to blk-mq is surely precisely what Michael already speculated, that they are preparing to rip out the old single-queue block interface.




              • #8
                Originally posted by jabl
                No. blk-mq starts to be relevant for performance when you have multi-queue devices capable of more than a million IOPS. For comparison, a mechanical SATA HD is capable of about 100 IOPS.
                Why? I'm rather unconvinced. I know nothing about blk-mq, but I know that the concept of letting the drive decide the ordering, as in native command queuing, is to accommodate bottlenecks in the drive, not the CPU or elsewhere.
                Last edited by andreano; 10-17-2018, 02:26 PM.



                • #9
                  Originally posted by andreano
                  I'm rather unconvinced. I know nothing
                  So you admit you're clueless, yet you don't believe that the experts who have implemented and tested blk-mq might know better?

                  Words fail me...

                  about blk-mq, but I know that the concept of letting the drive decide the ordering, as in native command queuing, is to accommodate bottlenecks in the drive, not the CPU or elsewhere.
                  tl;dr: When you're pushing 1M+ IOPS, the software architecture of the old single-queue block layer limits performance, which is why multi-queue devices appeared on the market and blk-mq was developed. In contrast to the old single-queue architecture, blk-mq has multiple software queues (one per core, or was it one per NUMA domain), which avoids the contention on the single queue lock that limited performance in the old design. These software queues can then feed multiple hardware queues on devices that support them, further improving performance compared to pushing everything through a single hardware queue.

                  For the non-tl;dr version, see http://kernel.dk/blk-mq.pdf
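                  The structure described above can be sketched as a toy model (illustrative Python only, not the kernel's actual data structures): each CPU submits into its own software queue under its own lock, and software queues are statically mapped onto a smaller set of hardware dispatch queues.

                  ```python
                  import threading
                  from collections import deque

                  class ToyBlkMq:
                      """Toy model of blk-mq's queue layout: one software queue
                      per CPU, each with its own lock, mapped onto a (possibly
                      smaller) set of hardware queues."""

                      def __init__(self, num_cpus, num_hw_queues):
                          self.sw_queues = [deque() for _ in range(num_cpus)]
                          self.sw_locks = [threading.Lock() for _ in range(num_cpus)]
                          self.hw_queues = [deque() for _ in range(num_hw_queues)]
                          self.num_hw = num_hw_queues

                      def submit(self, cpu, request):
                          # Each CPU only ever takes its own lock, so submissions
                          # from different CPUs never contend with each other --
                          # unlike a single shared queue with one global lock.
                          with self.sw_locks[cpu]:
                              self.sw_queues[cpu].append(request)

                      def dispatch(self, cpu):
                          # Static mapping of software queues onto hardware queues;
                          # with num_hw == num_cpus the mapping is one-to-one.
                          hw = cpu % self.num_hw
                          with self.sw_locks[cpu]:
                              while self.sw_queues[cpu]:
                                  self.hw_queues[hw].append(self.sw_queues[cpu].popleft())

                  mq = ToyBlkMq(num_cpus=4, num_hw_queues=2)
                  mq.submit(0, "read A")
                  mq.submit(2, "write B")
                  mq.dispatch(0)
                  mq.dispatch(2)
                  # CPUs 0 and 2 both map to hardware queue 0 in this toy mapping
                  print(list(mq.hw_queues[0]))
                  ```

                  The point of the sketch is only the locking structure: the per-CPU locks mean submission is contention-free across cores, which is the part the single-queue layer couldn't do.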



                  • #10
                    Originally posted by jabl
                    So you admit you're clueless
                    Ahem. I'm stating the limits of my knowledge, since I don't know who I am disputing. I think that's good forum etiquette.

                    Thanks to that, your tl;dr was able to clear up exactly what the article didn't. Think about that!
                    Last edited by andreano; 10-17-2018, 04:53 PM.
