The Next Linux Kernel Will Bring More Drivers Converted To Use BLK-MQ I/O
More Linux storage drivers have been converted to the "blk-mq" interfaces of the multi-queue block I/O queuing mechanism for the 4.20~5.0 kernel cycle.
Blk-mq is capable of delivering much better performance with modern storage devices -- namely NVMe PCI Express SSDs but also SCSI drives. This code, which has been part of the Linux kernel for the past few years, allows mapping I/O to multiple queues and distributing the work across multiple CPU threads, thus scaling better with today's multi-core servers, while also supporting the multiple hardware queues of capable devices.
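For readers curious what that looks like at the driver level, below is a minimal sketch (not taken from any particular driver) of how a block driver plugs into blk-mq: it registers a tag set describing its hardware queues and a queue_rq() callback that blk-mq invokes, potentially from many CPUs in parallel. The "mydrv" names and the queue depth/count values are hypothetical placeholders, and exact fields and flags vary across kernel versions.

```c
#include <linux/blk-mq.h>
#include <linux/blkdev.h>
#include <linux/err.h>

/* Called by blk-mq with a prepared request; may run on any CPU. */
static blk_status_t mydrv_queue_rq(struct blk_mq_hw_ctx *hctx,
				   const struct blk_mq_queue_data *bd)
{
	struct request *rq = bd->rq;

	blk_mq_start_request(rq);
	/* ... hand the request to the hardware here ... */
	blk_mq_end_request(rq, BLK_STS_OK);	/* or complete asynchronously later */
	return BLK_STS_OK;
}

static const struct blk_mq_ops mydrv_mq_ops = {
	.queue_rq	= mydrv_queue_rq,
};

static struct blk_mq_tag_set mydrv_tag_set;

/* Register the tag set (hardware queues) and create the request queue. */
static int mydrv_setup_queue(struct request_queue **q)
{
	int ret;

	mydrv_tag_set.ops		= &mydrv_mq_ops;
	mydrv_tag_set.nr_hw_queues	= 4;		/* hypothetical: device with 4 HW queues */
	mydrv_tag_set.queue_depth	= 128;
	mydrv_tag_set.numa_node		= NUMA_NO_NODE;
	mydrv_tag_set.flags		= BLK_MQ_F_SHOULD_MERGE;

	ret = blk_mq_alloc_tag_set(&mydrv_tag_set);
	if (ret)
		return ret;

	*q = blk_mq_init_queue(&mydrv_tag_set);
	if (IS_ERR(*q)) {
		blk_mq_free_tag_set(&mydrv_tag_set);
		return PTR_ERR(*q);
	}
	return 0;
}
```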
Key device drivers like NVMe, VirtIO, and scsi_mq have already supported the multi-queue block I/O code for quite some time (going back to late Linux 3.x releases), while for the Linux 4.20~5.0 release a number of the smaller drivers are being ported over.
Jens Axboe and Omar Sandoval -- both working for Facebook -- have been converting many of the remaining drivers to use blk-mq. The latest drivers being ported include the sx8, z2ram, gdrom, floppy, ataflop, amiflop, swim3, swim, mtd_blkdevs, xsysace, paride, ps3disk, um, and aoe drivers.
Yes, even the original floppy disk driver, dating back to Linus Torvalds' code in 1991, now supports the blk-mq interfaces. Within that floppy driver code is a funny original comment from Torvalds from the kernel's early days: "This file is certainly a mess. I've tried my best to get it working, but I don't like programming floppies, and I have only one anyway."
Converting these mostly older drivers to use blk-mq generally takes only dozens of lines of code per driver. The latest activity can be found in the linux-block tree's for-next branch ahead of the Linux 4.20~5.0 kernel cycle.
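As a hedged illustration of why these conversions stay small, here is a before/after sketch of what such a change typically looks like for a simple single-queue driver. The "olddrv" names are hypothetical and the details differ per driver; the main shift is from a request_fn that pulls requests off the queue itself to a queue_rq() callback that blk-mq calls with one prepared request at a time.

```c
#include <linux/blk-mq.h>
#include <linux/blkdev.h>

/*
 * Legacy path: the request_fn is invoked with the queue lock held and
 * fetches requests from the queue itself.
 * Setup looked roughly like: q = blk_init_queue(olddrv_request_fn, &olddrv_lock);
 */
static void olddrv_request_fn(struct request_queue *q)
{
	struct request *rq;

	while ((rq = blk_fetch_request(q)) != NULL) {
		/* ... perform the transfer ... */
		__blk_end_request_all(rq, BLK_STS_OK);
	}
}

/*
 * blk-mq path: blk-mq hands over one request at a time; for old
 * single-queue hardware the tag set simply keeps nr_hw_queues = 1,
 * with the queue created via blk_mq_alloc_tag_set() + blk_mq_init_queue().
 */
static blk_status_t olddrv_queue_rq(struct blk_mq_hw_ctx *hctx,
				    const struct blk_mq_queue_data *bd)
{
	blk_mq_start_request(bd->rq);
	/* ... perform the transfer ... */
	blk_mq_end_request(bd->rq, BLK_STS_OK);
	return BLK_STS_OK;
}
```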
With blk-mq being quite mature these days and the remaining drivers getting converted over, it will be interesting to see whether the legacy I/O code path gets removed in an upcoming Linux kernel release. This next kernel release is also (re)enabling run-time power management under blk-mq.