NVMe VFIO Mediated Device Support Being Hacked On For Lower Latency Storage In VMs

    Phoronix: NVMe VFIO Mediated Device Support Being Hacked On For Lower Latency Storage In VMs

    Maxim Levitsky of Red Hat sent out a "request for comments" patch series this week introducing NVMe VFIO mediated storage device support for the Linux kernel...


  • #2
    Hmm, this looks interesting. With NVMe I have the option to PCIe-passthrough the drive, but that works only for the whole drive and precludes its use by the host (not a big concern for my use case at the moment, but still). With KVM I generally prefer to use LVM and create logical volumes for the disk images. Would this let me expose just a single LV residing on an NVMe drive to the guest as an independent NVMe drive? I took a glance at the announcement and also the USENIX paper it refers to (the link appears to be broken), but I cannot find any details on how the host drive is presented to the guest. Does anyone here know any details?

    • #3
      Originally posted by kobblestown:
      Hmm, this looks interesting. With NVMe I have the option to PCIe-passthrough the drive, but that works only for the whole drive and precludes its use by the host (not a big concern for my use case at the moment, but still). With KVM I generally prefer to use LVM and create logical volumes for the disk images. Would this let me expose just a single LV residing on an NVMe drive to the guest as an independent NVMe drive? I took a glance at the announcement and also the USENIX paper it refers to (the link appears to be broken), but I cannot find any details on how the host drive is presented to the guest. Does anyone here know any details?
      Hi!
      The paper is at

      I added ',' at the end of the link in the mail by mistake, sorry for the inconvenience.

      I'm afraid that currently my driver only supports passing whole physical partitions through as the virtual NVMe drive.
      My code reads the partition table of the physical drive and uses it to do a lightweight translation of the NVMe IO commands
      the guest sends, dispatching them straight to an IO queue of the host NVMe drive that is reserved in advance for that guest.

      The idea of my driver is to have as little software intervention in the IO path as possible to reduce latency, so I don't even use the block layer.
      However, I could add the ability for the user to specify an arbitrary interval map for the translation instead of using partitions, which could be supplied, for instance, by the LVM userspace tools.
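
      To make the translation step concrete, here is a minimal userspace sketch of the idea, assuming a simple extent table; the structure and function names are illustrative and are not taken from the actual RFC patch series:

      /* Illustrative sketch only: a userspace model of the guest-LBA to
       * host-LBA translation described above. The names and layout here are
       * assumptions, not code from the actual RFC patch series. */
      #include <stddef.h>
      #include <stdint.h>
      #include <stdio.h>

      /* One contiguous region of the host drive exposed to the guest, e.g. a
       * physical partition read from the partition table, or (hypothetically)
       * an arbitrary interval supplied by the LVM userspace tools. */
      struct lba_extent {
          uint64_t guest_start; /* first LBA as seen by the guest namespace */
          uint64_t host_start;  /* corresponding LBA on the physical drive  */
          uint64_t nr_blocks;   /* length of the extent in blocks           */
      };

      /* Translate the starting LBA of a guest IO command. Returns 0 and fills
       * *host_lba on success, or -1 if the access falls outside the mapped
       * extents (such a command would have to be rejected). */
      static int translate_lba(const struct lba_extent *map, size_t n,
                               uint64_t guest_lba, uint32_t nr_blocks,
                               uint64_t *host_lba)
      {
          for (size_t i = 0; i < n; i++) {
              const struct lba_extent *e = &map[i];
              if (guest_lba >= e->guest_start &&
                  guest_lba + nr_blocks <= e->guest_start + e->nr_blocks) {
                  *host_lba = e->host_start + (guest_lba - e->guest_start);
                  return 0;
              }
          }
          return -1;
      }

      int main(void)
      {
          /* Hypothetical map with a single partition: the guest sees LBAs
           * 0..999999, which live at host LBA 2048 onwards. */
          struct lba_extent map[] = {
              { .guest_start = 0, .host_start = 2048, .nr_blocks = 1000000 },
          };
          uint64_t host;

          if (translate_lba(map, 1, 4096, 8, &host) == 0)
              printf("guest LBA 4096 -> host LBA %llu\n",
                     (unsigned long long)host);
          return 0;
      }

      In the real driver this lookup would of course happen in kernel context on the submission path, and a non-contiguous mapping (for example one supplied via an interval map) would simply mean more than one extent in the table.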

      • #4
        Hi Maxim! No worries about the link - it made me search for the paper and I stumbled upon the complete proceedings of the USENIX Annual Technical Conference. Nerdgasm!

        Now, on topic: thanks for engaging with my question. While I very much like the flexibility that LVM provides, it does present a bit of a complication here, because logical volumes might not be contiguous. It's probably fine to insist on physical partitions, since those are guaranteed to be contiguous.

        Cheers!

        • #5
          This NVMe VFIO mediated device support would allow virtualized guests to run their unmodified/standard NVMe device drivers, including the Windows drivers, while still allowing the NVMe device to be shared between the host and guest.
          If this means out-of-the-box support for a Hackintosh VM, I am super excited for it!
