VirtIO-IOMMU Comes To x86 With Linux 5.14

  • VirtIO-IOMMU Comes To x86 With Linux 5.14

    Phoronix: VirtIO-IOMMU Comes To x86 With Linux 5.14

    The VirtIO-IOMMU driver now works on x86/x86_64 hardware with the Linux 5.14 kernel...


  • #2
    Typos:

    Originally posted by phoronix View Post
    for describing the relation between virtio-iommmu and endpoints..

    • #3
      Can someone explain what this is for like I'm five? Is this about faking IOMMU groups and passing through devices on motherboards that don't split up PCIe lanes the way you want?

      • #4
        Originally posted by MaxToTheMax View Post
        Can someone explain what this is for like I'm five? Is this about faking IOMMU groups and passing through devices on motherboards that don't split up PCIe lanes the way you want?
        I'm puzzled by this too. So far, I understand that this simply abstracts the physical IOMMU and creates a virtual one, not too dissimilar to virtual networking but for PCIe.

        I am not certain how this will be useful in the future, but it could be useful for VM PCIe management/hotplug, or maybe for assigning a single device to multiple VMs.
        (I'm partly guessing here.)

        • #5
          Because someone can't be bothered to google and only quotes the patches verbatim, here's the reason the IOMMU on APUs with Stoney Ridge GPUs was disabled:

          Basically, the APU's built-in GPU sees too much latency when accessing system DRAM through the IOMMU, causing flickering, so the kernel devs shut the IOMMU off. It appears to be turned off in Windows too, but it's not like Windows cares about security or virtualization all that much anyway.

          If you don't care about the flickering (e.g., you have a dGPU or are headless), this would let you turn the IOMMU back on.
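
          A quick way to sanity-check whether the IOMMU is actually active is to see whether /sys/kernel/iommu_groups gets populated. A minimal sketch in Python (it only reads sysfs, nothing virtio-iommu specific):

          # List IOMMU groups and their member devices; an empty
          # /sys/kernel/iommu_groups means the IOMMU is off or unsupported.
          from pathlib import Path

          groups = sorted(Path("/sys/kernel/iommu_groups").glob("[0-9]*"),
                          key=lambda p: int(p.name))
          if not groups:
              print("No IOMMU groups found")
          for group in groups:
              devices = sorted(d.name for d in (group / "devices").iterdir())
              print(f"group {group.name}: {' '.join(devices)}")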

          • #6
            An IOMMU simply maps device addresses to physical addresses in main memory.

            A virt-IOMMU, I would assume, allows you to map many virtual device types and their associated addresses to main memory.

            If you have different CPU architectures that map devices into main memory differently, you should be able to have them co-exist virtually on the same host regardless of what arch the host is.

            • #7
              IOMMUs are used to defend against Thunderstrike-like attacks by restricting where devices can write in memory. If you're using a VM for isolation from potential adversaries, and you're handing it Thunderbolt ports, it might be worthwhile to provide it with the means to defend itself. It may even be useful in the face of GPU trickery or other attacks that leverage DMA-capable hardware provided to the VM.

              This ALSO has the likely advantage of making virtualized hardware play nicer with nested VMs.
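
              For what it's worth, a guest can check per device whether it is actually behind an IOMMU by looking for the iommu_group link in sysfs. A rough sketch in Python (plain sysfs reads, no special tooling assumed):

              # Flag PCI devices that are not in any IOMMU group; those
              # can DMA to memory without any remapping or restriction.
              from pathlib import Path

              for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
                  group = dev / "iommu_group"
                  if group.is_symlink():
                      print(f"{dev.name}: iommu group {group.resolve().name}")
                  else:
                      print(f"{dev.name}: NOT behind an IOMMU")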

              • #8
                Originally posted by edwaleni View Post
                An IOMMU simply maps device addresses to physical addresses in main memory.

                A virt-IOMMU, I would assume, allows you to map many virtual device types and their associated addresses to main memory.

                If you have different CPU architectures that map devices into main memory differently, you should be able to have them co-exist virtually on the same host regardless of what arch the host is.
                CPU arch doesn't matter. PCIe is PCIe, whether you're on ARM, x86, or POWER.

                • #9
                  Originally posted by mppix View Post

                  I'm puzzled by this too. So far, I understand that this simply abstracts the physical IOMMU and creates a virtual one, not too dissimilar to virtual networking but for PCIe.

                  I am not certain how this will be useful in the future, but it could be useful for VM PCIe management/hotplug, or maybe for assigning a single device to multiple VMs.
                  (I'm partly guessing here.)
                  Originally posted by MaxToTheMax View Post
                  Can someone explain what this is for like I'm five? Is this about faking IOMMU groups and passing through devices on motherboards that don't split up PCIe lanes the way you want?
                  No, not quite. In essence it is a proxy; it pretty much exists to add IOMMU support in guests. This is great for security in the guest, as well as nested virtualization. This is NOT an ACS override alternative, to my knowledge.

                  EDIT: I am unsure, but this MAY be able to provide additional security for the host when using ACS overrides. That is conjecture, but hopefully so.
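
                  For context, on the host side this would just be another device handed to the guest. Something along these lines (an untested sketch, assuming a QEMU build whose q35 machine supports the virtio-iommu-pci device; guest.qcow2 is a placeholder image):

                  # Untested sketch: boot a q35 guest with a paravirtualized IOMMU.
                  # The guest side is the virtio-iommu driver (CONFIG_VIRTIO_IOMMU),
                  # which is what Linux 5.14 makes usable on x86.
                  import subprocess

                  subprocess.run([
                      "qemu-system-x86_64",
                      "-machine", "q35,accel=kvm",
                      "-m", "4G",
                      "-device", "virtio-iommu-pci",           # paravirt IOMMU exposed to the guest
                      "-drive", "file=guest.qcow2,if=virtio",  # placeholder disk image
                  ], check=True)
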
                  Last edited by Quackdoc; 11 July 2021, 11:13 PM.

                  • #10
                    Originally posted by Quackdoc View Post

                    No, not quite. In essence it is a proxy; it pretty much exists to add IOMMU support in guests. This is great for security in the guest, as well as nested virtualization. This is NOT an ACS override alternative, to my knowledge.

                    EDIT: I am unsure, but this MAY be able to provide additional security for the host when using ACS overrides. That is conjecture, but hopefully so.

                    Great explanation, thanks!
                    Last edited by MaxToTheMax; 11 July 2021, 11:26 PM.
