QEMU 1.5 Supports VGA Passthrough, Better USB 3.0


  • #31
    Originally posted by GreatEmerald View Post
    Blindly? Again, you didn't provide any reason for that to be the case. And I don't have such a system (the one Core 2 I do have doesn't have VT-d to begin with). What philip550c says makes sense, though, in that some motherboards have buggy firmware that doesn't actually enable the feature. But it doesn't mean that there are different versions of VT-d, some of which are not IOMMU.
    I, as well as others, have provided a reason. Just because your CPU supports the IOMMU, it doesn't mean the chipset or motherboard does, and just because your motherboard supports it, it doesn't mean the CPU does. Think about it like this: if you have a chipset that supports three PCIe x16 lanes but a motherboard with only one slot, does that mean you can do CrossFire/SLI? No, you can't. IOMMU support works the same way.

    Note that on most boards with IOMMU support, it is a BIOS option. If your i5 board's BIOS does not have an explicit option to enable/disable the IOMMU or VT-d, you can't do GPU passthrough.
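    If you want to sanity-check this from Linux rather than trusting the BIOS menu, something along the lines of the Python sketch below works. It only reads standard sysfs/ACPI paths and assumes an Intel system booted with intel_iommu=on; if no IOMMU groups appear, one of the pieces (CPU, board, BIOS option, kernel parameter) isn't in place.

#!/usr/bin/env python3
"""Rough check: is VT-d/the IOMMU actually active on this host?

Assumes an Intel system and a kernel booted with intel_iommu=on;
AMD systems expose an IVRS table instead of DMAR. Only standard
sysfs/ACPI paths are read, nothing is modified.
"""
from pathlib import Path

def dmar_table_present():
    # Firmware that exposes VT-d publishes an ACPI DMAR table.
    return Path("/sys/firmware/acpi/tables/DMAR").exists()

def iommu_group_count():
    # /sys/kernel/iommu_groups is only populated once the kernel has
    # actually enabled the IOMMU (CPU + chipset + BIOS + kernel arg).
    groups = Path("/sys/kernel/iommu_groups")
    return sum(1 for _ in groups.iterdir()) if groups.is_dir() else 0

if __name__ == "__main__":
    print("DMAR ACPI table present:", dmar_table_present())
    count = iommu_group_count()
    print("IOMMU groups:", count)
    if count == 0:
        print("No IOMMU groups - passthrough won't work until the CPU, "
              "board, BIOS option and kernel parameter all agree.")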



    • #32
      That's exactly what I was saying. Yes, both the CPU and the motherboard have to support VT-d, and it has to be enabled in the firmware. And the firmware must not be broken. That is definitely true. But if these conditions are satisfied, then there shouldn't be any further problems with it.



      • #33
        Hmm, I did a quick test in VirtualBox (which also has PCI passthrough, though it's probably not meant to be used for VGA passthrough just yet). The VM did detect my card correctly, but couldn't start it, saying that there was no monitor attached. Which is fair enough, because none was attached (I only have one monitor here), but even if one was, it probably wouldn't change a whole lot, since it still wouldn't be outputting to the other card.

        That's the confusing part for me. How is it supposed to draw things? Does it output the entire VM to the dedicated card, so it's only viewable when something is plugged in there, or does it work like Bumblebee, using the dedicated card for processing and the integrated one for displaying things? Do I need the NVIDIA module to be loaded on the host or not?

        EDIT: Looks like QEMU figures out that it has to output to the card it owns, while the VM window on the host's card just shows a black screen. At least according to this, which is another, more recent guide on how to do it:
        https://bbs.archlinux.org/viewtopic.php?id=162768
        Last edited by GreatEmerald; 21 May 2013, 05:31 PM.



        • #34
          Originally posted by GreatEmerald View Post
           Hmm, I did a quick test in VirtualBox (which also has PCI passthrough, though it's probably not meant to be used for VGA passthrough just yet). The VM did detect my card correctly, but couldn't start it, saying that there was no monitor attached. Which is fair enough, because none was attached (I only have one monitor here), but even if one was, it probably wouldn't change a whole lot, since it still wouldn't be outputting to the other card.

           That's the confusing part for me. How is it supposed to draw things? Does it output the entire VM to the dedicated card, so it's only viewable when something is plugged in there, or does it work like Bumblebee, using the dedicated card for processing and the integrated one for displaying things? Do I need the NVIDIA module to be loaded on the host or not?

           EDIT: Looks like QEMU figures out that it has to output to the card it owns, while the VM window on the host's card just shows a black screen. At least according to this, which is another, more recent guide on how to do it:
          https://bbs.archlinux.org/viewtopic.php?id=162768
           The passed-through GPU's output is available when you plug a monitor into one of its output ports, so there is no need for explicit support from the host or guest OS. In fact, if you don't have a secondary monitor to plug into the passed-through GPU, it is difficult to get any output from the VM at all.

           This kind of setup works best for desktop computers where each GPU has at least one dedicated output, but it doesn't lend itself well to laptops. I have a Clevo P150EM laptop with an i7-3720QM and an AMD 7970M. VGA passthrough worked fine and I was able to run high-end demos in the Windows VM, but I had to access the VM via remote desktop (Splashtop in this case), so as you can guess, latency becomes an issue at resolutions above 720p.

           I've been thinking about it a lot, even about doing some hardware hackery to rewire the AMD GPU's output to one of the connectors currently wired to the Intel GPU, but that's really risky.
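           For what it's worth, the guest side of such a setup boils down to roughly the sketch below. The PCI address, memory size and disk image are placeholders, and the vfio-pci/x-vga=on combination is the approach from the Arch guide linked above, so treat this as an outline rather than an exact command line:

#!/usr/bin/env python3
"""Outline of a QEMU invocation for VGA passthrough via vfio-pci.

GPU_ADDR and DISK are placeholders; x-vga=on is the experimental
vfio-pci option described in the Arch guide linked above.
"""
import subprocess

GPU_ADDR = "01:00.0"    # lspci address of the card handed to the guest (placeholder)
DISK = "windows.img"    # guest disk image (placeholder)

cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",
    "-m", "4096",
    "-cpu", "host",
    "-vga", "none",                                     # no emulated VGA; the real card does the output
    "-device", "vfio-pci,host=%s,x-vga=on" % GPU_ADDR,  # hand the physical GPU to the guest
    "-drive", "file=%s,format=raw" % DISK,
]

subprocess.run(cmd, check=True)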



          • #35
            Originally posted by kobblestown View Post
            I have to concede here. On closer inspection, VT-d is more common than I thought. But the later posts show that it's not so simple. I still maintain that it's very hard to pick a CPU/motherboard combo and be reasonably sure in advance that it will work. And I wouldn't trust it on a non-Xeon setup. Even on Xeon, I wonder whether there's any difference between the E3, E5 and E7 lines WRT the VT-d feature set.
            All the Xeon setups I have (E3's, 55xx's, 56xx's and E5's) have full VT-d support - at least all the ones using Intel boards.
            Keep in mind that I've only physically tested it on a 5680 machine.

            - Gilboa
            oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
            oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
            oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
            Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.



            • #36
              Originally posted by gilboa View Post
              All the Xeon setups I have (E3's, 55xx's, 56xx's and E5's) have full VT-d support - at least all the ones using Intel boards.
              Keep in mind that I've only physically tested it on a 5680 machine.
              However (and I quote from the Intel® Xeon® Processor E5-1600/E5-2600/E5-4600 Product Families Datasheet):

              The processor supports the following Intel VT Processor Extensions features:
              • Large Intel VT-d Pages
                - Adds 2 MB and 1 GB page sizes to Intel VT-d implementations
                - Matches current support for Extended Page Tables (EPT)
              • Ability to share the CPU's EPT page table (with super-pages) with Intel VT-d
              • Benefits:
                - Less memory footprint for I/O page tables when using super-pages
                - Potential for improved performance: shorter page walks allow hardware optimization for the IOTLB
              • Transition latency reductions expected to improve virtualization performance without the need for VMM enabling. This reduces VMM overhead further and increases virtualization performance.

              These are not supported by the E3 family. So not all VT-d's are created equal...



              • #37
                Originally posted by kobblestown View Post
                However (and I quote from the Intel® Xeon® Processor E5-1600/E5-2600/E5-4600 Product Families Datasheet):

                The processor supports the following Intel VT Processor Extensions features:
                • Large Intel VT-d Pages
                  - Adds 2 MB and 1 GB page sizes to Intel VT-d implementations
                  - Matches current support for Extended Page Tables (EPT)
                • Ability to share the CPU's EPT page table (with super-pages) with Intel VT-d
                • Benefits:
                  - Less memory footprint for I/O page tables when using super-pages
                  - Potential for improved performance: shorter page walks allow hardware optimization for the IOTLB
                • Transition latency reductions expected to improve virtualization performance without the need for VMM enabling. This reduces VMM overhead further and increases virtualization performance.

                These are not supported by the E3 family. So not all VT-d's are created equal...
                True. At least as far as I know, large page support isn't required in order to get a second GPU attached to a VM.

                P.S. I'm typing this on an E3-1245 running on a Gigabyte board, and at least according to my Linux kernel, this machine fully supports VT-d.

                - Gilboa
                oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
                oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
                oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
                Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.



                • #38
                  Originally posted by gilboa View Post
                  True. At least as far as I know, large page support isn't required in order to get a second GPU attached to a VM.
                  I wonder if that prevents the host kernel from using large pages. I would have thought that's a particularly good fit for virtualization. I think the current Linux kernel can use large pages automatically (transparent huge pages), but maybe this prevents it. Does anyone know whether that would prevent large pages only for the virtual machines, or for software running on the host as well? Sure, it's just a performance optimization, but still...
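                  Whether the host itself is currently using large pages is at least easy to check; a small sketch like the one below only reads the standard transparent hugepage setting and /proc/meminfo (it says nothing about the IOMMU's own I/O page tables, which is the part I'm unsure about):

#!/usr/bin/env python3
"""Quick look at large-page (THP) use on the host.

Reads only the transparent hugepage mode and the AnonHugePages
counter from /proc/meminfo; the IOMMU's I/O page tables are a
separate question.
"""
from pathlib import Path

def thp_mode():
    # Typically reads like "always [madvise] never"; the bracketed word is active.
    p = Path("/sys/kernel/mm/transparent_hugepage/enabled")
    return p.read_text().strip() if p.exists() else "not available"

def anon_huge_kb():
    # AnonHugePages counts memory currently backed by transparent huge pages.
    for line in Path("/proc/meminfo").read_text().splitlines():
        if line.startswith("AnonHugePages:"):
            return int(line.split()[1])
    return 0

if __name__ == "__main__":
    print("Transparent hugepages:", thp_mode())
    print("AnonHugePages in use:", anon_huge_kb(), "kB")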



                  • #39
                    Originally posted by GreatEmerald View Post
                    Hmm, I did a quick test in VirtualBox (which also has PCI passthrough, though it's probably not meant to be used for VGA passthrough just yet). The VM did detect my card correctly, but couldn't start it, saying that there was no monitor attached. Which is fair enough, because none was attached (I only have one monitor here), but even if one was, it probably wouldn't change a whole lot, since it still wouldn't be outputting to the other card.

                    That's the confusing part for me. How is it supposed to draw things? Does it output the entire VM to the dedicated card, so it's only viewable when something is plugged in there, or does it work like Bumblebee, using the dedicated card for processing and the integrated one for displaying things? Do I need the NVIDIA module to be loaded on the host or not?

                    EDIT: Looks like QEMU figures out that it has to output to the card it owns, while the VM window on the host's card just shows a black screen. At least according to this, which is another, more recent guide on how to do it:
                    https://bbs.archlinux.org/viewtopic.php?id=162768
                    It wouldn't work anyway even if you did have a monitor attached - VirtualBox has no GART support, which is supposedly ridiculously complicated to pass through.

                    AFAIK, a virtual display in QEMU is optional, but many OSes allow you to use more than one GPU to render screens, even if only one is accelerated. With the virtual display on, it would basically be like a dual-monitor setup. If you were to virtualize Windows with a discrete GPU, you could still disable the virtual GPU in Device Manager and set the discrete GPU as the primary. When you pass through a GPU, that GPU is, in a way, "exiled" from the host system. That said, from the guest's perspective it isn't virtual, and therefore can and must be used as a regular GPU.

                    To me, the ideal use for GPU passthrough is multi-seat. I'm not aware of any way to use the rendering power of the discrete GPU with the virtual display, but I'd like to be proven wrong.



                    • #40
                      Originally posted by schmidtbag View Post
                      I'm not aware of any way to use the rendering power of the discrete GPU with the virtual display, but I'd like to be proven wrong.
                      That would be great. I would really like a setup where a server hidden in the closet runs a virtual machine for my desktop, with full GPU acceleration while being accessed remotely. One can dream...

