QEMU 1.5 Supports VGA Passthrough, Better USB 3.0

  • phoronix
    started a topic QEMU 1.5 Supports VGA Passthrough, Better USB 3.0


    Phoronix: QEMU 1.5 Supports VGA Passthrough, Better USB 3.0

    Just three months after the exciting QEMU 1.4 release, QEMU 1.5 is now available with many exciting and new features for those using this open-source software in a virtualized world. There's the VFIO VGA pass-through support, USB 3.0 improvements, and much more...

    http://www.phoronix.com/vr.php?view=MTM3NTk

  • kobblestown
    replied
More specifically, I would like to get a Supermicro X9SCL-F main board. It has an IPMI 2.0-compatible BMC module with its own graphics controller and doesn't make use of the CPU-integrated GPU. I would pair it with a Xeon E3 CPU that has an integrated GPU, and then run VMs on it that make use of that integrated GPU. I don't even know whether this is possible; maybe if the mainboard doesn't support the integrated GPU, it can't be used at all. It should be able to render to a buffer - I don't need it to output a video signal, just to generate the images, which can then be handled by the hypervisor. But I don't know whether the hardware is capable of it, or whether the hypervisor can exploit it. I haven't been able to find information on such a use case so far.

    Help me Obi-Wan Kenobi, you're my only hope
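For what it's worth, QEMU releases much newer than the 1.5 discussed in this thread grew an off-screen rendering path that comes close to this: a paravirtual GPU rendered on the host card with no monitor attached, exported over SPICE. A rough sketch, with option spellings from recent QEMU and the disk path purely illustrative:

```shell
# Headless host-side GPU acceleration, remoted over SPICE.
# Requires a QEMU far newer than 1.5; option names vary between versions.
qemu-system-x86_64 \
  -enable-kvm -m 4096 -cpu host \
  -device virtio-vga-gl \                    # paravirtual GPU, accelerated via host OpenGL
  -display egl-headless \                    # render off-screen on the host GPU
  -spice port=5900,disable-ticketing=on \    # serve the display to a SPICE client
  -drive file=guest.qcow2,if=virtio          # guest.qcow2 is a placeholder image
```

The guest needs virtio-gpu drivers for this; a SPICE client such as remote-viewer then connects to port 5900.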



  • kobblestown
    replied
    Originally posted by schmidtbag View Post
    Well THAT you should be able to do. Set up a VNC using a display on the discrete GPU. If you find this to be difficult, there are ways to force a GPU to display something without a monitor.
1. VNC is insufficient. I want something with synchronized audio and USB redirection - something along the lines of SPICE. I want to connect to the VM, not to some server inside it, because I might need to run various OSes.
    2. I really doubt that VNC can remote accelerated 3D graphics. I think VNC implements an X11 display entirely in software and eschews any acceleration. I'd like to be proven wrong, though. In any case, it doesn't work very well for remote Windows desktops.

Actually, I'm indifferent to whether the VM gets a dedicated GPU or whether the GPU is virtualized and the host GPU is used to accelerate the guest display. Of course, the former will offer better performance, but the latter has better flexibility - for instance, the possibility to run several VMs with a single GPU without requiring VT-d. Sure, there will be a performance hit, but I mostly need this for desktop effects, so it should be fine.
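For reference, a SPICE setup of roughly the kind described here (QXL display, guest agent channel, USB redirection) looked like this in the QEMU 1.5 era; the disk image and chardev IDs below are illustrative, not something from this thread:

```shell
# QXL virtual GPU + SPICE with vdagent channel and USB redirection (sketch).
qemu-system-x86_64 -enable-kvm -m 2048 \
  -vga qxl \
  -spice port=5900,disable-ticketing \
  -device virtio-serial-pci \
  -chardev spicevmc,id=vdagent0,name=vdagent \
  -device virtserialport,chardev=vdagent0,name=com.redhat.spice.0 \
  -device ich9-usb-ehci1,id=usb \
  -chardev spicevmc,id=usbredir0,name=usbredir \
  -device usb-redir,chardev=usbredir0 \
  -drive file=guest.qcow2,if=virtio
```

Audio rode the same SPICE connection when the SPICE audio backend was selected (via QEMU_AUDIO_DRV=spice at the time), which is what gives the synchronized audio that plain VNC lacks.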



  • schmidtbag
    replied
    Originally posted by kobblestown View Post
    That would be great. I would really like to have a setup where I have a server hidden in the closet that runs a virtual machine for my desktop and have full GPU acceleration while being accessed remotely. One can dream...
    Well THAT you should be able to do. Set up a VNC using a display on the discrete GPU. If you find this to be difficult, there are ways to force a GPU to display something without a monitor.



  • kobblestown
    replied
    Originally posted by schmidtbag View Post
    I'm not aware of being able to use the rendering power of the discrete GPU with the virtual display, but I'd like to be proven wrong.
    That would be great. I would really like to have a setup where I have a server hidden in the closet that runs a virtual machine for my desktop and have full GPU acceleration while being accessed remotely. One can dream...



  • schmidtbag
    replied
    Originally posted by GreatEmerald View Post
Hmm, I did a quick test in VirtualBox (which also has PCI passthrough, though it's probably not meant to be used for VGA passthrough just yet). The VM did detect my card correctly, although it couldn't start it, saying that there was no monitor attached. Which is fair enough, because it wasn't attached (I only have one monitor here), but even if it was, it probably wouldn't change a whole lot, since it still wouldn't be outputting to the other card.

    That's the confusing part for me. How is it supposed to draw things? Output the entire VM window to the dedicated card and be viewable when something is plugged in there, or be something like Bumblebee and just use the dedicated card for processing, and the integrated one for displaying things? Do I need the NVIDIA module to be loaded on the host or not?

    EDIT: Looks like qemu figures out that it has to output things to the card it owns, and the host card is shown a black screen. At least according to this, which is another nice guide on how to do it, and is more recent:
    https://bbs.archlinux.org/viewtopic.php?id=162768
It wouldn't work anyway, even if you did have a monitor attached - VirtualBox has no GART support, which is supposedly ridiculously complicated to pass through.

AFAIK, a virtual display in qemu is optional, but many OSes allow you to use more than one GPU to render screens, even if only one is accelerated. With the virtual display on, it would basically be like a dual-monitor setup. If you were to virtualize Windows with a discrete GPU, you could still disable the virtual GPU in Device Manager and set the discrete GPU as primary. When you pass a GPU through, that GPU is, in a way, "exiled" from the host system. That said, from the guest's perspective it isn't virtual, and therefore can and must be used as a regular GPU.

    To me, the most ideal purpose of GPU passthrough is multi-seat. I'm not aware of being able to use the rendering power of the discrete GPU with the virtual display, but I'd like to be proven wrong.
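As a concrete sketch of the VFIO route the article announces: the discrete card is first unbound from its host driver and bound to vfio-pci, then handed to the guest with x-vga=on so it becomes the guest's real, primary GPU. The PCI address and vendor/device IDs below are placeholders, not values from this thread:

```shell
# Bind the discrete GPU to vfio-pci (run as root; 01:00.0 and 10de 1187 are placeholders).
modprobe vfio-pci
echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo 10de 1187 > /sys/bus/pci/drivers/vfio-pci/new_id

# Start the guest with the passed-through card as its only VGA device.
qemu-system-x86_64 -enable-kvm -m 4096 -cpu host \
  -device vfio-pci,host=01:00.0,x-vga=on \   # hand the physical GPU to the guest
  -vga none \                                 # no emulated/virtual display at all
  -drive file=guest.qcow2,if=virtio
```

With -vga none there is no virtual display, which matches the observation above: output appears only on the card the guest owns.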



  • kobblestown
    replied
    Originally posted by gilboa View Post
    True. At least as far as I know, large page support isn't required in-order to get a second GPU attached to a VM.
I wonder if that prevents the host kernel from using large pages. I would have thought that's a particularly good fit for virtualization. I think the current Linux kernel can use large pages automatically, but maybe this prevents it. Does anyone know whether that would prevent large pages only for the virtual machines, or for software running on the host as well? Sure, it's just a performance optimization, but still...
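On Linux this is easy to check empirically. The /sys and /proc paths below are standard kernel interfaces; the qemu process name is an assumption:

```shell
# Is transparent hugepage support enabled host-wide?
cat /sys/kernel/mm/transparent_hugepage/enabled    # e.g. "[always] madvise never"

# How much anonymous memory is currently backed by huge pages, host-wide:
grep AnonHugePages /proc/meminfo

# The same counter for one running guest (process name is an assumption;
# smaps_rollup needs a reasonably recent kernel):
grep AnonHugePages /proc/$(pidof qemu-system-x86_64)/smaps_rollup
```

Comparing the per-process counter with the guest's RAM size shows directly whether guest memory ended up on large pages or not.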



  • gilboa
    replied
    Originally posted by kobblestown View Post
However (and I quote from the Intel® Xeon® Processor E5-1600/E5-2600/E5-4600 Product Families Datasheet):

    The processor supports the following Intel VT Processor Extensions features:
    • Large Intel VT-d Pages
    • Adds 2 MB and 1 GB page sizes to Intel VT-d implementations
    • Matches current support for Extended Page Tables (EPT)
    • Ability to share CPU's EPT page table (with super-pages) with Intel VT-d
    • Benefits:
      – Less memory footprint for I/O page tables when using super-pages
      – Potential for improved performance, due to shorter page walks; allows hardware optimization for IOTLB
    • Transition latency reductions expected to improve virtualization performance without the need for VMM enabling. This reduces the VMM overheads further and increases virtualization performance.

    These are not supported by the E3 family. So not all VT-d's are created equal...
    True. At least as far as I know, large page support isn't required in-order to get a second GPU attached to a VM.

P.S. I'm typing this on an E3-1245 running on a Gigabyte board, and at least according to my Linux kernel, this machine fully supports VT-d.

    - Gilboa
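The "according to my Linux kernel" check can be reproduced like this; these are standard Linux interfaces, nothing specific to any one board:

```shell
# Intel VT-d shows up as a DMAR ACPI table and IOMMU messages at boot:
dmesg | grep -i -e dmar -e iommu

# The kernel must also have been booted with intel_iommu=on:
cat /proc/cmdline

# If IOMMU groups exist, remapping is active and devices can be assigned:
ls /sys/kernel/iommu_groups/
```

An empty iommu_groups directory usually means either the boot parameter is missing or the BIOS hasn't enabled VT-d.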



  • kobblestown
    replied
    Originally posted by gilboa View Post
All the Xeon setups I have (E3's, 55xx's, 56xx's and E5's) have full VT-d support - at least all the ones using Intel boards.
    Keep in mind that I only physically tested it on a 5680 machine.
However (and I quote from the Intel® Xeon® Processor E5-1600/E5-2600/E5-4600 Product Families Datasheet):

    The processor supports the following Intel VT Processor Extensions features:
    • Large Intel VT-d Pages
    • Adds 2 MB and 1 GB page sizes to Intel VT-d implementations
    • Matches current support for Extended Page Tables (EPT)
    • Ability to share CPU's EPT page table (with super-pages) with Intel VT-d
    • Benefits:
      – Less memory footprint for I/O page tables when using super-pages
      – Potential for improved performance, due to shorter page walks; allows hardware optimization for IOTLB
    • Transition latency reductions expected to improve virtualization performance without the need for VMM enabling. This reduces the VMM overheads further and increases virtualization performance.

    These are not supported by the E3 family. So not all VT-d's are created equal...



  • gilboa
    replied
    Originally posted by kobblestown View Post
I have to concede here. On closer inspection, VT-d is more common than I thought. But the later posts show that it's not so simple. I still maintain that it's very hard to pick a CPU-motherboard combo and be reasonably sure in advance that it will work. And I wouldn't trust it on a non-Xeon setup. Even on Xeon, I wonder if there's any difference between the E3, E5 and E7 lines WRT the VT-d feature set.
All the Xeon setups I have (E3's, 55xx's, 56xx's and E5's) have full VT-d support - at least all the ones using Intel boards.
    Keep in mind that I only physically tested it on a 5680 machine.

    - Gilboa

