NVIDIA Now Allows GeForce GPU Pass-Through For Windows VMs On Linux


  • #11
    Originally posted by V1tol View Post
    You would get "error 43" in Windows because NVIDIA explicitly restricted such a configuration. Of course, workarounds and hacks exist. Also, I think you couldn't do any work on such a configuration because of licensing and all that legal stuff.

    Technically yes, your GPU seems to be supported. I did that on my laptop with an RTX 2060 using hacks. The display is connected to the iGPU; Windows uses the NVIDIA GPU. Now it should be much easier to do.
    oleid , MadWatch : until now, you needed to mask your VM so that the NVIDIA driver in the Windows VM would not detect that it was running in a VM. This creates all sorts of compromises.
    This is pretty much why every GPU passthrough tutorial starts with 'if you have Nvidia, see also the other 128 steps... and hope that certain stars are aligned'.
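    For reference, the usual masking tricks live in the libvirt domain XML. A minimal sketch (the vendor ID string is arbitrary, anything up to 12 characters works):

    ```xml
    <features>
      <hyperv>
        <!-- report a custom Hyper-V vendor ID so the driver does not see QEMU -->
        <vendor_id state='on' value='whatever'/>
      </hyperv>
      <kvm>
        <!-- hide the KVM hypervisor signature from the guest -->
        <hidden state='on'/>
      </kvm>
    </features>
    ```

    With the driver-side restriction lifted, this kind of masking should no longer be required.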



    • #12
      Originally posted by bcellin View Post
      I have a laptop with a dGPU (GeForce 960M). I don't know if this helps in my case.
      Would I be able to set my host's GPU to my Intel iGPU and my virtual Windows to my dGPU? (there would be 2 GPUs in this case)
      This works on a desktop and likely would on a laptop too: mask the dGPU so that Linux does not bind a driver to it; then you can pass it through to a VM.
      Added benefit: the desktop runs only on the iGPU, i.e. no Wayland problems.
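      A sketch of the masking step, assuming you have looked up the dGPU's PCI vendor:device IDs with `lspci -nn` (the `10de:139b` ID below is only illustrative; substitute your own): bind the device to the vfio-pci stub driver at boot via a file like `/etc/modprobe.d/vfio.conf`:

      ```
      options vfio-pci ids=10de:139b
      softdep nouveau pre: vfio-pci
      softdep nvidia pre: vfio-pci
      ```

      After rebuilding the initramfs and rebooting, `lspci -k` should show the dGPU bound to vfio-pci, ready to be handed to the VM.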



      • #13
        Originally posted by MadWatch View Post
        What's the point of the passthrough then? Wasn't it possible already to dedicate one GPU to a virtual machine?
        No, it was blocked by the driver inside the VM. Passthrough was an enterprise-GPU-only feature for NVIDIA.



        • #14
          Originally posted by Zeioth View Post
          This would be huge if there's an easy way to do it. Like, just open your virtual machine and play. Does someone else know more about this? Will it be necessary to have 2 GPUs?
          Define 'necessary'.
          Host and guest cannot easily share the GPU. This would require SR-IOV, which is supported only by a handful of old AMD GPUs. There are some workarounds for a Linux host + Linux guest to get 3D acceleration.
          However, if you are fine with a headless host/hypervisor (that you can access via SSH), you can certainly pass through your only GPU.

          We are still years away from GPUs that virtualize as cleanly as CPUs.



          • #15
            I have been doing this with NVIDIA for years. This is not new functionality.



            • #16
              Originally posted by Alexmitter View Post

              No, it was blocked by the driver inside the VM. Passthrough was an enterprise-GPU-only feature for NVIDIA.
              It was extremely easy to work around the code 43 error. All you had to do was stick a

              Code:
              <kvm>
                <hidden state='on'/>
              </kvm>
              in the <features> section of your libvirt domain XML.



              • #17
                So they want to sell us 2 GPUs while there's 0 available. Nice business move.



                • #18
                  Originally posted by eydee View Post
                  So they want to sell us 2 GPUs while there's 0 available. Nice business move.
                  Well, you can have an integrated GPU in your CPU for the host and one GPU for the VM. It's been like this for ages.



                  • #19
                    Originally posted by Zeioth View Post
                    This would be huge if there's an easy way to do it. Like, just open your virtual machine and play. Does someone else know more about this? Will it be necessary to have 2 GPUs?
                    You don't actually need 2 GPUs; with some setups you can do it with one. The problem is that you can't simply share one GPU between two or more operating systems. GPU virtualization exists, but it is basically limited to professional cards for AMD and NVIDIA. As far as I know, only Intel supports it on consumer integrated graphics, and those are not a great match for gaming. So if you want to do it with a consumer AMD or NVIDIA card, you need to unbind it from your host before the VM starts and bind it again when the VM stops. Some cards were (or even still are) problematic in such a scenario (see the AMD reset bug, or patching the NVIDIA ROM). There is also a major disadvantage compared to a dual-GPU setup: you can't easily use both operating systems at the same time. When the VM starts, your host loses control of the GPU and only the VM can display anything. It's something like dual booting without rebooting.
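                    The unbind/rebind dance described above can be sketched with sysfs (the PCI address 0000:01:00.0 and the ID pair 10de 139b are placeholders; use your own card's values from `lspci -nn`):

                    ```shell
                    # detach the card from its host driver before the VM starts
                    echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
                    # hand it to vfio-pci so QEMU can pass it through
                    echo 10de 139b > /sys/bus/pci/drivers/vfio-pci/new_id

                    # ... run the VM ...

                    # return it to the host driver once the VM has stopped
                    echo 0000:01:00.0 > /sys/bus/pci/drivers/vfio-pci/unbind
                    echo 0000:01:00.0 > /sys/bus/pci/drivers/nvidia/bind
                    ```

                    This is exactly the step where the AMD reset bug bites: some cards don't come back cleanly after the unbind without a full host reboot.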



                    • #20
                      Host and guest cannot easily share the GPU. This would require SR-IOV, which is supported only by a handful of old AMD GPUs. There are some workarounds for a Linux host + Linux guest to get 3D acceleration. However, if you are fine with a headless host/hypervisor (that you can access via SSH), you can certainly pass through your only GPU.

                      We are still years away from GPUs that virtualize as cleanly as CPUs.
                      I'm not sure this is the current state of things. It's true that some older AMD cards (FirePro types) are the ones where the accessibility (second-hand $$$), drivers, and licensing align in a way that's workable outside a corporate environment (e.g. the comparable Nvidia vGPU requires a licensing server), but the SR-IOV functionality is actually built into recent consumer-grade GPUs from both vendors, with support for at least two virtualized GPUs. IIRC, Turing and Ampere both have it (maybe not in lower-end SKUs?), and Vega, Radeon VII, and the 6800/XT/6900 XT have it, but the 5700/XT and 6700/XT don't.

                      Wendell of Level1 Techs did a video a few months back about how all the stars were lining up for SR-IOV in the consumer space.

                      My personal prediction is that both teams are building up a hardware support base before enabling it, and that mainstream driver support will come online when Microsoft enables GUI apps in WSL2 (probably late this year), at least on the Windows-host side of things.
