NVIDIA Now Allows GeForce GPU Pass-Through For Windows VMs On Linux


  • #31
    Originally posted by MadWatch View Post
    What's the point of the passthrough then? Wasn't it already possible to dedicate one GPU to a virtual machine?
    That's exactly the point. You take the GPU and give it to the VM. You have to have 2 GPUs, no matter whether it's AMD or Nvidia.
    SR-IOV allows sharing a GPU, but it's considered a premium technology for servers/data centers.
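    To make the SR-IOV point concrete: whether a card can be carved into virtual functions at all is visible in sysfs. A minimal Python sketch, assuming a placeholder PCI address (look up yours with lspci -D); most consumer GeForce/Radeon cards will simply report no SR-IOV capability:

    from pathlib import Path

    def sriov_total_vfs(pci_addr: str) -> int:
        """Return how many virtual functions the device could expose (0 if none)."""
        node = Path("/sys/bus/pci/devices") / pci_addr / "sriov_totalvfs"
        if not node.exists():
            return 0  # the kernel found no SR-IOV capability on this device
        return int(node.read_text().strip())

    gpu = "0000:01:00.0"  # placeholder PCI address, substitute your own
    vfs = sriov_total_vfs(gpu)
    print(f"{gpu}: {vfs} SR-IOV virtual functions" if vfs else f"{gpu}: no SR-IOV support")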

    Comment


    • #32
      Originally posted by mppix View Post

      A single GPU only makes sense if the host/hypervisor is headless today. For the rest, we need to wait for GPUs with proper virtualization support such as SR-IOV.
      Not only. In some cases it can probably replace the need for dual booting. Compared to dual booting it has some advantages, like faster switching between operating systems, easier maintenance of the secondary OS (you don't have to partition your real disk because you keep it on a virtual disk), and you can still use the host to some extent (e.g. via ssh). As I said, it's like dual booting without the need for an actual reboot.

      Comment


      • #33
        Originally posted by Alexmitter View Post
        No, it was blocked by the driver in the VM. Passthrough was an enterprise-GPU-only feature on Nvidia.
        And people wonder why Linus gave them the middle finger.

        Comment


        • #34
          Originally posted by Schmellow View Post
          That's exactly the point. You take the GPU and give it to the VM. You have to have 2 GPUs, no matter whether it's AMD or Nvidia.
          SR-IOV allows sharing a GPU, but it's considered a premium technology for servers/data centers.
          You don't have to have 2 GPUs. In fact, I find the whole 2-GPU concept ridiculous: you need 2 GPUs, two displays (or a KVM switch; maybe you can use different inputs on the display, depending on the monitor of course), multiple input devices. It really only makes sense for software development, where you want to work in the VM while keeping the host usable. If you take the most common use of VM passthrough (a Windows guest for gaming), it makes zero sense to use 2 GPUs or even to keep the host "active" (in the sense of a DE, processes, etc.). The ideal situation in such a scenario would be to detach the primary GPU, attach it to the VM, kill the host session (to free up resources), and reverse the process when done with the VM (gaming).

          As already mentioned in this thread, it has multiple benefits over dual booting; to avoid repeating them, look further up in the thread.
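          As a loose illustration of that detach step (not a complete recipe): the usual mechanism is rebinding the GPU's PCI functions from the host driver to vfio-pci through sysfs, typically as root from a libvirt hook or wrapper script after the display manager and host session have been stopped. A minimal Python sketch with placeholder PCI addresses; a real setup also has to hand over every device in the GPU's IOMMU group:

          from pathlib import Path

          PCI = Path("/sys/bus/pci")
          GPU_FUNCTIONS = ["0000:01:00.0", "0000:01:00.1"]  # placeholder: GPU + its HDMI audio function

          def rebind_to_vfio(addr: str) -> None:
              dev = PCI / "devices" / addr
              # Only vfio-pci may claim this device from now on.
              (dev / "driver_override").write_text("vfio-pci")
              # Detach the current host driver (nvidia/nouveau/amdgpu), if any is bound.
              if (dev / "driver").exists():
                  (dev / "driver" / "unbind").write_text(addr)
              # Ask the kernel to re-probe the device; vfio-pci picks it up.
              (PCI / "drivers_probe").write_text(addr)

          for addr in GPU_FUNCTIONS:
              rebind_to_vfio(addr)

          Switching back when the VM shuts down is the same dance in reverse: clear driver_override, unbind from vfio-pci, re-probe so the regular host driver reclaims the card, and restart the host session.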

          Comment


          • #35
            I've been running a Windows 10 KVM on an Ubuntu 18.04 host with a GTX 1650 SUPER passed through using VFIO for about a year without any issues, decoding 5 1080p 25fps video camera streams in Blue Iris (without Windows reboots, sometimes for a month or more, rebooting only to install updates).
            The setup isn't really that hard. I use a hand-tuned (no libvirt) Q35 EFI machine with SMBIOS passthrough and a masked VM.
            The recent Windows drivers from Nvidia do have some stability issues with my server's old custom-built 5.0 kernel (kernel panics on the host every 3-4 days of uptime); let's hope upgrading to Ubuntu's 5.4 HWE kernel will help.
            If not, since it's now an officially supported configuration, I can ask Nvidia for help.
            You will need a second GPU (an iGPU will work) unless your host is headless. I simply use the iGPU built into my server's i5 8600.
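            For readers who haven't seen a libvirt-less setup: at its core it is just a long qemu-system-x86_64 command line. A very rough Python sketch of such an invocation, assuming placeholder paths, PCI addresses and sizing, with the GPU already bound to vfio-pci and OVMF installed (firmware paths differ per distro):

            import subprocess

            cmd = [
                "qemu-system-x86_64",
                "-machine", "q35,accel=kvm",
                "-cpu", "host",          # masking tricks like kvm=off should no longer be needed
                "-smp", "6", "-m", "8G",
                # OVMF (UEFI) firmware: read-only code image plus a writable vars file.
                "-drive", "if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd",
                "-drive", "if=pflash,format=raw,file=/var/lib/qemu/win10_VARS.fd",
                # The passed-through GPU and its HDMI audio function (placeholder addresses).
                "-device", "vfio-pci,host=01:00.0,multifunction=on",
                "-device", "vfio-pci,host=01:00.1",
                # Guest disk lives on a virtual disk image, as discussed earlier in the thread.
                "-drive", "file=/var/lib/qemu/win10.qcow2,if=virtio",
            ]
            subprocess.run(cmd, check=True)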

            Comment


            • #36
              Originally posted by ddscentral View Post
              I've been running a Windows 10 KVM on an Ubuntu 18.04 host with a GTX 1650 SUPER passed through using VFIO for about a year without any issues, decoding 5 1080p 25fps video camera streams in Blue Iris (without Windows reboots, sometimes for a month or more, rebooting only to install updates).
              The setup isn't really that hard. I use a hand-tuned (no libvirt) Q35 EFI machine with SMBIOS passthrough and a masked VM.
              The recent Windows drivers from Nvidia do have some stability issues with my server's old custom-built 5.0 kernel (kernel panics on the host every 3-4 days of uptime); let's hope upgrading to Ubuntu's 5.4 HWE kernel will help.
              If not, since it's now an officially supported configuration, I can ask Nvidia for help.
              You will need a second GPU (an iGPU will work) unless your host is headless. I simply use the iGPU built into my server's i5 8600.
              True, it is not that hard, but you should not need to hand-tune your VM nor mask it.
              Hopefully this change solves that to some degree.
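              For context on the "masking" part: this is the workaround people historically put in their libvirt domain XML so the GeForce Windows driver wouldn't bail out with Code 43 when it detected the hypervisor. With the new drivers it should no longer be required; the sketch below only shows what the old workaround looked like (the vendor_id value is an arbitrary placeholder):

              import xml.etree.ElementTree as ET

              # Build the <features> fragment that hid the hypervisor from the guest driver.
              features = ET.Element("features")
              hyperv = ET.SubElement(features, "hyperv")
              ET.SubElement(hyperv, "vendor_id", {"state": "on", "value": "0123456789ab"})
              kvm = ET.SubElement(features, "kvm")
              ET.SubElement(kvm, "hidden", {"state": "on"})  # hide the KVM CPUID signature

              print(ET.tostring(features, encoding="unicode"))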

              Comment


              • #37
                Originally posted by oleid View Post
                Just wondering: does anybody know how virtio-gpu works on Windows these days?
                The Windows drivers are essentially a research project, and it is also not really the 'best' approach.
                SR-IOV would be, but who knows what stars need to align before that is supported on a consumer dGPU.

                Comment


                • #38
                  As someone who has done PCIe passthrough, I have to say: you're too late, Nvidia. People have been asking for non-Quadros to be supported for a decade. Not sure why it's happening now.

                  Comment


                  • #39
                    Originally posted by Zeioth View Post
                    This would be huge if there's an easy way to do it. Like, just open your virtual machine and play. Does anyone know more about this? Will it be necessary to have 2 GPUs?
                    You can get away with one GPU only if you also have SR-IOV. Otherwise you need 2 GPUs, or better, simply use Wine.

                    Comment


                    • #40
                      Originally posted by ATrigger View Post

                      Well, you can have an integrated GPU in your CPU for the host and 1 GPU for the VM. It's been like this for ages.
                      That makes you choose between a weak APU and a piece of Meltdown crap. Neither is optimal.

                      Comment
