Intel Working On A VirtIO DMA-BUF Driver For Multi-GPUs, Virtualized Environments


  • Intel Working On A VirtIO DMA-BUF Driver For Multi-GPUs, Virtualized Environments

    Phoronix: Intel Working On A VirtIO DMA-BUF Driver For Multi-GPUs, Virtualized Environments

    Intel engineers have been working on "Vdmabuf" as a VirtIO-based DMA-BUF driver for the Linux kernel. This driver is intended for their growing multi-GPU use-cases and also in cases of GPU virtualization where wanting to transfer contents seamlessly to the host for display purposes...

    http://www.phoronix.com/scan.php?pag...BUF-Driver-RFC

  • #2
    Wendell from Level1Techs might scream with joy after hearing this news. This work might bring us closer to playing Windows games on Linux in a VM without a meaningful performance penalty.



    • #3
      AMD had better hurry to catch up, because if the next generation of GPUs doesn't support virtualization I'll switch to the new Intel discrete cards, whatever their performance turns out to be.
      ## VGA ##
      AMD: X1950XTX, HD3870, HD5870
      Intel: GMA45, HD3000 (Core i5 2500K)



      • #4
        Originally posted by darkbasic View Post
        AMD had better hurry to catch up, because if the next generation of GPUs doesn't support virtualization I'll switch to the new Intel discrete cards, whatever their performance turns out to be.
        There were some nice YouTube reviews of the DG1 lately. I found this one useful, albeit in German:
        https://www.youtube.com/watch?v=aAJXQXOCw14

        Basic conclusion: it needs latest-generation Intel CPUs and corresponding BIOS updates, the display outputs soldered to the board are unusable, 3D doesn't work at all, and OpenCL doesn't work at all; 2D was handled by the iGPU in the CPU. External developers reported the same findings to him. But apart from that, everything is fine.

        I really think that if Intel ever ships a worthwhile product, it will be some years from now. RDNA3 can be expected in 2022.



        • #5
          While it would give the guest access to multiple GPUs, the fact that it has to create a virtual PCI connection implemented as a buffer makes me wonder just how performant it is. Can the buffer be sized per GPU, or will it be the same for all? What if I want to run an OpenCL process in a container, off to the side, against a virtual GPU that is otherwise idle? Does the DMA buffer need to be modified?



          • #6
            Originally posted by darkbasic View Post
            AMD had better hurry to catch up, because if the next generation of GPUs doesn't support virtualization I'll switch to the new Intel discrete cards, whatever their performance turns out to be.
            Right... I already wrote to bridgman about this multiple times.
            I was once targeted by social engineering to get me to install a game, and that game was a trojan horse.
            AMD acts as if virtualization features are only for professional users in the data center and the PRO hardware market...

            But this is wrong... everyone, even gamers, needs to protect themselves from trojan horses inside closed-source games by using a virtualization VM to isolate the main system from dangerous, fraudulent code in closed-source games.



            • #7
              I wonder how Windows is handling it. On Linux it seems this vGPU functionality is only available to us via Intel drivers or non-consumer AMD/Nvidia GPUs, yet Windows seems to have been working on generic vGPU support?

              https://devblogs.microsoft.com/direc...x-heart-linux/

              Over the last few Windows releases, we have been busy developing client GPU virtualization technology. This technology is integrated into WDDM (Windows Display Driver Model) and all WDDMv2.5 or later drivers have native support for GPU virtualization. This technology is referred to as WDDM GPU Paravirtualization, or GPU-PV for short.

              To bring support for GPU acceleration to WSL 2, WDDMv2.9 will expand the reach of GPU-PV to Linux guests. This is achieved through a new Linux kernel driver that leverages the GPU-PV protocol to expose a GPU to user mode Linux. The projected abstraction of the GPU follows closely the WDDM GPU abstraction model, allowing API and drivers built against that abstraction to be easily ported for use in a Linux environment.
              Applications running inside of the Linux environment have the same access to the GPU as native applications on Windows. There is no partitioning of resources between Linux and Windows or limit imposed on Linux applications. The sharing is completely dynamic based on who needs what. There are basically no differences between two Windows applications sharing a GPU versus a Linux and a Windows application sharing the same GPU. If a Linux application is alone on a GPU, it can consume all its resources!
              They later state that they need special driver support and have been working with their partners to enable it. Presumably that's all three of Intel, AMD, and Nvidia, but such functionality may be locked to Windows drivers rather than made available on Linux. It's better than partitioning a GPU the way Intel currently does (unless you specifically need to isolate/control resource allocation).



              • #8
                Originally posted by polarathene View Post
                They later state how they need special driver support and have been working with their partners to enable that. Presumably that's all three Intel, AMD, Nvidia, but such functionality may be locked to Windows drivers instead of making the feature available to Linux? It's better than partitioning a GPU like Intel currently is (unless you need to specifically isolate/control resource allocation).
                Maybe after some time Linux virtualization will use Microsoft's <strike>(open source)</strike> kernel driver to pass GPU commands to the host. Similar to how VirtualBox was unable to keep developing its own 3D acceleration, so after some time they threw it away and adopted VMware's kernel driver and its mechanism of communication between host and guest (the guest part is done by VMware, so only half the work).

                EDIT: Microsoft's code communicating with the Windows host is closed source:
                Originally posted by https://devblogs.microsoft.com/directx/directx-heart-linux/
                libd3d12.so and libdxcore.so are closed source, pre-compiled user mode binaries that ship as part of Windows.
                Last edited by Ladis; 03 February 2021, 11:01 PM.
