
Intel Begins Sorting Out SR-IOV Support For The Xe Kernel Graphics Driver

  • #11
    It is far past time for "consumer" GPUs to support SR-IOV and virtualization on Linux & MS Windows.

    The basic use case is of course running a couple of VMs that actually have desktop GUIs and basic GPU functionality.
    Basically any desktop OS one might run in a VM effectively requires a GPU, either for the desktop itself to work halfway decently or at least
    for the multitude of basic productivity applications that need GPU graphics / compute functions for their UIs.

    We've got mainstream consumer 8+ core fast CPUs, almost any system has 16 if not 32-64 GB of RAM or more, and terabytes of fast SSD.

    Sandboxing and virtualization are pretty mainstream approaches for security, compatibility, setting up specific execution environments without
    messing up / reconfiguring the host, testing, backward compatibility, etc.

    There's absolutely nothing "out of the consumer realm" about running a couple of VMs, unless one only counts people who use smartphones or Chromebooks. For midrange & higher-end desktop users -- exactly the people who are the MARKET for an expensive $300+ dGPU -- it is a slap in the face not to let them use their
    super powerful desktop PCs and super powerful current-generation GPUs to run the odd VM or two as needed / desired.

    Comment


    • #12
      Originally posted by pong View Post
      It is far past time for "consumer" GPUs to support SR-IOV and virtualization on Linux & MS Windows.

      The basic use case is of course running a couple of VMs that actually have desktop GUIs and basic GPU functionality.
      Basically any desktop OS one might run in a VM effectively requires a GPU, either for the desktop itself to work halfway decently or at least
      for the multitude of basic productivity applications that need GPU graphics / compute functions for their UIs.

      We've got mainstream consumer 8+ core fast CPUs, almost any system has 16 if not 32-64 GB of RAM or more, and terabytes of fast SSD.

      Sandboxing and virtualization are pretty mainstream approaches for security, compatibility, setting up specific execution environments without
      messing up / reconfiguring the host, testing, backward compatibility, etc.

      There's absolutely nothing "out of the consumer realm" about running a couple of VMs, unless one only counts people who use smartphones or Chromebooks. For midrange & higher-end desktop users -- exactly the people who are the MARKET for an expensive $300+ dGPU -- it is a slap in the face not to let them use their
      super powerful desktop PCs and super powerful current-generation GPUs to run the odd VM or two as needed / desired.
      I agree with you completely!

      bridgman agd5f : Are there plans for SR-IOV and other related virtualization technologies to be supported by consumer GPUs both on Linux and Windows?
      Last edited by timofonic; 11 November 2023, 10:36 PM.

      Comment


      • #13
        I'd pay $100 to unlock SR-IOV in my Radeon card so I could do VM stuff with it.

        Comment


        • #14
          Originally posted by Quackdoc View Post
          It's a shame this doesn't seem to be coming to the DG2 gaming cards. Intel had a big chance to really stand out here, and it's a real shame they didn't enable it there (maybe they will in the future?). I guess I'll wait for workarounds to come out, like with Nvidia.
          Now Quackdoc, I caught you again... you told me everything is fine with the driver of your Intel GPU hardware.

          Just remember this:
          Originally posted by Quackdoc View Post
          Im literally using it now you absolute mongoloid and it is working perfectly fine for me, intel is addressing the sparse residency support and I think parts of it have even landed? not too sure on that one. However maybe if you had less of a hate boner for me, you could post something that actually makes sense for once.
          no shit I get aggressive when you constantly actively lie about me


          No Quackdoc, I just follow you and keep finding more and more dirt on Intel.

          It looks like Intel is not the GPU dreamland everyone hoped for.

          Here is some good advice, for you and for other people too: if you really want to change something in the GPU market, you don't need a third player like Intel, you need an open-source GPU design like Libre-SOC...

          Phantom circuit Sequence Reducer Dyslexia

          Comment


          • #15
            Originally posted by timofonic View Post
            I agree with you completely!
            bridgman agd5f : Are there plans for SR-IOV and other related virtualization technologies to be supported by consumer GPUs both on Linux and Windows?
            They have answered this multiple times in the past, and the answer was always: "NO".

            One reason is transistor count: this feature eats up a lot of transistors, to the point that the chip is no longer competitive in the market.

            There is another reason the answer is no: a software emulation of SR-IOV written in Vulkan is possible.

            If you only run 2-3 virtual machines, the overhead of that software emulation will not have a relevant impact.

            Just remember, SR-IOV is meant for something like 32 virtual machine instances...
            Phantom circuit Sequence Reducer Dyslexia

            Comment


            • #16
              Originally posted by sharpjs View Post
              I'd pay $100 to unlock SR-IOV in my Radeon card so I could do VM stuff with it.
              Why not send those 100 dollars to a project that implements this feature in software with Vulkan?
              Phantom circuit Sequence Reducer Dyslexia

              Comment


              • #17
                I don't believe you're accurate about that.

                For one, consider the Intel iGPUs that have supported SR-IOV for years -- their architecture is similar enough to the Intel Arc/DG2 dGPUs that they use the SAME Linux driver, i915, and they work with SR-IOV, so there is no clear reason DG2 cannot.
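
                For what it's worth, once a driver does expose SR-IOV, the user-facing side is just the kernel's generic PCI sysfs interface. A minimal sketch (Python, assuming a GPU at the made-up address 0000:00:02.0 whose driver actually advertises VFs; writing sriov_numvfs needs root):

                Code:
                # Query and enable SR-IOV virtual functions via the standard PCI sysfs attributes.
                # Assumes the GPU sits at 0000:00:02.0 and its driver supports SR-IOV.
                from pathlib import Path

                dev = Path("/sys/bus/pci/devices/0000:00:02.0")

                total = int((dev / "sriov_totalvfs").read_text())      # VFs the device advertises
                print(f"device supports up to {total} virtual functions")

                if total:
                    (dev / "sriov_numvfs").write_text("2")             # create 2 VFs (root only)
                    vfs = sorted(p.name for p in dev.glob("virtfn*"))  # symlinks to the new VFs
                    print("created:", vfs)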

                For two, Intel, Nvidia, and AMD have (or have had) "workstation" / "data center" line "low-end" GPUs
                which use basically the same architecture as some of their consumer-line GPUs; in fact IIRC they use exactly the same GPU silicon die in some cases.
                I haven't double-checked that, but I've heard enough over the past few years about the similarities or outright equivalences that I think I'm correct.
                That implies there is no significant difference in GPU die silicon architecture / area / design between the discrete GPUs that support SR-IOV and the ones that do not.

                For three, as I recall the SR-IOV implementation at the PCIe level is mostly about keeping a few separate control/status registers per function device, so that the driver and hardware context can tell apart the registers belonging to each function. I don't think it implies much more than that at the architectural level. One doesn't need N times the VRAM for N functions, or N times the shader processors, or anything that resource-hungry.

                Yes, when you share the GPU's core resources like memory and processor time, you're consuming resources that can't be fully used by the host or other functions. In that sense it's conceptually no different from a normal VM: allocate N cores to a VM and you have N fewer cores for the host or other VMs; allocate N GB of RAM to a VM and you have N fewer GB for other uses.

                Or take a 10Gb/s NIC: it's still just 10Gb/s over one physical medium. The only difference is that each function gets its own pointers and interrupt status registers, so different functions can use different TX/RX DMA buffer rings pointing at different areas of host RAM, with separate interrupts to segregate them to the particular guest driver's attention. Otherwise, send a packet and it goes on the same wire; receive a packet and it lands in whatever queue that VLAN / address has been pointed at. So there is exceedingly little necessary hardware overhead -- similar to how a small MMU on a CPU is enough to support large numbers of VMs, and this isn't even on the scale of an MMU.

                And since PCIe registers essentially live in the BAR / MMIO areas of the PCI address space, they aren't necessarily even consuming on-card resources; they mostly just occupy a chunk of host or device memory that would be present anyway for a single-function device, mapped to hold data structures in that shared address space.
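
                To make the "it's just a few registers" point concrete, here's a rough sketch that walks a device's PCIe extended capability list and dumps the core SR-IOV structure -- TotalVFs, First VF Offset, VF Stride is basically all the PCIe side adds. (The 0000:03:00.0 address is made up, and reading extended config space through sysfs requires root.)

                Code:
                # Walk the PCIe extended capability list in config space and print the
                # core SR-IOV registers (extended capability ID 0x0010).
                import struct
                from pathlib import Path

                cfg = Path("/sys/bus/pci/devices/0000:03:00.0/config").read_bytes()

                off = 0x100                                   # extended capabilities start here
                while off and off + 4 <= len(cfg):
                    hdr = struct.unpack_from("<I", cfg, off)[0]
                    cap_id, nxt = hdr & 0xFFFF, (hdr >> 20) & 0xFFC
                    if cap_id == 0x0010:                      # SR-IOV extended capability
                        total_vfs = struct.unpack_from("<H", cfg, off + 0x0E)[0]
                        vf_offset = struct.unpack_from("<H", cfg, off + 0x14)[0]
                        vf_stride = struct.unpack_from("<H", cfg, off + 0x16)[0]
                        print(f"TotalVFs={total_vfs} FirstVFOffset={vf_offset} VFStride={vf_stride}")
                        break
                    off = nxt
                else:
                    print("no SR-IOV capability exposed (or not running as root)")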

                Whether software emulation is "possible" misses the point. Sure, I could take a big blob of RAM and software-emulate a whole GPU up to the OpenGL / Vulkan / OpenCL APIs; it would work, and it would be slow as hell. The REASON we BUY discrete GPUs is to accelerate graphics and compute tasks that would be unpleasantly slow on the host CPU, and we often pay MORE for the dGPU than for the host CPU and RAM -- sometimes more than the motherboard and the entire rest of the PC. It's one of the biggest, if not THE biggest, investments in a typical "gamer" / mid-to-high-performance desktop.
                What justifies that cost is the value of accelerating the graphics / compute applications we care to run.
                Using virtual machines for cross-platform, legacy, or other use cases -- and having THEIR graphics / compute workloads be fast -- is ALSO a use case we care about, and we EXPECT our costly high-tech GPU to accelerate it just like ANY OTHER graphics / compute application we run in whatever OS (Linux, Windows) we choose.

                Every single other common piece of consumer PC hardware or subsystem currently virtualizes / shares smoothly and well --
                * CPU cores / threads
                * RAM allocations
                * Keyboard / mouse I/O
                * Even sound to a general purpose extent
                * Printing
                * USB device pass through or emulation / virtualization
                * Disc drives / mass storage
                * Networking to a general basic extent (though it's really tragic we're "stuck" on 1Gb/s ethernet vs. say 10Gb in the consumer workstation / SMB realm still)

                And what is the biggest pain point, the one total and complete failure to share / virtualize well? GPU compute / graphics -- which is also the area where we're paying
                THE MOST for an accelerator card for those very things; we deserve better in 2023/24.
                Yeah, a small tweak to the drivers would probably make it "just work", but it should be supported in a standard, standards-based (SR-IOV) way by the GPU manufacturers.


                Originally posted by qarium View Post

                They have answered this multiple times in the past, and the answer was always: "NO".

                One reason is transistor count: this feature eats up a lot of transistors, to the point that the chip is no longer competitive in the market.

                There is another reason the answer is no: a software emulation of SR-IOV written in Vulkan is possible.

                If you only run 2-3 virtual machines, the overhead of that software emulation will not have a relevant impact.

                Just remember, SR-IOV is meant for something like 32 virtual machine instances...

                Comment


                • #18
                  Originally posted by qarium View Post
                  Now Quackdoc, I caught you again... you told me everything is fine with the driver of your Intel GPU hardware.

                  Just remember this:


                  No Quackdoc, I just follow you and keep finding more and more dirt on Intel.

                  It looks like Intel is not the GPU dreamland everyone hoped for.

                  Here is some good advice, for you and for other people too: if you really want to change something in the GPU market, you don't need a third player like Intel, you need an open-source GPU design like Libre-SOC...
                  This is completely irrelevant; AMD doesn't have SR-IOV either, you fuckwit. Even Nvidia requires unofficial drivers.

                  Comment


                  • #19
                    Originally posted by pong View Post
                    I don't believe you're accurate about that.

                    For one, consider the Intel iGPUs that have supported SR-IOV for years -- their architecture is similar enough to the Intel Arc/DG2 dGPUs that they use the SAME Linux driver, i915, and they work with SR-IOV, so there is no clear reason DG2 cannot.

                    While "GVT-g" (as Intel calls it) works well on some older iGPU generations, the implementation is at least partially, if not mostly software based. The hardware itself doesn't really know much about how it's being split up and doing it requires cooperation between both host and guest drivers. This was originally done by Chinese employees and sadly like a lot of their other open source work it got abandoned eventually (lack of industry demand in this case).
                    Last edited by binarybanana; 12 November 2023, 02:42 PM.

                    Comment


                    • #20
                      Originally posted by binarybanana View Post

                      While "GVT-g" (at Intel calls it) works well on some older iGPU generations, the implementation is at least partially, if not mostly software based. The hardware itself doesn't really know much about how it's being split up and doing it requires cooperation between both host and guest drivers. This was originally done by Chinese employees and sadly like a lot of their other open source work it got abandoned eventually (lack of industry demand in this case).
                      GVT-g was a good concept. It would have needed to evolve and improve, but it's a lot better than the current situation with consumer GPUs.

                      Intel contributes, but sometimes hinders progress too.

                      I used GVT-g some years ago. It wasn't perfect, but it worked well for my use cases.

                      Comment
