Intel Begins Sorting Out SR-IOV Support For The Xe Kernel Graphics Driver


  • #21
    Originally posted by qarium View Post
    Why not send that $100 to the project that is implementing this feature in software with Vulkan?
    Because I was not aware of that effort, having given up on fractional GPU virtualization. I will take a look at that. If it looks viable, I will indeed throw some appropriate monetary units their way.



    • #22
      FWIW, somehow WSL2 (WSLg, rather) is fractionalizing the GPU. I've never taken the time to figure out how they're doing it. Having read this thread, I suspect it is an all-software emulation.



      • #23
        Originally posted by sharpjs View Post
        Because I was not aware of that effort, having given up on fractional GPU virtualization. I will take a look at that. If it looks viable, I will indeed throw some appropriate monetary units their way.
        virtio-gpu's Virgl/Venus backend is what provides the Vulkan support. I don't think anyone is working on a Windows driver for Vulkan, but IIRC someone is working on an OpenGL driver; maybe they could be motivated to work on a Vulkan driver too? The PR in question for the OpenGL driver is here: https://github.com/virtio-win/kvm-gu...ndows/pull/943

        Originally posted by sharpjs View Post
        FWIW, somehow WSL2 (WSLg, rather) is fractionalizing the GPU. I've never taken the time to figure out how they're doing it. Having read this thread, I suspect it is an all-software emulation.
        Not really software emulation..? It's actually a feature baked into DX12, so, like, yes and no. WSLg is D3D12 passthrough, so it's a lot like Venus.
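
        A rough way to check from inside a WSL2 distro whether the GPU is actually being paravirtualized rather than software-rendered (assumptions on my part: Mesa's d3d12 driver is in use and the /dev/dxg node is exposed; exact strings may differ on your setup):

        Code:
        import os
        import shutil
        import subprocess

        # /dev/dxg is the dxgkrnl paravirtualization device that WSL2 exposes to guests.
        has_dxg = os.path.exists("/dev/dxg")
        print("dxgkrnl paravirt device (/dev/dxg):", "present" if has_dxg else "missing")

        # With Mesa's d3d12 driver layering OpenGL on the passed-through GPU,
        # the renderer string reported by glxinfo should mention D3D12.
        if shutil.which("glxinfo"):
            out = subprocess.run(["glxinfo", "-B"], capture_output=True, text=True).stdout
            for line in out.splitlines():
                if "renderer" in line.lower():
                    print(line.strip())
        else:
            print("glxinfo not installed; skipping renderer check")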

        EDIT: addendum, the PR also adds support for d3d10umd.
        Last edited by Quackdoc; 12 November 2023, 11:08 AM.



        • #24
          Originally posted by Quackdoc View Post
          The PR in question for the OpenGL driver is here: https://github.com/virtio-win/kvm-gu...ndows/pull/943
          Nice! Last time I checked, GPU virtio for Windows was just someone's seemingly-abandoned proof-of-concept code that wasn't ready for real use. I've bookmarked this PR and repo and will keep a keen eye on it. Thank you!



          • #25
            Originally posted by pong View Post
            I don't believe you're accurate about that.
            THE MOST for an accelerator card for those very things; we deserve better in 2023/4.
            Yeah a small tweak to the drivers and it'd probably "just work" but it should be supported in a standard, standards based (SR-IOV) way by the GPU manufacturers.
            Of course we deserve better in 2023/2024, but it looks like Intel was not our savior...

            I say we will never change the GPU industry if we do not support open-source hardware like Libre-SOC...



            • #26
              Originally posted by sharpjs View Post
              Because I was not aware of that effort, having given up on fractional GPU virtualization. I will take a look at that. If it looks viable, I will indeed throw some appropriate monetary units their way.
              You can find it here: https://www.collabora.com/news-and-b...vulkan-driver/

              It's called the QEMU Venus Vulkan driver for VirtIO-GPU.

              This is a software solution that does not use SR-IOV.

              If more people support this, hardware companies like Intel, AMD, and NVIDIA will feel pressure to enable SR-IOV on all their hardware.
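
              If you want to try it, here is a rough sketch of launching a guest with the Venus backend. The option names follow recent QEMU documentation for Venus and are an assumption on my part; they may differ for your QEMU build, and the disk image path is just a placeholder:

              Code:
              import subprocess

              # Rough sketch: boot a KVM guest with virtio-gpu Venus (Vulkan) enabled.
              # Flags follow recent QEMU docs for the Venus backend; adjust for your build.
              cmd = [
                  "qemu-system-x86_64",
                  "-enable-kvm", "-m", "8G", "-cpu", "host",
                  # Venus wants host-visible blob memory, hence the memfd memory backend.
                  "-object", "memory-backend-memfd,id=mem1,size=8G",
                  "-machine", "q35,memory-backend=mem1",
                  "-device", "virtio-vga-gl,hostmem=4G,blob=true,venus=true",
                  "-display", "gtk,gl=on",
                  "-drive", "file=guest.qcow2,if=virtio",  # placeholder disk image
              ]
              subprocess.run(cmd, check=True)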



              • #27
                Originally posted by timofonic View Post

                GVT-g was a good concept. Even though it would need to evolve and improve, it's a lot better than the current situation with consumer GPUs.

                Intel contributes, but sometimes hinders progress too.

                I used GVT-g some years ago. It wasn't perfect, but it worked well for my use cases.
                I like it, but Intel seems to think it's not worth the cost of developing it further. Maybe plans for proper hardware SR-IOV coming with the dGPUs played a role, similar to the old Chinese OpenCL stack getting abandoned for the Intel oneAPI stuff. Too bad that's not actually working.
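
                For anyone curious, GVT-g vGPU instances were created through the kernel's mediated-device (mdev) sysfs interface; a minimal sketch, with the iGPU PCI address and the GVTg type name as examples that vary per machine (needs root and i915.enable_gvt=1):

                Code:
                import os
                import uuid

                # GVT-g advertises its vGPU "types" under the iGPU's mdev_supported_types dir.
                igpu = "/sys/bus/pci/devices/0000:00:02.0"     # typical Intel iGPU address (example)
                types_dir = os.path.join(igpu, "mdev_supported_types")

                for t in sorted(os.listdir(types_dir)):        # e.g. i915-GVTg_V5_4, i915-GVTg_V5_8
                    print("available vGPU type:", t)

                # Creating a vGPU instance means writing a UUID into the chosen type's "create" node;
                # the resulting mdev device can then be handed to QEMU via vfio-mdev.
                chosen = os.path.join(types_dir, "i915-GVTg_V5_8", "create")   # example type
                with open(chosen, "w") as f:
                    f.write(str(uuid.uuid4()))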



                • #28
                  Originally posted by binarybanana View Post

                  I like it, but Intel seems to think it's not worth the cost of developing it further. Maybe plans for proper hardware SR-IOV coming with the dGPUs played a role, similar to the old Chinese OpenCL stack getting abandoned for the Intel oneAPI stuff. Too bad that's not actually working.
                  I'm becoming pessimistic about the GPU situation. Too much stagnation, too many overvalued products; innovation has slowed down a lot, and really good GPU hardware is really expensive.

                  Maybe both CPUs and GPUs need more aggressive competition instead of the current oligopoly theater.



                  • #29
                    Originally posted by timofonic View Post
                    I'm becoming pessimistic about the GPU situation. Too much stagnation, too many overvalued products; innovation has slowed down a lot, and really good GPU hardware is really expensive.
                    Maybe both CPUs and GPUs need more aggressive competition instead of the current oligopoly theater.
                    Exactly... Intel proved to us that they have no intention of changing the market. Intel smelled money and wants to get paid for workstation PRO cards too...
                    This is not how we get SR-IOV everywhere...

                    I see only two possibilities for how we can fix this (to force Intel, AMD, and NVIDIA to bend the knee to our will):

                    One strategy I see is improving the "QEMU Venus Vulkan driver for VirtIO-GPU"; if we get this improved, then their "SR-IOV" selling point goes to zero.

                    The other way I see is open-source GPUs like Libre-SOC: with real open-source hardware, they cannot turn off this feature.
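
                    For reference, on GPUs and drivers that do expose SR-IOV, the virtual functions are created through the standard PCI sysfs interface; a minimal sketch (needs root, the PCI address is just an example, and the GPU driver has to support VFs for the write to succeed):

                    Code:
                    # Standard PCI SR-IOV sysfs knobs.
                    dev = "/sys/bus/pci/devices/0000:03:00.0"   # example PCI address of the GPU

                    with open(f"{dev}/sriov_totalvfs") as f:
                        total = int(f.read())
                    print("VFs supported by the device:", total)

                    # Enable e.g. 2 virtual functions; each VF can then be bound to
                    # vfio-pci and passed through to a separate VM.
                    with open(f"{dev}/sriov_numvfs", "w") as f:
                        f.write("2")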




                    • #30
                      Yes, I think it's increasingly important and desirable to have open hardware architectures / designs.
                      The major problem with that is that even if it is possible to design a decent computer / peripheral that is open, the ability to make the chips that implement it will remain very closed, so one is still at the mercy of the IC fabricators to make something that has good availability and good value and is also trustable, since one cannot know what may differ between the "open design" and what ends up in the fabricated chip.

                      With respect to GPUs, though, I think it is ridiculous to depend on "toy" GPUs as the foundation not only of our 2D/3D graphics processing but also of our HPC, SIMD, parallel, ray tracing, tensor, AI/ML, and high-RAM-bandwidth general-purpose computing.

                      OK, there is a market segment that is not totally mainstream (a grandparent on their Chromebook / cell phone / small laptop / tablet) but is very prominent (gamers, creative consumers, developers, people using graphics- or compute-intensive productivity tools, people using AI/ML, etc.) that routinely and willingly spends $500-$2000 MORE than the cost of a basic "powerful desktop computer" to get the capabilities that a GPU offers.
                      Mainly those capabilities are (A) high RAM bandwidth (e.g. ~1 TB/s, more or less), (B) highly parallel integer / FP computation (e.g. ~4k SIMD ALUs, more or less), (C) accelerating architectural elements for tensor / matrix / vector / ray tracing / AI-ML operations, and finally (D) several actual display interfaces that can scan out frame buffer contents (HDMI, DisplayPort, etc.).

                      Any ray-tracing cores, along with item (D) (the actual, literal frame buffer DMA output to multiple DisplayPort / HDMI interfaces), are the ONLY things that really have anything specifically to do with actual graphics / display interfacing.

                      All the rest of a GPU's functions (in a modern programmable-shader-pipeline GPU) are actually just high-bandwidth RAM interfaces, parallel programmable processor cores, or COMPUTE-specific acceleration cores (tensor / AI-ML / etc.) -- none of which have any real "reason" to be associated with a GPU as opposed to being part of the compute / memory architecture of the "core computer".

                      It should be obvious that the *GRAPHICS*-specific parts of a GPU (display interfaces, frame buffer DMA, maybe some ray tracing H/W) are technologically pretty insignificant compared to the REST of what's in a modern mid-range or higher GPU, so if one is talking about a $500 GPU, surely the "display interface and graphics specific" stuff accounts for maybe 20% of that cost, probably less.

                      So, since there's obviously DEMAND (many millions of units per year), obviously precedent (people want the AI/ML, compute, high-RAM-bandwidth, and graphics and non-graphics COMPUTE capabilities of a modern GPU), and NO END IN SIGHT (people will want ever more AI/ML and graphics processing until there's real-time, fluidly rendered, truly photorealistic VR / AR / synthetic holograms, etc.), the elephant-in-the-room question few people seem to be asking is: *WHY* are 80% of the RAM / COMPUTE / math / SIMD / parallel capabilities associated with GPUs NOT made intrinsic and foundational in mid/high-range consumer desktop / workstation ARCHITECTURES (CPU / RAM / chipset / motherboard), INSTEAD of being lumped onto a "toy" GPU as an add-on?

                      Moore's law has enabled desktop PCs with 0.5/1/2 TB/s access to many gigabytes of RAM, massively parallel int/FP ALUs (2k-6k of them), and acceleration cores for tensor/matrix/vector/AI-ML operations delivering one to many TOPS of performance -- all of it found in a typical well-equipped "teenage gamer's" desktop with a $500-$1000 GPU.
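
                      As a back-of-the-envelope sanity check on those bandwidth numbers (the bus widths and data rates below are illustrative, not tied to any specific product):

                      Code:
                      # Memory bandwidth = bus width in bytes * per-pin data rate.
                      def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
                          return bus_bits / 8 * gbps_per_pin

                      print(bandwidth_gb_s(128, 16))    # 256 GB/s  -- small GDDR6 card
                      print(bandwidth_gb_s(256, 18))    # 576 GB/s  -- mid-range card
                      print(bandwidth_gb_s(384, 21))    # ~1008 GB/s -- high-end card, the ~1 TB/s class
                      print(bandwidth_gb_s(128, 6.4))   # ~102 GB/s -- dual-channel DDR5-6400 desktop, for contrast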

                      But the form factors are ridiculous (just TRY having / using more than one or maybe two PCIe x16 slots once you've got a 2.5-3+ slot GPU, fans, cables, ...). The mechanical & power architecture is deplorable (melting / igniting cables & connectors on top-of-the-line modern gear; kilowatt-plus PSUs and GPUs; cables that don't even FIT right into almost any case). There's a major lack of PCIe lanes / slots one can actually use.
                      Then there are artificial limitations: no SR-IOV, short / bad GPU warranties, GPUs not designed for maintenance / quality / long life (fans, thermal solution issues, easy access to clean / replace parts like fans), vendors that don't even support their "consumer" cards in their compute / ML libraries (e.g. AMD RDNAx vs ROCm), and virtualization / sharing that is completely non-existent.

                      So instead of anemic 2-channel DDRx interfaces, CPUs with ONLY ~16 cores, motherboards with ONLY 4 DIMM slots (and you're lucky if you can even USE all four without problems / trade-offs), and SIMD/vector units built into the CPUs that look like pathetic toys next to even a small GPU (AVX-512, NEON, ...), why not move some of this high-bandwidth RAM and high-performance compute / AI-ML / massively parallel SIMD capability into the core machine architecture where it belongs? Properly integrate virtualization so EVERYTHING virtualizes and shares well. Design the form factors, sockets, etc. so we get real expandability and scalability back without insane physical / mechanical / electrical compromises.

                      Fine, keep Chromebooks, laptops, and entry-level Ryzen / Intel desktops and laptops as they are for the low-end "don't need a dGPU anyway" market.
                      But anything higher, which WOULD have a $500-$2000 dGPU, really needs an architectural overhaul for the sake of sanity, quality, usability, and a holistic "it all works together" design.

                      It is particularly ironic that AMD & Intel, who make almost all the CPUs out there, have for every single processor of the last N generations included MMUs, IOMMUs, and several other virtualization technologies right in their CPUs & system chipsets -- even in the lowest-cost entry-level CPUs intended for consumer markets.
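
                      That part is easy to verify on basically any recent consumer box; a quick Linux-specific sketch checking for the CPU virtualization flags and an active IOMMU (paths are the usual procfs/sysfs locations):

                      Code:
                      import os

                      # Collect the CPU feature flags from /proc/cpuinfo.
                      flags = set()
                      with open("/proc/cpuinfo") as f:
                          for line in f:
                              if line.startswith("flags"):
                                  flags.update(line.split(":", 1)[1].split())

                      print("VT-x (vmx):", "vmx" in flags)
                      print("AMD-V (svm):", "svm" in flags)

                      # A populated /sys/kernel/iommu_groups means the IOMMU is active.
                      groups = "/sys/kernel/iommu_groups"
                      n = len(os.listdir(groups)) if os.path.isdir(groups) else 0
                      print("IOMMU groups:", n, "(0 usually means the IOMMU is off or not exposed)")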

                      But those SAME COMPANIES make dGPUs with the most virtualization- / resource-sharing- / isolation-hostile HW / SW stack possible for the GPU products they sell into the same consumer desktop market. It makes no sense to have such a bipolar attitude toward what should be a uniform "everything can be isolated and secured for multi-process / multi-user / multi-level-security use, and everything can be virtualized" architecture.

                      Even cell phones these days have tensor / vector / AI-ML acceleration cores built right into the CPU, but somehow the desktop architecture is not even remotely, holistically updated -- even once every couple of decades -- to fold the "should have been done since 2010" scaling of what is now "GPU" technology into the core form factor / chipset / motherboard / CPU / memory architecture of the desktop / workstation / server.


                      Originally posted by qarium View Post

                      Of course we deserve better in 2023/2024, but it looks like Intel was not our savior...

                      I say we will never change the GPU industry if we do not support open-source hardware like Libre-SOC...

