AMD Working On VirtIO GPU & Passthrough GPU Support For Xen Virtualization


  • #11
    Originally posted by Danny3 View Post
    Playing games is a server feature?
    Who the fuck said that proper virtualization is a server-only feature and not a consumer one too?
    If I want to run Adobe's Photoshop, Microsoft's Office, or MPC-HC + madVR with HDR passthrough, does that look to you like I care about servers or systems administration?
    I just want to use the hardware I paid for in every way I see fit for what I'm trying to achieve.

    If I want to share my CPU or GPU with a VM, then let me do it and stop with the artificial crap limitations!
    MxGPU/SR-IOV is in AMD professional/workstation cards. This VirtIO GPU and passthrough GPU work is for cards without MxGPU/SR-IOV.

    SR-IOV does require extra silicon, and silicon area has a price tag. The problem is that the W6600 and W6400 are not that optimized for gaming and have no HDMI ports.
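
    For anyone wondering which camp their own card falls into: on a Linux host, SR-IOV capable PCI functions expose a sriov_totalvfs attribute in sysfs. A minimal sketch; the sysfs paths are the standard kernel interface, everything else here is illustrative:

    ```python
    #!/usr/bin/env python3
    """List display-class PCI devices and whether they expose SR-IOV VFs."""
    from pathlib import Path

    PCI_DEVICES = Path("/sys/bus/pci/devices")

    for dev in sorted(PCI_DEVICES.iterdir()):
        # PCI class 0x03xxxx = display controller (GPU)
        if not (dev / "class").read_text().startswith("0x03"):
            continue
        totalvfs = dev / "sriov_totalvfs"  # only present on SR-IOV capable functions
        if totalvfs.exists():
            print(f"{dev.name}: SR-IOV capable, up to {totalvfs.read_text().strip()} VFs")
        else:
            print(f"{dev.name}: no SR-IOV, so VirtIO GPU or passthrough it is")
    ```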



    • #12
      Originally posted by Velocity View Post
      This artificial limitation of SR-IOV is just a symptom of the growing power of capitalism over free and liberal society models. I can design and operate open source systems, but the hardware consists of technology hostile to the user, because it defends the rights of capitalist stakeholders (DRM) and not the rights of the user of the technology.
      The SR-IOV thing is not an artificial limitation: implementing SR-IOV in the GPU costs silicon area, and then you need motherboard support.

      VirtIO GPU and passthrough GPU support for Xen is about providing that capability without the silicon cost. Yes, without SR-IOV hardware there is going to be slightly more overhead (so it will be slightly slower) than an SR-IOV solution.

      Consumer GPUs give up SR-IOV functionality to spend that silicon on a few more GPU cores. It could work out that the performance overhead of doing this without SR-IOV is offset by the extra processing power consumer GPUs gain by not having SR-IOV.

      Benchmarks when this stuff gets more developed will be interesting.
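
      Whenever those benchmarks happen, it will matter which path the guest actually got. A minimal guest-side sketch, assuming a Linux guest with sysfs mounted: the kernel driver bound to each DRM card distinguishes the paravirtual virtio_gpu device from a passed-through physical GPU.

      ```python
      #!/usr/bin/env python3
      """Report the kernel driver bound to each DRM card inside the guest."""
      import os
      from pathlib import Path

      for card in sorted(Path("/sys/class/drm").iterdir()):
          # only top-level nodes like card0, card1 (skip connectors such as card0-DP-1)
          if not (card.name.startswith("card") and card.name[4:].isdigit()):
              continue
          driver_link = card / "device" / "driver"
          if not driver_link.exists():
              continue
          driver = os.path.basename(os.readlink(driver_link))
          # virtio_gpu => paravirtual VirtIO GPU; amdgpu/i915/nouveau => passed-through hardware
          print(f"{card.name}: {driver}")
      ```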



      • #13
        Originally posted by cb88 View Post
        and full passthrough (host can't use it at all).
        Exactly what I'm using.
        Both those methods should result in near zero overhead and full performance, not just "good" performance.
        No, it is a little less than 100% performance and also worse latency (probably from the emulation part of QEMU and being dependent on a host OS). Also, SR-IOV should give even a little less performance, because your GPU needs to display the host desktop as well.

        Originally posted by Quackdoc View Post
        If it's GPU passthrough, that is very much not an option for many people.
        If it's because you only have one GPU, there are solutions: https://github.com/joeknock90/Single-GPU-Passthrough

        Not that I wouldn't like the convenience of SR-IOV, but we won't get that in the foreseeable future.
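
        Before following that guide, it is worth checking IOMMU groups, since VFIO passes through everything in the GPU's group (typically the GPU plus its HDMI audio function) as a unit. A small sketch, assuming a Linux host with the IOMMU enabled (amd_iommu=on / intel_iommu=on):

        ```python
        #!/usr/bin/env python3
        """Print each IOMMU group and the PCI devices it contains."""
        from pathlib import Path

        GROUPS = Path("/sys/kernel/iommu_groups")

        for group in sorted(GROUPS.iterdir(), key=lambda p: int(p.name)):
            print(f"IOMMU group {group.name}:")
            for dev in sorted((group / "devices").iterdir()):
                pci = Path("/sys/bus/pci/devices") / dev.name
                cls = (pci / "class").read_text().strip()
                print(f"  {dev.name} class={cls}")  # everything in one group goes to the VM together
        ```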



        • #14
          Originally posted by Danny3 View Post

          Playing games is a server feature?
          Who the fuck said that proper virtualization is a server-only feature and not a consumer one too?
          If I want to run Adobe's Photoshop, Microsoft's Office, or MPC-HC + madVR with HDR passthrough, does that look to you like I care about servers or systems administration?
          I just want to use the hardware I paid for in every way I see fit for what I'm trying to achieve.

          If I want to share my CPU or GPU with a VM, then let me do it and stop with the artificial crap limitations!

          What's the problem with that?

          If I were to buy a knife, fork, spoon or hammer and use them for purposes other than intended, should I have paid extra?
          Do you pay extra if you take your non-off-road-ready car somewhere off-road, and would you agree to pay extra for that?
          The problem is that it takes people time and money to work on these features. Playing games is not a server feature; multiple operating systems sharing your GPU, on the other hand, is, and it costs time and money to develop and support (same with many of the other enterprise features). Just do single GPU passthrough. This is how AMD/Nvidia recoup the financial effort they spend developing and supporting these features: by making people pay extra for them. So if you want them, pay for it.

          However, evidently you DIDN'T pay for the features; you paid for whatever features gpu_manufacturer_x exposes to you, in whatever form that takes. Don't blame AMD or Nvidia because you use Linux. All the features you listed are available to you via single GPU passthrough, or by just running Windows. Acting as if the lack of SR-IOV is stopping you from doing those things is foolish; it isn't at all, it just makes them either less convenient or less performant. Either way, neither of those is AMD's fault.

          Maybe one day SR-IOV will become commonplace; maybe someone will take the lead and start pumping out such GPUs, and then gpu_manufacturer_x will need to take a hit to stay competitive. It's happened in the past, and it will happen in the future.

          For me, I don't even see a need for SR-IOV anymore in consumer land; I would rather see virtio-venus and/or native context support in Linux.



          • #15
            Originally posted by Anux View Post
            If it's because you only have one GPU, there are solutions: https://github.com/joeknock90/Single-GPU-Passthrough

            Not that I wouldn't like the convenience of SR-IOV, but we won't get that in the foreseeable future.
            It highly depends on who is motivated to work on the features. I think Vulkan Venus is probably suitable enough for many people; it will be good enough for basic VDI and possibly even cloud gaming usage. Native context I think would be ideal, and I can see some vendors trying to invest in it for Windows support.

            EDIT: I am intimately familiar with single GPU passthrough; I manage multiple computers for people who use it, but it can be buggy and it's not great. I would like to see Wayland handle losing its only GPU by falling back to software rendering, then picking the GPU back up when it is freed, before I recommend this.

            KDE handles GPU hotplug elegantly, for secondary GPUs at least.
            Last edited by Quackdoc; 14 March 2023, 05:04 AM.



            • #16
              Originally posted by Quackdoc View Post
              It highly depends on who is motivated to work on the features. I think Vulkan Venus is probably suitable enough for many people; it will be good enough for basic VDI and possibly even cloud gaming usage. Native context I think would be ideal, and I can see some vendors trying to invest in it for Windows support.
              Native context support for VirtIO comes out of Google developers working on Chrome OS, because Vulkan Venus and virgl end up with a few problems once you start using them a lot, the issue being how much communication they need to work.

              Windows support is up in the air; it could remain a pro card feature.
              Last edited by oiaohm; 14 March 2023, 06:10 AM.



              • #17
                Originally posted by Quackdoc View Post
                Maybe one day SR-IOV will become commonplace; maybe someone will take the lead and start pumping out such GPUs, and then gpu_manufacturer_x will need to take a hit to stay competitive. It's happened in the past, and it will happen in the future.
                Maybe it won't either. SR-IOV has a silicon area cost and a silicon integration performance cost. We have not had native context vs SR-IOV comparisons yet. There is a real chance that the silicon area cost of SR-IOV covers the native context cost for 1 to 2 VMs. Yes, silicon area put into SR-IOV could also be put into GPU compute units.

                Once you include SR-IOV in hardware, no matter which driver you use you will be paying some cost for going through the SR-IOV hardware. The big thing about VirtIO native context is that it is not a silicon cost. VirtIO native context is a driver/software runtime cost, so a system that is not using VirtIO native context is not paying this overhead.

                Yes, there will for sure be a point where SR-IOV is cheaper than using VirtIO native context.

                For a computer that is only going to run one OS with no VMs, SR-IOV makes no sense, because it would be spending silicon area and paying a performance cost for a feature those users are never going to use. Like it or not, SR-IOV will most likely never make sense for at least a percentage of consumer cards. It is great that we now have a software option for those consumer cards where SR-IOV does not make sense.



                • #18
                  Originally posted by oiaohm View Post

                  Maybe it won't either. SR-IOV has a silicon area cost and a silicon integration performance cost. We have not had native context vs SR-IOV comparisons yet. There is a real chance that the silicon area cost of SR-IOV covers the native context cost for 1 to 2 VMs. Yes, silicon area put into SR-IOV could also be put into GPU compute units.

                  Once you include SR-IOV in hardware, no matter which driver you use you will be paying some cost for going through the SR-IOV hardware. The big thing about VirtIO native context is that it is not a silicon cost. VirtIO native context is a driver/software runtime cost, so a system that is not using VirtIO native context is not paying this overhead.

                  Yes, there will for sure be a point where SR-IOV is cheaper than using VirtIO native context.

                  For a computer that is only going to run one OS with no VMs, SR-IOV makes no sense, because it would be spending silicon area and paying a performance cost for a feature those users are never going to use. Like it or not, SR-IOV will most likely never make sense for at least a percentage of consumer cards. It is great that we now have a software option for those consumer cards where SR-IOV does not make sense.
                  Originally posted by oiaohm View Post
                  Native context support for VirtIO comes out of Google developers working on Chrome OS, because Vulkan Venus and virgl end up with a few problems once you start using them a lot, the issue being how much communication they need to work.

                  Windows support is up in the air; it could remain a pro card feature.
                  There are indeed problems, of course, but that doesn't mean it's all that bad. Lower overhead will always be best, and native context will be the lowest; theoretically it should be lower than SR-IOV, AFAIK.

                  The biggest issue with native context and Venus that I can foresee is that there is no GPU cgroup equivalent, AFAIK, which could be troublesome. But with that, I think Venus and native context will be a lot more viable for larger-scale deployments. Of course SR-IOV will always be superior where security is a concern, as far as I am aware, though I admit I lack research into this area specifically.



                  • #19
                    Originally posted by Quackdoc View Post
                    There are indeed problems, of course, but that doesn't mean it's all that bad. Lower overhead will always be best, and native context will be the lowest; theoretically it should be lower than SR-IOV, AFAIK.

                    The biggest issue with native context and Venus that I can foresee is that there is no GPU cgroup equivalent, AFAIK, which could be troublesome. But with that, I think Venus and native context will be a lot more viable for larger-scale deployments. Of course SR-IOV will always be superior where security is a concern, as far as I am aware, though I admit I lack research into this area specifically.


                    There is quite a bit of difference between native context and Venus. Venus has you going through the host's Vulkan driver layer. With native context you don't; the guest runs its own OpenGL/Vulkan driver, so you are not mixing as much stuff.

                    Now, if you are enforcing security on native context, as Google's developers will want to do, there is going to be some overhead. This is why I am not sure that SR-IOV will always be more expensive. A CPU-enforced GPU cgroup equal to SR-IOV for native context is not going to be without its cost; yes, it is more feasible with native context than with Venus.

                    The open source Qualcomm driver, where native context comes from, is for hardware that does not have SR-IOV, same as a lot of other smaller GPU vendors.

                    Venus/virgl will be for parties that implement neither native context nor SR-IOV, as both of those options have more overhead than native context and SR-IOV. Native context will be for hardware whose drivers support it but that doesn't have SR-IOV.

                    Remember, with hardware that has SR-IOV (MxGPU/GVT-g) you pay the SR-IOV cost on GPU access whether you are using SR-IOV or not, and the cost does not increase; this is why, if the hardware has SR-IOV, you might as well use it.

                    SR-IOV carrying a silicon area cost and a latency cost means it is valid for there to be a class of GPU where it does not fit. Protection around native context is most likely going to come with a CPU usage cost.

                    Time will tell how all those costs balance out. I will not be surprised if SoC GPUs like Qualcomm, Raspberry Pi and so on end up with native context and no SR-IOV. I would not be surprised if consumer AMD cards end up being native context only. For 1 or 2 VMs, the CPU cost of building a software GPU cgroup could be very acceptable; for 10 to 20 VMs it very much is not going to be.

                    I see native context and SR-IOV splitting along consumer and professional cards for quite some time. We need SR-IOV and native context implementations on some larger desktop GPUs to see what the costs are and how costly the protections will be.

                    Remember, modern GPUs do have memory management units that help protect memory, so one application cannot just snoop on another application's memory simply because it is in the GPU. So doing cgroup-like GPU protection for something like native context is within the feature set you find in consumer AMD cards and SoC GPUs. Yes, the host OS will be able to snoop more than it could with an SR-IOV setup.

                    Yes, the question that needs answering is: does the consumer class really need the means to run a massive number of VMs, or the maximum level of protection of SR-IOV, or is native context, using the GPU's included MMU features to prevent applications snooping on other applications, good enough? I do suspect native context will be good enough, as long as Windows drivers get support as well.



                    • #20
                      Originally posted by Anux View Post
                      Exactly what I'm using.

                      No, it is a little less than 100% performance and also worse latency (probably from the emulation part of QEMU and being dependent on a host OS). Also, SR-IOV should give even a little less performance, because your GPU needs to display the host desktop as well.


                      If it's because you only have one GPU, there are solutions: https://github.com/joeknock90/Single-GPU-Passthrough

                      Not that I wouldn't like the convenience of SR-IOV, but we won't get that in the foreseeable future.
                      Intel already ships SR-IOV on all its GPUs...

                      Dedicating a full GPU to just a single VM sucks.

                      Every other solution, as I already freaking said, requires overhead, and by overhead I don't mean 1-2%, I mean several tens of percent.
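
                      On hardware and drivers that do expose GPU SR-IOV, creating the virtual functions is just a sysfs write. A hedged sketch: the PCI address and VF count below are placeholders, and it assumes root plus a GPU driver that actually supports spawning VFs.

                      ```python
                      #!/usr/bin/env python3
                      """Enable N virtual functions on an SR-IOV capable GPU via sysfs."""
                      import sys
                      from pathlib import Path

                      def enable_vfs(bdf: str, num_vfs: int) -> None:
                          dev = Path("/sys/bus/pci/devices") / bdf
                          totalvfs = dev / "sriov_totalvfs"
                          if not totalvfs.exists():
                              sys.exit(f"{bdf} is not SR-IOV capable")
                          total = int(totalvfs.read_text())
                          if num_vfs > total:
                              sys.exit(f"{bdf} only supports {total} VFs")
                          # the kernel requires resetting to 0 before setting a new non-zero VF count
                          (dev / "sriov_numvfs").write_text("0")
                          (dev / "sriov_numvfs").write_text(str(num_vfs))
                          print(f"enabled {num_vfs} VFs on {bdf}")

                      if __name__ == "__main__":
                          enable_vfs("0000:00:02.0", 2)  # example address (an Intel iGPU); adjust for your system
                      ```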

