RadeonSI Lands Bits In Mesa 20.2 For Better Dealing With GPU Virtualization

  • artivision
    Senior Member
    • Apr 2011
    • 1181

    #11
    Originally posted by agd5f View Post

    You can write something today using drm leases to set up independent displays on different processes and you can use render nodes to submit work to the GPU from different processes. The plumbing is all there on the kernel side.
    Well, render node tech isn't needed: if you run two different windowed 3D games on your OS, both will run at half the fps. There are two problems here. First, if both games are demanding, then without a hardware solution resources will be consumed unpredictably, causing flashing and lag. Second, how will you connect those windows to two displays while the inaccessible GPU microcode tries to stop you? I'm guessing a mediocre solution could be an extended display across two monitors with two borderless game windows side by side (exclusive fullscreen is out of the question). Although AMD could solve this by letting their GPU present itself as two or three GPUs, with every instance getting a different display output. That way the server model isn't threatened, and AMD benefits from selling stronger GPUs and CPUs.

    Also, I do think the person above is speaking to the mess AMD is in.
    Last edited by artivision; 22 July 2020, 11:00 PM.


    • agd5f
      AMD Graphics Driver Developer
      • Dec 2007
      • 3939

      #12
      It's still one GPU shared by multiple processes, whether you use SR-IOV or not. SR-IOV works by time slicing the GPU. If your job doesn't finish in time, you get preempted. It's the same preemption you can do on bare metal.

      I'm not really following your example. You can start multiple processes and each one can lease a display and submit work to a render node independently.
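      A minimal sketch of the render-node side (illustrative only: the /dev/dri/renderD* path pattern is the standard DRM naming convention, but the helper function here is invented for this example and simply returns an empty list on machines without a GPU):

```python
import glob
import os


def list_render_nodes(dri_dir="/dev/dri"):
    """Return the DRM render-node device paths (e.g. /dev/dri/renderD128).

    Render nodes let unprivileged processes submit GPU work without owning
    the display, so several processes can each open their own node.
    Returns an empty list if the directory or nodes don't exist.
    """
    return sorted(glob.glob(os.path.join(dri_dir, "renderD*")))


if __name__ == "__main__":
    nodes = list_render_nodes()
    print(nodes if nodes else "no render nodes found")
```

      Actually submitting work through a node still goes through an API such as Vulkan, OpenGL, or libdrm; this only shows that each process can pick its own node independently.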


      • boxie
        Senior Member
        • Aug 2013
        • 1932

        #13
        agd5f If I may ask a simple question.

        Will this allow me to start a Windows VM and pass my GPU (vega64) through to it, and have Linux+Windows share the same GPU (and thus get lots of perf)?

        (If I understand correctly, each would need to use a separate render node, but both would get GPU accel?)

        (with the end goal of being able to game in a VM)


        • agd5f
          AMD Graphics Driver Developer
          • Dec 2007
          • 3939

          #14
          Originally posted by boxie View Post
          Will this allow me to start a Windows VM and pass my GPU (vega64) through to it and have Linux+Windows share the same GPU (and thus getting lots of perf).
          When you say "this", what are you referring to? SR-IOV is a feature on special headless datacenter cards which exposes some aspects of the GPU as multiple virtual functions that can be passed through to guest virtual machines. The GPU is time sliced between the virtual functions. If you were to get an SR-IOV capable card, you could start multiple virtual machines on it, each with a GPU virtual function passed through to it, and load whatever OS you wanted in each virtual machine. There are no displays. If you wanted to see the rendered content from a virtual machine, you'd need to capture it and send it to whatever host was connected to the virtual machine. Think desktop-as-a-service or cloud gaming type use cases: the GPU renders in the cloud, and the content is streamed to users.
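          The time-slicing behaviour described above can be sketched as a toy round-robin scheduler (purely illustrative: the VF names, work units, and quantum are invented, and on real hardware the scheduling is done by GPU firmware, not host code):

```python
from collections import deque


def time_slice(jobs, quantum):
    """Toy round-robin scheduler illustrating SR-IOV-style time slicing.

    jobs: dict mapping a virtual-function name to its remaining work units.
    Each VF runs for at most `quantum` units per turn; if its job isn't
    finished in time it is preempted and requeued, like a VF losing its
    slice. Returns the (vf, units_run) slices in execution order.
    """
    queue = deque(jobs.items())
    timeline = []
    while queue:
        vf, remaining = queue.popleft()
        ran = min(quantum, remaining)
        timeline.append((vf, ran))
        if remaining > ran:  # didn't finish in its slice: preempt, requeue
            queue.append((vf, remaining - ran))
    return timeline
```

          With two VFs and a quantum of 2, `time_slice({"vf0": 3, "vf1": 5}, 2)` interleaves the slices until both jobs finish, which is the sense in which it is still one GPU shared by everyone.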


          • bridgman
            AMD Linux
            • Oct 2007
            • 13188

            #15
            Originally posted by boxie View Post
            agd5f If I may ask a simple question.

            Will this allow me to start a Windows VM and pass my GPU (vega64) through to it and have Linux+Windows share the same GPU (and thus getting lots of perf). (If I understand correctly, each would need to use a separate render node, but both would get GPU accel?)
            You would need something like Virgil to get drawing commands from the guest to the host, but otherwise I think it should be doable. If, on the other hand, you are thinking about having two kernel drivers (one on the host, one on the guest) sharing the GPU, I don't think that is supported yet.

            EDIT - I had assumed that you were referring to the non-virtualization solution that agd5f described on the previous page, with virtualization added back (VM guest + host sharing the GPU), but looking back at the thread I have to second agd5f's "what do you mean by <this>?" question.
            Last edited by bridgman; 23 July 2020, 01:55 PM.


            • boxie
              Senior Member
              • Aug 2013
              • 1932

              #16
              Originally posted by agd5f View Post

              When you say "this", what are you referring to? SR-IOV is a feature on special headless datacenter cards which exposes some aspects of the GPU as multiple virtual functions that can be passed through to guest virtual machines. The GPU is time sliced between the virtual functions. If you were to get an SR-IOV capable card, you could start multiple virtual machines on it, each with a GPU virtual function passed through to it, and load whatever OS you wanted in each virtual machine. There are no displays. If you wanted to see the rendered content from a virtual machine, you'd need to capture it and send it to whatever host was connected to the virtual machine. Think desktop-as-a-service or cloud gaming type use cases: the GPU renders in the cloud, and the content is streamed to users.
              and bridgman

              Thank you both for taking the time to answer my question. I ask because the article says
              This requires SR-IOV or using amdgpu.mcbp=1 with the kernel
              which implies that it works *without* SR-IOV - and hence my question.

              With agd5f mentioning multiple render nodes on the other page, it got me wondering whether, with the code this article is about, it is now possible to share a GPU between host and guest (with the guest identifying the card properly, not as a virtualised card).

              If the article should instead say "requires both SR-IOV AND using amdgpu.mcbp=1", that would make much more sense given your answers.

              The end goal would be to be able to spin up a VM and share the GPU between host and guest, so that games/software that don't like/run well under Wine could easily be run there (and the GPU be identified correctly in the VM).
              Last edited by boxie; 23 July 2020, 01:20 AM.
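              For what it's worth, the parameter the article quotes can be set on the kernel command line or in a modprobe options file (a sketch of the usual mechanisms only; verify that your kernel's amdgpu module actually exposes mcbp with modinfo before relying on it):

```shell
# Option 1: kernel command line (e.g. appended to GRUB_CMDLINE_LINUX):
#   amdgpu.mcbp=1

# Option 2: module options file, picked up when amdgpu loads
echo "options amdgpu mcbp=1" | sudo tee /etc/modprobe.d/amdgpu-mcbp.conf

# Check that the parameter exists, and its current value
modinfo -p amdgpu | grep mcbp
cat /sys/module/amdgpu/parameters/mcbp
```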


              • Jabberwocky
                Senior Member
                • Aug 2011
                • 1211

                #17
                This reminds me of sharing the family computer between siblings. Our mid-command buffer preemption was cutting power to the machine, and our accuracy was down to the second.

                Does mirroring registers introduce any performance issues? I'm assuming that memory is slower than the registers.

                I'm also curious whether it is possible to support GFX8/Polaris. Plans or time-frames don't matter; I'm just wondering if there are any "show stoppers".


                • geearf
                  Senior Member
                  • Dec 2011
                  • 2151

                  #18
                  Originally posted by caligula View Post
                  Correct me if I'm wrong but dual GPUs used to be quite common for gaming.
                  When was that?

                  I know of SLI and CrossFire, but I have never seen them be that common.


                  • Danny3
                    Senior Member
                    • Apr 2012
                    • 2416

                    #19
                    Originally posted by geearf View Post

                    When was that?

                    I know of SLI and CrossFire, but I have never seen them be that common.
                    Indeed!
                    I am a gamer too, but I have never bought 2 GPUs for this.
                    I would rather buy one GPU that's 2x more powerful than buy 2 GPUs.
                    I don't think 2 GPUs can ever match the power efficiency and noise of one, plus they make small cases too crowded.

                    Maybe it's advantageous for compute, if AMD fixes the damn ROCm installation on current distributions.


                    • agd5f
                      AMD Graphics Driver Developer
                      • Dec 2007
                      • 3939

                      #20
                      Originally posted by boxie View Post

                      and bridgman

                      Thank you both for taking the time to answer my question. I ask because the article says

                      which implies that it works *without* SR-IOV - and hence my question.

                      With agd5f mentioning multiple render nodes on the other page, it got me wondering whether, with the code this article is about, it is now possible to share a GPU between host and guest (with the guest identifying the card properly, not as a virtualised card).

                      If the article should instead say "requires both SR-IOV AND using amdgpu.mcbp=1", that would make much more sense given your answers.

                      The end goal would be to be able to spin up a VM and share the GPU between host and guest, so that games/software that don't like/run well under Wine could easily be run there (and the GPU be identified correctly in the VM).
                      There is no way to share a GPU between the host and a guest using standard driver stacks, whether you are using SR-IOV or not. You'd need some sort of para-virtualized solution like Virgil. Mid command buffer preemption (mcbp) has nothing to do with virtualization per se. It happens to be used by SR-IOV indirectly, but you can use it on bare metal as well to preempt work on the GPU.

