RadeonSI Lands Bits In Mesa 20.2 For Better Dealing With GPU Virtualization

  • #21
    Originally posted by agd5f View Post
    There just isn't a use case for it on bare metal at the moment. In theory the scheduler in the kernel driver could use it to pre-empt lower priority tasks.
    It should be useful for preventing greedy clients from starving the display server (at least with GPUs which don't have a separate high priority GFX ring for that).
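    On GPUs that do have a dedicated high-priority GFX ring, a display server can already ask the kernel for an elevated-priority context through libdrm. Below is a minimal sketch of that request, not taken from this thread; the render node path, the build command, and the need for CAP_SYS_NICE (or DRM master) are assumptions that depend on the system and kernel policy.

    ```c
    /* Sketch: request a high-priority amdgpu context via libdrm, roughly what a
     * compositor could do on hardware with a high-priority GFX ring.
     * Build (paths may differ): gcc hiprio_ctx.c -I/usr/include/libdrm -ldrm_amdgpu -ldrm
     */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <amdgpu.h>
    #include <amdgpu_drm.h>

    int main(void)
    {
        /* Assumption: renderD128 is the AMD GPU's render node on this machine. */
        int fd = open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);
        if (fd < 0) { perror("open render node"); return 1; }

        uint32_t major, minor;
        amdgpu_device_handle dev;
        if (amdgpu_device_initialize(fd, &major, &minor, &dev)) {
            fprintf(stderr, "amdgpu_device_initialize failed\n");
            close(fd);
            return 1;
        }

        /* Priorities above NORMAL may require CAP_SYS_NICE or DRM master. */
        amdgpu_context_handle ctx;
        if (amdgpu_cs_ctx_create2(dev, AMDGPU_CTX_PRIORITY_HIGH, &ctx)) {
            fprintf(stderr, "high-priority context not granted\n");
        } else {
            printf("got a high-priority GPU context (DRM %u.%u)\n", major, minor);
            amdgpu_cs_ctx_free(ctx);
        }

        amdgpu_device_deinitialize(dev);
        close(fd);
        return 0;
    }
    ```

    If the elevated priority is refused, a compositor would typically just fall back to a normal-priority context.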



    • #22
      Originally posted by boxie View Post
      I ask because the article says ... which implies that it works *without* SR-IOV - and hence my question.

      With agd5f mentioning multiple render nodes on the other page, it got me wondering whether, with the code this article is about, it is now possible to share a GPU between host and guest (with the guest identifying the card properly rather than as a virtualised card).

      If the article instead said "requires both SR-IOV AND using amdgpu.mcbp=1", it would make much more sense given your answers.
      For clarity, the feature described by the article (Mid Command Buffer Pre-emption, or MCBP) can be used under virtualization or on bare metal, i.e. SR-IOV is not required. The key point, though, is that MCBP does not enable sharing on its own; it just provides a way to free up the GPU more quickly to improve quality of service when sharing.

      It's only when the discussion moved from MCBP to sharing a GPU between host and guest that the requirement for SR-IOV or something like Virgil came in.

      In case it helps, MCBP is the graphics equivalent of Compute Wave Save/Restore, which we use in the HSA/ROCm stack to allow task switching without having to wait for long-running waves to complete, since waves can run for hours or days.

      https://lists.freedesktop.org/archiv...er/016069.html
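      To check whether MCBP is active on a running system, the amdgpu module parameter can be read back from sysfs. This is a small sketch, not part of the original post; it assumes the kernel exposes the parameter read-only at /sys/module/amdgpu/parameters/mcbp, and the exact default varies with kernel version. Enabling it on bare metal is done by booting with amdgpu.mcbp=1.

      ```c
      /* Sketch: read back the current amdgpu.mcbp module parameter from sysfs.
       * Path and value semantics are assumptions; on the kernels discussed here,
       * 1 means MCBP is enabled and 0 means it is disabled.
       */
      #include <stdio.h>

      int main(void)
      {
          const char *path = "/sys/module/amdgpu/parameters/mcbp"; /* assumed location */
          FILE *f = fopen(path, "r");
          if (!f) { perror(path); return 1; }

          int mcbp;
          if (fscanf(f, "%d", &mcbp) == 1)
              printf("amdgpu.mcbp = %d\n", mcbp);
          else
              fprintf(stderr, "could not parse %s\n", path);

          fclose(f);
          return 0;
      }
      ```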
      Last edited by bridgman; 23 July 2020, 02:13 PM.



      • #23
        Originally posted by agd5f View Post

        There is no way to share a GPU between the host and a guest using standard driver stacks whether you are using SR-IOV or not. You'd need some sort of para-virtualized solution like virgil. Mid command buffer preemption (mcbp) has nothing to do with virtualization per se. It happens to be used by SR-IOV indirectly, but you can use it on bare metal as well to pre-empt work on the GPU.
        Here we disagree. If your GPU is identified as two GPUs, then there is no need for Virgil or specialized drivers; the host and guest can each run their own API instance. Second, there is display support on those systems; what you're referring to is that server cards usually don't have display outputs at all, but that's a different matter.



        • #24
          Originally posted by artivision View Post
          Here we disagree. If your GPU is identified as two GPUs, then there is no need for Virgil or specialized drivers; the host and guest can each run their own API instance. Second, there is display support on those systems; what you're referring to is that server cards usually don't have display outputs at all, but that's a different matter.
          There are no SR-IOV capable cards with display support. Only the gfx/compute and multi-media blocks are virtualized.



          • #25
            Originally posted by agd5f View Post

            There are no SR-IOV capable cards with display support. Only the gfx/compute and multi-media blocks are virtualized.
            I just said that. SR-IOV should also be enabled on consumer GPUs that have display outputs.



            • #26
              Originally posted by artivision View Post

              I just said that. SR-IOV should also be enabled on consumer GPUs that have display outputs.
              Not with the way SR-IOV works at the hardware level. The functionality would need a major hardware overhaul to do what you want.

