RadeonSI Lands Bits In Mesa 20.2 For Better Dealing With GPU Virtualization


  • agd5f
    replied
    Originally posted by artivision View Post

    I just said that. SR-IOV should also be enabled on consumer GPUs that have display ports.
    Not with the way SR-IOV works at the hardware level. The whole functionality would need a major overhaul at the hardware level to do what you want.

    Leave a comment:


  • artivision
    replied
    Originally posted by agd5f View Post

    There are no SR-IOV capable cards with display support. Only the gfx/compute and multi-media blocks are virtualized.
    I just said that. SR-IOV should also be enabled on consumer GPUs that have display ports.

    Leave a comment:


  • agd5f
    replied
    Originally posted by artivision View Post
    Here we disagree. If your GPU is identified as two GPUs, then there isn't a reason for Virgil or specialized drivers; they can handle two API instances. Second, there is display support for those systems; what you're referring to is that server cards usually don't have display outputs at all, but that's different.
    There are no SR-IOV capable cards with display support. Only the gfx/compute and multi-media blocks are virtualized.

    Leave a comment:


  • artivision
    replied
    Originally posted by agd5f View Post

    There is no way to share a GPU between the host and a guest using standard driver stacks, whether you are using SR-IOV or not. You'd need some sort of para-virtualized solution like Virgil. Mid-command buffer preemption (MCBP) has nothing to do with virtualization per se. It happens to be used by SR-IOV indirectly, but you can use it on bare metal as well to pre-empt work on the GPU.
    Here we disagree. If your GPU is identified as two GPUs, then there isn't a reason for Virgil or specialized drivers; they can handle two API instances. Second, there is display support for those systems; what you're referring to is that server cards usually don't have display outputs at all, but that's different.

    Leave a comment:


  • bridgman
    replied
    Originally posted by boxie View Post
    I ask because the article says ... which implies that it works *without* SR-IOV - and hence my question.

    With agd5f mentioning multiple render nodes on the other page, it got me wondering whether, with the code this article is about, it is now possible to share a GPU between host and guest (with the guest identifying the card properly rather than as a virtualized card).

    If the article should say "requires both SR-IOV AND using amdgpu.mcbp=1", it would make much more sense given your answers.
    For clarity, the feature described by the article (Mid Command Buffer Pre-emption, or MCBP) can be used under virtualization or on bare metal, i.e. SR-IOV is not required. The key point, though, is that MCBP does not enable sharing on its own; it just provides a way to free up the GPU more quickly to improve quality of service when sharing.

    It's only when the discussion moved from MCBP to sharing a GPU between host and guest that the requirement for SR-IOV or something like Virgil came in.

    In case it helps, MCBP is the graphics equivalent of Compute Wave Save/Restore, which we use in the HSA/ROCm stack to allow task switching without having to wait for long-running waves to complete, since waves can run for hours or days.

    https://lists.freedesktop.org/archiv...er/016069.html
    Last edited by bridgman; 23 July 2020, 02:13 PM.
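
    For anyone who wants to experiment with the module parameter quoted above, a minimal sketch of how it would be set (assuming amdgpu is built as a module and the kernel is new enough to expose the option; the file name below is only an example):

        amdgpu.mcbp=1            (appended to the kernel command line)
        options amdgpu mcbp=1    (or in e.g. /etc/modprobe.d/amdgpu.conf)

    As bridgman notes, neither line turns MCBP into a GPU-sharing mechanism on its own; it only enables the pre-emption itself.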

    Leave a comment:


  • MrCooper
    replied
    Originally posted by agd5f View Post
    There just isn't a use case for it on bare metal at the moment. In theory, the scheduler in the kernel driver could use it to pre-empt lower-priority tasks.
    It should be useful for preventing greedy clients from starving the display server (at least with GPUs which don't have a separate high priority GFX ring for that).
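
    As an illustration only (not something from the thread): on parts that do expose a high-priority GFX ring, a display server can already ask for it through libdrm_amdgpu. A minimal sketch, assuming the libdrm_amdgpu headers are installed and using an example render node path:

        /* Hypothetical sketch: request a high-priority amdgpu context.
         * Build with: cc prio.c $(pkg-config --cflags --libs libdrm_amdgpu) */
        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <amdgpu.h>
        #include <amdgpu_drm.h>

        int main(void)
        {
            int fd = open("/dev/dri/renderD128", O_RDWR); /* example render node */
            if (fd < 0)
                return 1;

            amdgpu_device_handle dev;
            uint32_t major, minor;
            if (amdgpu_device_initialize(fd, &major, &minor, &dev))
                return 1;

            /* Ask the kernel for a high-priority context; this is what a
             * compositor would do to stay ahead of "greedy" clients. */
            amdgpu_context_handle ctx;
            int r = amdgpu_cs_ctx_create2(dev, AMDGPU_CTX_PRIORITY_HIGH, &ctx);
            printf("high-priority context: %s\n", r ? "not available" : "created");

            if (!r)
                amdgpu_cs_ctx_free(ctx);
            amdgpu_device_deinitialize(dev);
            return 0;
        }

    Priorities above normal typically need CAP_SYS_NICE or DRM master, and MCBP matters precisely on hardware where such a separate ring is not available, which is MrCooper's point.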

    Leave a comment:


  • agd5f
    replied
    Originally posted by boxie View Post

    and bridgman

    Thank you both for taking the time to answer my question. I ask because the article says

    which implies that it works *without* SR-IOV - and hence my question.

    With agd5f mentioning multiple render nodes on the other page, it got me wondering whether, with the code this article is about, it is now possible to share a GPU between host and guest (with the guest identifying the card properly rather than as a virtualized card).

    If the article should say "requires both SR-IOV AND using amdgpu.mcbp=1", it would make much more sense given your answers.

    The end goal would be to be able to spin up a VM and share the GPU between host and guest so that games/software that don't like or don't run well under Wine could easily be shared (and be identified correctly in the VM).
    There is no way to share a GPU between the host and a guest using standard driver stacks, whether you are using SR-IOV or not. You'd need some sort of para-virtualized solution like Virgil. Mid-command buffer preemption (MCBP) has nothing to do with virtualization per se. It happens to be used by SR-IOV indirectly, but you can use it on bare metal as well to pre-empt work on the GPU.
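
    To make the "para-virtualized solution" concrete, a hedged example of what that looks like in practice (exact flags vary by QEMU version; this is only illustrative): the guest gets a virtio-gpu device and its OpenGL commands are replayed on the host GPU by virglrenderer, instead of the guest driving the real hardware:

        qemu-system-x86_64 -enable-kvm -m 4G \
            -device virtio-vga,virgl=on -display gtk,gl=on disk.qcow2

    The guest then sees a virtio/virgl GPU rather than the real card, which is exactly the "not identified correctly" caveat from the question quoted above.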

    Leave a comment:


  • Danny3
    replied
    Originally posted by geearf View Post

    When was that?

    I know of SLI and CrossFire, but I have never seen them be that common.
    Indeed!
    I am a gamer too, but I have never bought 2 GPUs for this.
    I would rather buy a GPU that is 2x more powerful than buy 2 GPUs.
    I don't think 2 GPUs can ever match the power efficiency and noise level of one, plus they will make small cases too crowded.

    Maybe it is advantageous for compute, if AMD fixes the damn ROCm installation on current distributions.

    Leave a comment:


  • geearf
    replied
    Originally posted by caligula View Post
    Correct me if I'm wrong, but dual GPUs used to be quite common for gaming.
    When was that?

    I know of SLI and CrossFire, but I have never seen them be that common.

    Leave a comment:


  • Jabberwocky
    replied
    This reminds me of sharing the family computer between siblings. Our mid-command buffer preemption was cutting power to the machine; our accuracy was down to the second.

    Does mirroring registers introduce any performance issues? I'm making the assumption that memory is slower than the registers.

    I'm also curious whether it is possible to support GFX8/Polaris. Plans to do so or time frames don't matter; I'm just wondering if there are any "show stoppers".

    Leave a comment:
