RadeonSI Lands Bits In Mesa 20.2 For Better Dealing With GPU Virtualization
Originally posted by boxie: I ask because the article says ... which implies that it works *without* SR-IOV - and hence my question.
With agd5f mentioning on the other page about using multiple render nodes, it got me wondering whether, with the code this article is about, it is now possible to share a GPU between host and guest (with the guest identifying the card properly rather than as a virtualised card).
If the article said "requires both SR-IOV AND using amdgpu.mcbp=1" it would make much more sense given your answers.
It was only when the discussion moved from MCBP to sharing a GPU between host and guest that the requirement for SR-IOV or something like Virgil came in.
In case it helps, MCBP is the graphics equivalent of Compute Wave Save/Restore, which we use in the HSA/ROCm stack to allow task switching without having to wait for long-running waves to complete, since waves can run for hours or days.
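For anyone who wants to try MCBP on bare metal, a minimal sketch of setting the amdgpu.mcbp=1 parameter mentioned above, assuming a GRUB-based distro (exact file names and the regeneration command vary by distro):

  # /etc/default/grub - append the parameter to the kernel command line
  GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amdgpu.mcbp=1"
  # regenerate the bootloader config and reboot, e.g.:
  #   sudo update-grub && sudo reboot
  # after boot, the value should be readable via sysfs if the parameter is exposed:
  #   cat /sys/module/amdgpu/parameters/mcbp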
https://lists.freedesktop.org/archiv...er/016069.html
Last edited by bridgman; 23 July 2020, 02:13 PM.
Originally posted by agd5f:
There is no way to share a GPU between the host and a guest using standard driver stacks, whether you are using SR-IOV or not. You'd need some sort of para-virtualized solution like Virgil. Mid command buffer preemption (MCBP) has nothing to do with virtualization per se. It happens to be used by SR-IOV indirectly, but you can use it on bare metal as well to preempt work on the GPU.
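For what it's worth, a quick way to see which of those paths a guest is actually on (a rough sketch, assuming a Linux guest with mesa-utils and pciutils installed):

  # inside the guest:
  glxinfo -B | grep "OpenGL renderer"
  #   a para-virtualized VirGL guest typically reports a "virgl" renderer string,
  #   while a passed-through card or SR-IOV virtual function reports the real
  #   RadeonSI renderer for the hardware
  lspci -nn | grep -iE "vga|display"
  #   shows either the virtio-gpu device or the actual AMD device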
Originally posted by artivision: Here we disagree. If your GPU is identified as two GPUs, then there isn't a reason for Virgil or specialized drivers; they can handle two API instances. Second, there is display support for those systems; what you are referring to is that server cards usually don't have display outputs at all, but that's different.