Originally posted by Danny3
Even just running one to three VMs of any kind (Linux guest on a Linux host, Linux / Windows, whatever) is a use case.
Even the most ordinary GUI-based desktop / productivity applications these days depend heavily on the GPU,
and multiple high-resolution (4K+) monitors are a very normal configuration for personal / productivity desktops.
And plenty of applications depend to some extent on GPU compute, either alone or mixed with graphics operations:
ordinary basic ML models, OCR, photo / image editing filters, video encode / decode / conversion, noise / background
removal for audio / webcam, basic personal language translation, ML-based grammar / language / composition analysis / assistance, and so on.
Pretty much every single consumer-oriented CPU and system chipset supports an MMU, virtualization, and per-process / per-user isolation:
CPU-integrated abilities to efficiently share the computer's resources between many different programs / tasks / users.
So clearly AMD, Intel, and the ARM CPU / system chipset makers already see the need for HW-backed virtualization and multi-tasking isolation
security even at the low end of consumer compute HW.
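To illustrate how routine the CPU side of this is: on Linux, HW virtualization support shows up as the `vmx` (Intel VT-x) or `svm` (AMD-V) flag in /proc/cpuinfo. A minimal sketch, assuming a POSIX shell; the helper name is mine, not anything standard:

```shell
#!/bin/sh
# Sketch: check whether a cpuinfo "flags" line advertises HW virtualization.
# vmx = Intel VT-x, svm = AMD-V. The function name is hypothetical.
has_hw_virt() {
    printf '%s\n' "$1" | grep -qwE 'vmx|svm'
}

# On a real Linux box, feed it the live flags line:
#   has_hw_virt "$(grep -m1 '^flags' /proc/cpuinfo)" && echo "HW virt present"
```

Virtually every consumer x86 part sold in the last decade passes this check.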
Yet for GPUs? NOTHING USEFUL FOR VIRTUALIZATION AT ALL. AFAICT the core support needed for SR-IOV is just exposing
some dozens of registers for the functions in the PCIe capability structures and handling them in very ordinary ways at the driver level.
That is simpler even than the MMU and virtualization support already in the main processor, which the vendors long ago agreed was
necessary for mass consumer market use cases. So what possible cognitive dissonance could lead to integrating a boatload of complex virtualization
HW into every part of the consumer computing architecture EXCEPT the GPU, where it is arguably most critically needed? Having no commensurate
multi-tasking / multi-VM support for the GPU is the biggest sore spot in consumer computing today.
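To make the "dozens of registers" point concrete: on Linux, a PCIe function whose SR-IOV capability is exposed shows up in sysfs with `sriov_totalvfs` / `sriov_numvfs` attributes. A minimal sketch (the helper name is mine) that lists SR-IOV-capable functions under a given sysfs root:

```shell
#!/bin/sh
# Sketch: list PCI functions exposing SR-IOV under a sysfs root.
# On a real system: list_sriov_pfs /sys/bus/pci/devices
# Consumer GPUs typically list nothing here; datacenter parts do.
list_sriov_pfs() {
    root="$1"
    for f in "$root"/*/sriov_totalvfs; do
        [ -e "$f" ] || continue            # glob matched nothing: no SR-IOV PFs
        dev=$(basename "$(dirname "$f")")
        printf '%s: up to %s virtual functions\n' "$dev" "$(cat "$f")"
    done
}

# Enabling VFs on a capable PF is just as mundane, e.g.:
#   echo 4 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs
```

The driver-side handling really is that ordinary once the capability is exposed; the whole VF lifecycle is driven through these sysfs attributes.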
On the one hand, GPU makers evidently encourage GPUs as the solution for graphics, video, image processing, and ML / NPU inferencing,
and yet they ship basically crippled, defective-by-design architectures to consumers, while they already routinely offer sane virtualization support
across their other GPU / CPU / chipset product lines.
AFAICT the HW-level capability already EXISTS and is DISABLED, so it isn't even a question of adding ASIC design capability; it's a question
of not punishing and abusing your users for no sane reason.