Originally posted by mikus
For most apps, a traditional VM setup should be sufficient. You can get fullscreen and multi-screen working, where each virtual screen is a window until you set it to fullscreen; that should work around the TV display-state issues that shuffle your windows around, since they'll all stay contained in that one window as a group.
For any case where you need more native-like graphics performance, there is VFIO (see r/vfio for a community around this that's quite helpful; the Arch Wiki also has plenty of juicy info on getting set up). It isn't just for graphics, though: you can get near-native/host performance in your VM guests for practically any part of the system. Generally this means slicing up the system resources and passing them to the guest for exclusive use (e.g. you can assign specific cores/threads, disks, a GPU, and a memory allocation just to that VM), sort of like running multiple host OSes in parallel.
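As a rough sketch of what the passthrough setup looks like (the vendor:device IDs below are just an example for one NVIDIA card; run `lspci -nn` to find your own, and your distro's initramfs tooling will differ):

```shell
# Find the GPU and its audio function, noting the [vendor:device] IDs
lspci -nn | grep -i nvidia

# Example IDs might look like 10de:1b80 (GPU) and 10de:10f0 (HDMI audio).
# Tell vfio-pci to claim them at boot so the host driver never binds them:
echo "options vfio-pci ids=10de:1b80,10de:10f0" | sudo tee /etc/modprobe.d/vfio.conf

# Also enable the IOMMU on the kernel command line (intel_iommu=on or
# amd_iommu=on), rebuild the initramfs, and reboot before assigning the
# device to a VM.
```

After a reboot, `lspci -k` should show `vfio-pci` as the kernel driver in use for the card.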
This does have a drawback: the VMs lose certain state features, like being able to save the VM state/snapshot. Depending on the passed-through resources, the guest ends up like a host system that you can only suspend/shut down/hibernate. If you pass through a GPU (an Intel iGPU can have its resources split across VMs, whereas a dGPU like an NVIDIA card is a full passthrough), then instead of virtual displays you get direct display output from the GPU like a host system, and that display is no longer shared with anything else (unless it has multiple inputs and a way to toggle/cycle through them).
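For the iGPU-splitting case, Intel's GVT-g works by creating mediated (mdev) devices; roughly like this, though the PCI address and vGPU type name vary per machine, so treat them as placeholders:

```shell
# Load the GVT module (the kernel also needs i915.enable_gvt=1 on its
# command line for this to be available)
sudo modprobe kvmgt

# List the vGPU types the iGPU offers (0000:00:02.0 is the usual iGPU address)
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/

# Create a vGPU instance of one type by writing a fresh UUID to its create node
uuidgen | sudo tee /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create
```

The resulting mdev device can then be assigned to a VM like any other hostdev, while the host keeps using the iGPU too.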
Splitting up your work this way would make you a bit more flexible: if there's a compositor issue, it's localized to a smaller scope that doesn't affect the rest of your system, and you might be in a better position to fix it easily (such as by rebooting the VM). It'd be useful for me as well; I definitely don't need all my browser windows/tabs and apps for a variety of projects open 24/7 and munged together, so it'd provide better separation of my projects... but I've yet to actually sort this out. It's better these days, too: the issues I had when first looking into this kind of setup were related to file sharing/access being a bit annoying, and now there are things like virtio-fs that I think address this better.
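For the file-sharing part, a virtio-fs share looks roughly like this (the `hostshare` tag and the paths are made up for illustration):

```shell
# Host side (libvirt domain XML fragment): export a directory to the guest
#   <filesystem type='mount' accessmode='passthrough'>
#     <driver type='virtiofs'/>
#     <source dir='/home/user/projects'/>
#     <target dir='hostshare'/>
#   </filesystem>
# (libvirt also needs shared memory backing enabled for the guest for
# virtiofs to work)

# Guest side: mount the exported directory by its tag
sudo mount -t virtiofs hostshare /mnt/projects
```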
It's very common for Windows to be run as the guest OS in r/vfio, and there's a piece of software called Looking Glass which lets you use the GPU-passthrough approach while still displaying the guest's screen in a resizable/movable window like a traditional VM. It's more efficient than VNC: instead of capturing the screen and sending it over the network, the guest writes frames into a shared-memory region, and the host (or another guest) reads from that same memory to display them. It only captures Windows guests though, I think; no Linux guest support. You can also combine that with display dummy plugs, which occupy a display output and emulate a connected display at whatever resolution/framerate; then your host-connected displays and your guest VM can share the same physical monitor, powered by two different OSes and GPUs.
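The shared-memory plumbing behind Looking Glass is just an IVSHMEM device plus a client on the host; roughly like this (the 32M size is a common choice for 1080p, not a universal value):

```shell
# libvirt domain XML fragment: a shared-memory region the guest's capture
# app writes frames into
#   <shmem name='looking-glass'>
#     <model type='ivshmem-plain'/>
#     <size unit='M'>32</size>
#   </shmem>
# (the size needed scales with guest resolution)

# Host side: the client reads frames from /dev/shm/looking-glass by default
# and displays them in a normal window
looking-glass-client
```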
One other benefit of all this, with Intel at least, is a live-migration feature that can send a VM's state to another machine, e.g. from desktop to laptop or vice versa, so long as the resources/requirements on both systems are sufficient for the VM. The base image can live on both systems, and you're just transferring the state, so once it's set up it's not necessarily as big a transfer as it sounds.
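With libvirt the migration itself is a one-liner along these lines (the VM name and hostname here are placeholders):

```shell
# Push the running VM's state to the other machine over SSH; with the disk
# image already present on both ends, mostly RAM and device state moves.
virsh migrate --live --persistent win10-vm qemu+ssh://laptop.local/system
```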
Originally posted by mikus
This can also happen when updating the system kernel, I think, and maybe GPU drivers too (NVIDIA). I know my system won't restart/shut down via GUI methods after some updates because of this: the old kernel that's still running has had its modules deleted, so it can't find the NVIDIA driver or whatever, which usually causes other problems for anything that wants to use the GPU. While Sketchfab wouldn't work, some other WebGL demo I tried last night on this system lagged horrendously; my CPU was at full load and it was struggling to render a few frames per second. I think it will handle that much better after a restart.
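A quick way to spot that stale-kernel state is to check whether the running kernel still has its modules directory (this little check is my own, not from any tool):

```shell
# After an update, many distros delete /lib/modules/<old-version> while the
# old kernel is still running, so module loads (e.g. nvidia) start failing.
running=$(uname -r)
if [ -d "/lib/modules/$running" ]; then
  echo "modules still present for $running"
else
  echo "stale kernel $running: modules gone, reboot needed"
fi
```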
My laptop, with only a 2-core i3 CPU, 4 GB RAM, and no dGPU (just the Intel iGPU), can boot with ~500 MB of RAM in use and 0-1% CPU at idle. The desktop is probably similar, but I know it gets worse over time, especially when kwin fails and compositing takes a dive as a result.
Originally posted by mikus
Do note that on laptops, AMD is only now about to get support for PSR, the panel self-refresh power-saving feature for laptop displays (provided your panel uses eDP 1.3 or higher, iirc; eDP 1.2 dates from 2011 and eDP 1.3 came out about a year later). I bought a laptop at the end of last year which, despite being a new 2019 Q3 release, used a 2017-manufactured eDP 1.2 display; I'd figured that with a 10th-gen Intel CPU and Wi-Fi AX they surely wouldn't skimp on display tech that old, given the benefits, but no such luck. AMD gets this support in its drivers with the 5.7 kernel afaik, while Intel has had it for a long time (it's not relevant to NVIDIA, since laptop displays are usually handled by the Intel iGPU, and NVIDIA routes its output through Intel's framebuffer afaik).
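On Intel you can at least check whether PSR is actually engaging via debugfs; I'd expect amdgpu to grow a similar debugfs node once its support lands, but the exact path may differ, so treat this as a sketch:

```shell
# Needs root; the card number under /sys/kernel/debug/dri/ varies per system
sudo cat /sys/kernel/debug/dri/0/i915_edp_psr_status
```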
AMD might turn out better for you; just fair warning that it's not always great, despite what the community tends to imply.
Originally posted by mikus
A simpler example to demonstrate that is Wi-Fi devices: whatever their other variables, they can claim 802.11n support at an astounding 150 Mbps (less than 20 MB/s), and even if every other variable were perfect for utilizing that bandwidth limit, the product only has to claim 802.11n support, not actually deliver that performance. You can see something similar with disk drives that perform poorly but market themselves as SATA 3 at 6 Gbps (~600 MB/s, less with SATA overhead, and less again with USB overhead if external).
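Just to make the units behind those ceilings explicit (these are link rates, i.e. the marketing numbers; real throughput is lower still once protocol overhead is counted):

```shell
# 802.11n single-stream: 150 Mbit/s link rate -> MB/s (divide by 8)
wifi_n_mbps=150
echo "802.11n ceiling: $((wifi_n_mbps / 8)) MB/s"   # 18 MB/s

# SATA 3: 6 Gbit/s line rate, but 8b/10b coding spends 10 line bits per
# payload byte, hence the ~600 MB/s figure
sata3_line_mbps=6000
echo "SATA 3 ceiling: $((sata3_line_mbps / 10)) MB/s"   # 600 MB/s
```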