VirtualBox On Linux Affected By Security Vulnerability Leaking Host Data To Guests


  • #11
    Originally posted by linuxgeex View Post
    AFAIK the only way to get VAAPI is to use hardware passthrough. As luck would have it, that provides pretty good Vulkan/OpenGL as well ;^)
    I'm aware. I'm talking about what is likely to happen in future releases. Vulkan Video extensions make sense for a VM since it doesn't matter what the host and guest are, if they can route the API to the host drivers that support it, you're sorted.

    VA-API/virtio-video would presumably only be supported by Linux hosts, and may be less likely to get a compatible Windows (or macOS) driver for guests. But I could definitely see Vulkan Video getting more traction as a cross-platform API for each OS to implement, and then separately the equivalent of Venus/virglrenderer to communicate with the host Vulkan driver.

    ---

    Passthrough isn't what I want for these VMs. I am often juggling projects and it adds friction when I've got multiple instances / windows opened that have tabs/windows for different projects I'm working on. I also would like to "pause" a project to free up resources and come back to it at a later point with everything where I left off, instead of opening up everything I had previously for that project.

    Often that's a better experience with some light 3D accel for desktop compositors. I'd rather run several guests, one for each project that I can better isolate and manage than the mess I have presently. A passthrough VM would restrict the flexibility, and can be inconvenient (no live snapshots, no suspend that frees resources, less likely portable/migratable), I might as well boot the host into another system at that point.

    My guests are all Linux based (except one for testing that software I write works on Windows). I haven't checked on the status, but have heard Looking Glass is getting Linux guest support, which is nice; there's apparently an issue with NVIDIA's DMA-BUF support for that, though. I otherwise don't really like sacrificing a display or having to switch inputs on a display to leverage the passthrough output, and VNC isn't pleasant vs QEMU SDL/GTK or even SPICE.

    HW video decode isn't too important for me atm, I'm fine running that on the host. It would just be a nice-to-have for the guest to not rely on CPU decoding.

    Comment


    • #12
      Originally posted by polarathene View Post
      Possibly due to framerate differences if it's copying the frames via CPU for display as an OpenGL texture in the windowed view of the guest? Glmark2 while being twice as good on VMware (still around 5-6x worse than host) and the much better Unigine Heaven performance might hint towards displaying frames (and their size) being a performance killer more than the processing by GPU to render the actual frames.
      virgl emulates a gallium device, similar to SVGA; in practice I have found virgl to be a bit more performant than SVGA, but that approach alone causes a lot of overhead. I also haven't had the best luck with SVGA myself.

      I assume that's similar to what MS did with that recent WSL article for VA-API to DX12? That'd be cool to pass VA-API from guest to host via `virtio-video`, but I would assume that we'd get similar for free with Venus support (assuming Vulkan Video extensions are available/working on host driver, nvidia has experimental support presently that MPV devs have been testing out).

      VMware might have some more weight to push for Vulkan Video support with their own Vulkan backend on Linux hosts, and as they're already mapping DX12 to Vulkan in that case, they'd probably opt for video encode/decode via Vulkan Video rather than implementing `virtio-video` or similar? I'm happy with whatever gets better supported, but can see Vulkan Video perhaps being easier to integrate/support cross-platform?
      I doubt Vulkan Video via Venus would be a better fit than virtio-video, but it was something I was pondering the possibility of too. Vulkan Video would probably be harder to implement than virtio-video would be. virtio-video presents itself as a V4L2 video device on Linux, and since it's not tied to a graphics API it should be more flexible too. This would be quite important for ARM devices and heterogeneous device support, e.g. rendering Vulkan via an AMD GPU while doing video decode/encode on an Intel iGPU.
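      Not something I can fully verify, but if anyone wants to poke at what their stack exposes today, a rough sketch (assumes vulkan-tools and v4l-utils are installed; the device path is a placeholder):

```shell
# Does the host Vulkan driver advertise the Vulkan Video extensions?
# (Output depends entirely on your driver; many still report none.)
vulkaninfo | grep -i 'VK_KHR_video'

# A virtio-video device in a guest should appear as a V4L2 mem2mem codec
# device; list V4L2 devices and query the capabilities of one of them.
v4l2-ctl --list-devices
v4l2-ctl -d /dev/video0 --all   # /dev/video0 is an assumption, adjust to suit
```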

      I think we're talking about different things? I know Mesa has the VMware SVGA drivers, and that VMware open-sourced a gallium state tracker for D3D10 as well. They had mentioned that adding D3D11 support would be minimal effort, but they were not going to pursue that as they were shifting to their Vulkan backend. The open-sourced drivers can be leveraged by VirtualBox and QEMU AFAIK for display, but they lack 3D accel, don't they?

      I was referring to the closed-source portion that VMware has in their own product, which provides the much better 3D accel performance than virgl. I understand that prior to Workstation 16 they used OpenGL drivers on Linux hosts, but have since switched to Vulkan on the host for rendering whatever 3D accel the guest needs? I'm not entirely sure of the relation between those proprietary components and the open-source ones, but if the guest portion is open source, and that resulted in usable Windows drivers similar to Linux guests sharing a common backend on Linux hosts through Mesa (instead of whatever VMware presently is doing), perhaps the work with virglrenderer/Venus would allow for something like that?

      At least from what I'm seeing, Google is using virglrenderer with their own alternatives to Venus being added to meet their needs. But maybe making that more available when there isn't much competition from QEMU and VirtualBox would be a financial incentive not to collaborate.
      SVGA refers to VMware's hardware-accelerated gallium driver inside of Mesa. d3d10umd refers to the D3D10 state tracker in Mesa; it is explicitly for 3D rendering. With some work it should be possible to port it to Zink, virgl, etc.

      I can't comment on how VMware handles 3D acceleration, but I guess it likely handles it similarly to virgl3d, except using Vulkan instead of OpenGL to translate the gallium calls. It's important to note that as of VMware 16, Vulkan is ONLY host side, and only for OpenGL and D3D11 and earlier. If VMware Workstation has DX12 support, it will only be for NVIDIA GPUs and/or Windows hosts, or via some kind of software rendering; at the very least the newest Workstation doesn't have DX12 support for me.

      I remember reading on the virglrenderer issue tracker about using Vulkan as the backend for virgl, but in the end it was decided that using Venus, then Zink inside the guest, would likely be the better solution? Could be remembering wrong.

      Google is using virgl+Venus in crosvm for their Linux VMs. vulkan-cereal/GFXstream(?) is something Google currently uses in their Android emulator Cuttlefish for Vulkan acceleration, and there were talks about adding that to virtio-gpu as well. Both are open source; anyone could implement either in QEMU if they wanted to. Evidently, the people who did want to no longer do, or at least it has been put off to the side as far as we the public are concerned.

      EDIT: I remember recently seeing libva work being done for virtio-video, so VA-API should work with it eventually.

      Comment


      • #13
        Isn't virtio-video the 3D acceleration option in QEMU? Or is it different to virgl? Virt-manager just calls it "virt".

        Comment


        • #14
          Originally posted by hamishmb View Post
          Isn't virtio-video the 3D acceleration option in QEMU? Or is it different to virgl? Virt-manager just calls it "virt".
          virtio-gpu is the virtual GPU; by itself it has no 3D capabilities. virgl is 3D rendering based on OpenGL and Venus is 3D rendering based on Vulkan; both are part of the virglrenderer project.

          Comment


          • #15
            Originally posted by cbxbiker61 View Post
            I have been gradually migrating from virtualbox to libvirt/qemu. Win11 seems to work fine under libvirt, even usb pass-through works for my Haltech ECU.
            I'm doing the same thing, but my biggest complaint so far is that folder sharing isn't working. Is it working for you? If so, can you share your configuration?
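            For context, the kind of setup I've been attempting is roughly this virtiofs sketch ('myguest', the paths, and the 'hostshare' tag are all placeholders):

```shell
# Sketch of virtiofs folder sharing with libvirt; virtiofs also needs
# shared memory backing in the domain XML, e.g.:
#   <memoryBacking><source type='memfd'/><access mode='shared'/></memoryBacking>
virsh edit myguest
# ...then add inside <devices>:
#   <filesystem type='mount' accessmode='passthrough'>
#     <driver type='virtiofs'/>
#     <source dir='/home/user/shared'/>
#     <target dir='hostshare'/>
#   </filesystem>

# Inside the guest, mount the share by its target tag:
sudo mount -t virtiofs hostshare /mnt/shared
```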

            Comment


            • #16
              I just use SMB for file sharing, though I know why that might not be ideal.

              Comment


              • #17
                Originally posted by TheDcoder View Post

                I'm doing the same thing, but my biggest complaint so far is that folder sharing isn't working. Is it working for you? If so, can you share your configuration?
                I haven't tried folder sharing at this time, I'll have to take a look at it. I do have a Samba server on the network, so that can also be an option.

                Comment


                • #18
                  Originally posted by polarathene View Post
                  ... Often that's a better experience with some light 3D accel for desktop compositors. I'd rather run several guests, one for each project that I can better isolate and manage than the mess I have presently. A passthrough VM would restrict the flexibility, and can be inconvenient (no live snapshots, no suspend that frees resources, less likely portable/migratable), I might as well boot the host into another system at that point.

                  My guests are all linux based (except one for testing software I write works on Windows). ...
                  Try XFCE4, disable its native compositing, and instead run Compton which doesn't rely on GLX for compositing. It reduces XDAMAGE calls so your VNC is radically faster / lower bandwidth, it reduces flicker when your pointer crosses window boundaries, and that overall reduces your CPU usage vs running without compositing, as well as bandwidth if you are operating over a relatively slow link like WiFi+SSH if you have headless machines in your dev environment. Remmina is a nice way to aggregate your guests, and you get lovely scaled output... you can tile outputs on a 50" display :-)

                  That won't solve Windows or MacOS problems... for those I use Chrome Remote Desktop because it outperforms MacOS's awful VNC and Windows' RDP, using x264 or WebP with hardware acceleration where available. Of course you only get hardware video accel with passthrough or native hosts, but my own easy solution to that was to collect other people's retired laptops. Bonus: no need to spin them down to save resources. Drawback: harder to throw your entire dev environment into a suitcase and take it on vacation in case you get a "please make a miracle" call lol.

                  Oh, and one other way to get VAAPI in your guests is to run them in LXC. I haven't done that in a while but I found a guide that made it possible... basically you just need to bind the devices into the guest. You can run most distros in LXC. Then you use the vaapi-copy hwdec option for mpv or ffmpeg so you can output it on the virtual framebuffer.
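                  The binding part, as far as I remember it, looks something like this (LXC 4.x config keys; assumes a GPU with render nodes under /dev/dri, Mesa's VA-API driver inside the container, and placeholder names throughout):

```shell
# Allow the container access to DRM devices (major 226) and bind /dev/dri in.
cat >> /var/lib/lxc/mycontainer/config <<'EOF'
lxc.cgroup2.devices.allow = c 226:* rwm
lxc.mount.entry = /dev/dri dev/dri none bind,optional,create=dir
EOF

# Inside the container: check VA-API works, then use copy-back decode in mpv.
vainfo
mpv --hwdec=vaapi-copy video.mkv
```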
                  Last edited by linuxgeex; 22 May 2022, 05:13 AM.

                  Comment


                  • #19
                    Originally posted by linuxgeex View Post
                    Try XFCE4, disable its native compositing, and instead run Compton which doesn't rely on GLX for compositing. It reduces XDAMAGE calls so your VNC is radically faster / lower bandwidth, it reduces flicker when your pointer crosses window boundaries, and that overall reduces your CPU usage vs running without compositing, as well as bandwidth if you are operating over a relatively slow link like WiFi+SSH if you have headless machines in your dev environment. Remmina is a nice way to aggregate your guests, and you get lovely scaled output... you can tile outputs on a 50" display :-)
                    There seems to be a misunderstanding. I know I mentioned VNC, but that was local on the same system. I was comparing it to better alternatives I had available when not using GPU passthrough. I am not interested in using XFCE4 + Compton, my VMs are Arch with KDE Plasma.

                    I don't have performance issues for these VMs; they get 3D accel through virgl with QEMU, or VMware Player (much better with the NVIDIA host GPU at 60% of native, while virgl + QEMU does 80% of native without the NVIDIA GPU). Plenty of performance when I only care about desktop compositing in the guest, not gaming.
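                    For reference, the virgl setup I mean is roughly this (the virtio-vga-gl device name is from newer QEMU releases, older ones spell it -device virtio-vga,virgl=on; the disk image path is a placeholder):

```shell
qemu-system-x86_64 \
  -enable-kvm -cpu host -smp 4 -m 8G \
  -device virtio-vga-gl \
  -display gtk,gl=on \
  -drive file=arch.qcow2,if=virtio
```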

                    Originally posted by linuxgeex View Post
                    That won't solve Windows or MacOS problems... for those I use Chrome Remote Desktop because it outperforms MacOS's awful VNC and Windows' RDP, using x264 or WebP with hardware acceleration where available. Of course you only get hardware video accel with passthrough or native hosts
                    There's quite a few options for remote displays; this isn't something I'm concerned about. SPICE does have hardware accel btw, it can leverage whatever gstreamer on the host provides IIRC. My requirement is that I can bring up several guests with a smooth composited experience (it lacks leveraging high refresh rates without patching QEMU apparently); passthrough provides one grunty VM for a GPU, but that doesn't meet my need of being able to bring up several guests.

                    VM guests without passthrough are likely to more broadly support HW video accel in future as was discussed earlier. Which implementation becomes more prevalent is unclear atm. I have heard that Intel GVT-g (vGPU) provides access to HW encode/decode to the guest which is neat, I haven't explored that yet personally.
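                    From what I've read, creating a GVT-g vGPU instance is roughly this sysfs dance (untested by me; the PCI address and type name are assumptions that vary by hardware, and it needs i915.enable_gvt=1 on the kernel command line):

```shell
modprobe kvmgt
GVT=/sys/bus/pci/devices/0000:00:02.0/mdev_supported_types
ls "$GVT"                                   # list the vGPU types this iGPU offers
uuidgen | tee "$GVT/i915-GVTg_V5_4/create"  # create one vGPU instance
```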

                    Originally posted by linuxgeex View Post
                    Oh, and one other way to get VAAPI in your guests is to run them in LXC. I haven't done that in a while but I found a guide that made it possible... basically just need to bind the devices into the guest. You can run most distros in LXC. Then you use the vaapi_copy option for mpv or ffmpeg so you can output it on the virtual framebuffer.
                    I have heard of DistroBox (uses Podman) for containers that are more integrated with the host system and support running graphical apps as if they were native on the host. I haven't looked into whether they provide the other features I'm interested in such as suspend/snapshots.

                    I don't know much about LXC personally. The binding part is taking exclusive access of the GPU? That's not what I'd want if that's the case.

                    ---

                    Thanks for taking the time to share your knowledge and experience though

                    Comment


                    • #20
                      Originally posted by polarathene View Post
                      There's quite a few options for remote displays; this isn't something I'm concerned about. SPICE does have hardware accel btw, it can leverage whatever gstreamer on the host provides IIRC. My requirement is that I can bring up several guests with a smooth composited experience (it lacks leveraging high refresh rates without patching QEMU apparently); passthrough provides one grunty VM for a GPU, but that doesn't meet my need of being able to bring up several guests.

                      VM guests without passthrough are likely to more broadly support HW video accel in future as was discussed earlier. Which implementation becomes more prevalent is unclear atm. I have heard that Intel GVT-g (vGPU) provides access to HW encode/decode to the guest which is neat, I haven't explored that yet personally.
                      SPICE isn't HW accelerated via the host, it is via the guest. However, the way SPICE works means the stream doesn't need to be compressed unless it goes across a LAN connection, so connections on the same PC often won't experience any issues whatsoever. I was working on getting VA-API acceleration working in SPICE via the host, but kept running into a bug which caused a GPU driver crash, which was annoying enough to make me entirely give up on the project.

                      Comment
