VirtIO Improvements Ready For Linux 6.10


  • VirtIO Improvements Ready For Linux 6.10

    Phoronix: VirtIO Improvements Ready For Linux 6.10

    All of the VirtIO updates are now ready for the Linux 6.10 merge window that is closing this weekend...


  • #2
    May be slightly off-topic, but I have a "previously loved" 8th Gen i7-based Dell Precision tower at work that currently runs Windows. In its day, it would have been pretty high-end. Right now it has 32GB of RAM and an Nvidia dGPU (in addition to the Intel iGPU).

    I am thinking of finally doing something I have wanted to do for a while: install Linux as the host OS, likely a trimmed-down Fedora installation, driven off the Intel iGPU. Then I'll install Windows 10 as a guest, likely two installs: one set up with a local account, and the other bound to our AD for tasks I need occasionally, kept slim with just the tools I need for that.

    For the graphics part, I don't think I want to mess with the VirtIO-GPU stuff. I can do passthrough, but I've also been reading up on vGPU which, if my Nvidia card supports it, looks like it would let me split the card between two Windows guests (see the sketch below). Where I really need to do my homework is figuring out whether I just run KVM, or use QEMU, or dig into the libvirt and virsh stuff. I tend to keep only a cursory understanding of things until I really need to dig in, and it's looking like time to do so.

    Whether in Sway or Cosmic (I am sure I'll install both), it would be really cool to use the Linux host for most of my stuff, but then be able to launch the Windows guests full-screen, each in its own virtual desktop on the host WM/DE, when needed. I've been thinking about this for a long time; time to get it done, I think!
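    For the passthrough route, a minimal sketch of the usual prerequisite checks, assuming an Intel host (AMD hosts use amd_iommu=on instead); the vendor:device IDs at the end are placeholders, not this card's real ones:

    Code:
    # Kernel command line (e.g. via GRUB), then reboot:
    #   intel_iommu=on iommu=pt

    # Find the GPU's PCI address and vendor:device IDs
    lspci -nn | grep -i nvidia

    # List IOMMU groups; the GPU should sit in a cleanly separable group
    for d in /sys/kernel/iommu_groups/*/devices/*; do
        n=${d#*/iommu_groups/}; n=${n%%/*}
        printf 'IOMMU group %s: ' "$n"
        lspci -nns "${d##*/}"
    done

    # Bind the card to vfio-pci at boot (IDs are placeholders)
    echo "options vfio-pci ids=10de:1b80,10de:10f0" | sudo tee /etc/modprobe.d/vfio.conf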



    • #3
      Originally posted by ehansin View Post
      Where I really need to do my homework is figuring out whether I just run KVM, or use QEMU, or dig into the libvirt and virsh stuff...

      KVM is the virtualization subsystem built into the Linux kernel. QEMU uses it to run hardware-accelerated guests. Libvirt is an interface to KVM/QEMU/LXC, and virsh is just a command-line client for it.

      Virt-Manager is a graphical client for setting up QEMU machines; it is the user-facing equivalent of virsh.

      There are other GUIs for QEMU, but the heavy lifting is done by QEMU and KVM.
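      To make the layering concrete, here's a sketch with a hypothetical disk image path; the first command drives QEMU directly, the second pair does the same job through libvirt:

      Code:
      # Bottom of the stack: QEMU itself, with KVM acceleration
      qemu-system-x86_64 -enable-kvm -m 4096 -smp 4 \
          -drive file=/var/lib/libvirt/images/win10.qcow2,if=virtio \
          -nic user,model=virtio-net-pci

      # One layer up: let libvirt manage the QEMU process.
      # virt-install defines the domain; virsh drives it afterwards.
      virt-install --name win10 --memory 4096 --vcpus 4 \
          --disk /var/lib/libvirt/images/win10.qcow2 --import --osinfo win10
      virsh start win10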



      • #4
        VirtIO-Net is now fully supported in VDUSE. VDUSE is the software-defined data path based on vDPA and stands for "vDPAU device in user-space."
        "vDPAU"? Isn't this very confusing, since NVIDIA's video acceleration API is also called VDPAU, the only difference being that the V is capitalized there as well? I know that VDPAU lost out to VA-API, but the older NVIDIA API is still in use.

        I can imagine clashes with scarce three-letter acronyms, but in this case, it seems needlessly confusing.



        • #5
          Originally posted by dragorth View Post
          KVM is the virtualization subsystem built into the Linux kernel. QEMU uses it to run hardware-accelerated guests. Libvirt is an interface to KVM/QEMU/LXC, and virsh is just a command-line client for it...

          Thank you, much appreciated! I kind of knew, but I also kind of didn't. I might dive into virsh and use it to drive QEMU (which will drive KVM, at least I think so!) so I can learn a few new things at once. I have been using UTM (a graphical client) on a Mac laptop a lot lately, which sits on top of QEMU. I can set things up and see all the options that get set, so that's pretty cool as well.
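          For anyone else starting down the virsh path, a few of the everyday commands; "win10" is just a placeholder domain name:

          Code:
          virsh list --all        # defined domains and their state
          virsh start win10       # boot a domain
          virsh dumpxml win10     # full libvirt XML - every option a GUI would set
          virsh edit win10        # edit that XML in $EDITOR
          virsh shutdown win10    # ACPI shutdown; 'virsh destroy' is the hard power-off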



          • #6
            virtio-net is horrible for performance...
            Has anyone had the same experience as me?

            Kernel 6.9.1 on Arch Linux with Podman 5.0.3.
            Speed between Container - Container:
            Code:
            $ iperf3 --client 192.168.50.226 --port 5002
            [  5] local 192.168.50.226 port 53860 connected to 192.168.50.226 port 5002
            [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
            [  5]   0.00-1.00   sec  11.8 GBytes   102 Gbits/sec    0    512 KBytes
            [  5]   1.00-2.00   sec  11.2 GBytes  96.4 Gbits/sec    0    512 KBytes
            [  5]   2.00-3.00   sec  11.8 GBytes   102 Gbits/sec    0    384 KBytes
            [  5]   3.00-4.00   sec  11.8 GBytes   102 Gbits/sec    0    512 KBytes
            [  5]   4.00-5.00   sec  11.8 GBytes   101 Gbits/sec    0    512 KBytes
            [  5]   5.00-6.00   sec  11.9 GBytes   102 Gbits/sec    0    512 KBytes
            [  5]   6.00-7.00   sec  12.0 GBytes   103 Gbits/sec    0    512 KBytes
            [  5]   7.00-8.00   sec  10.9 GBytes  93.3 Gbits/sec    0    512 KBytes
            [  5]   8.00-9.00   sec  10.9 GBytes  93.4 Gbits/sec    0    512 KBytes
            [  5]   9.00-10.00  sec  12.0 GBytes   103 Gbits/sec    0    256 KBytes
            - - - - - - - - - - - - - - - - - - - - - - - - -
            [ ID] Interval           Transfer     Bitrate         Retr
            [  5]   0.00-10.00  sec   119 GBytes   102 Gbits/sec    0             sender
            [  5]   0.00-10.00  sec   119 GBytes   102 Gbits/sec                  receiver
            QEMU 9.0.0 + libvirt 10.3.0 with a bridged network ( -netdev 'type=bridge,br=virbr0,id=nic' -device 'driver=virtio-net-pci,netdev=nic' ).
            Speed between Host - VM:
            Code:
            $ iperf3 --client 192.168.122.79 --port 5002
            [  5] local 192.168.122.1 port 43758 connected to 192.168.122.79 port 5002
            [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
            [  5]   0.00-1.00   sec  2.54 GBytes  21.8 Gbits/sec    0    208 KBytes
            [  5]   1.00-2.00   sec  2.46 GBytes  21.2 Gbits/sec    0    211 KBytes
            [  5]   2.00-3.00   sec  2.44 GBytes  21.0 Gbits/sec    0    223 KBytes
            [  5]   3.00-4.00   sec  2.53 GBytes  21.7 Gbits/sec    0    236 KBytes
            [  5]   4.00-5.00   sec  2.40 GBytes  20.7 Gbits/sec    0    191 KBytes
            [  5]   5.00-6.00   sec  2.42 GBytes  20.8 Gbits/sec    0    211 KBytes
            [  5]   6.00-7.00   sec  2.69 GBytes  23.1 Gbits/sec    0    201 KBytes
            [  5]   7.00-8.00   sec  2.57 GBytes  22.0 Gbits/sec    0    232 KBytes
            [  5]   8.00-9.00   sec  2.70 GBytes  23.2 Gbits/sec    0    226 KBytes
            [  5]   9.00-10.00  sec  2.95 GBytes  25.3 Gbits/sec    0    238 KBytes
            - - - - - - - - - - - - - - - - - - - - - - - - -
            [ ID] Interval           Transfer     Bitrate         Retr
            [  5]   0.00-10.00  sec  25.7 GBytes  22.1 Gbits/sec    0             sender
            [  5]   0.00-10.00  sec  25.7 GBytes  22.1 Gbits/sec                  receiver
            Bare metal: 102 Gbits/sec
            virtio-net: 22.1 Gbits/sec
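            Worth noting for anyone chasing that gap: a plain bridge netdev keeps packet processing in the QEMU process, while a tap backend with vhost-net moves it into the kernel and allows multiple queues. A sketch of the relevant flags (the tap name and queue count are examples, not what was tested above):

            Code:
            # vhost=on pushes virtio-net processing into the kernel's vhost-net
            # module; queues/mq=on enable multiqueue. vectors is usually set to
            # 2*queues + 2 so each queue pair gets its own MSI-X vector.
            qemu-system-x86_64 -enable-kvm \
                -netdev tap,id=nic,ifname=tap0,script=no,downscript=no,vhost=on,queues=4 \
                -device virtio-net-pci,netdev=nic,mq=on,vectors=10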
            Last edited by Kjell; 04 June 2024, 03:13 PM.



            • #7
              Originally posted by Kjell View Post
              virtio-net is horrible for performance...
              Has anyone had the same experience as me?

              Bare metal: 102 Gbits/sec
              virtio-net: 22.1 Gbits/sec

              Bare metal is always faster than an emulated device. Did you compare virtio against e1000? Did you enable multiple queues for virtio-net?

              Code:
              <interface type="bridge">
                <mac address="10:12:34:45:67:89"/>
                <model type="virtio"/>
                <driver name="vhost" txmode="iothread" ioeventfd="on" event_idx="on" queues="4" rx_queue_size="1024" tx_queue_size="256" rss="on" iommu="on" ats="on" packed="on">
                  <host ecn="on" mrg_rxbuf="on"/>
                </driver>
                <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
              </interface>
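              If multiqueue is enabled host-side, the guest still has to opt in; inside the VM, something along these lines (the interface name depends on the guest):

              Code:
              ethtool -l eth0                  # queue pairs offered vs. currently used
              sudo ethtool -L eth0 combined 4  # use all four queue pairs from the config above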
              Last edited by S.Pam; 23 May 2024, 02:24 PM.



              • #8
                Originally posted by ehansin View Post
                I have been using UTM (a graphical client) on a Mac laptop a lot lately, which sits on top of QEMU...

                UTM is a nice interface over QEMU and Apple's Hypervisor framework, which is the equivalent of KVM. UTM is analogous to Virt-Manager, while virsh is the command-line client for the same libvirt layer.

                I haven't used UTM yet, but it looks much easier than virsh. It has lots of presets that make the common cases easy. I am still testing out Virt-Manager myself; I just set it up to connect to my unRAID box, so I can't really be of much help.

                Back to the subject of this thread, I am all for speed improvements in the QEMU/KVM stack, especially for the aforementioned unRAID server. It will be a while before those land there, however.

                The 20+ Gbit/s is a great speed for something emulated on the CPU. Remember, VM traffic has to cross the memory boundaries set up between guest and host, so it interfaces with the system differently than Docker, where a container is essentially just a handle onto the host's own network stack. A container doesn't emulate any hardware, so it has the luxury of taking non-standard shortcuts before traffic reaches the outside world and the Ethernet adapters, whereas the VM's NIC has to look like standard hardware and work for the guest OS.
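                To make that plumbing difference concrete, a sketch with placeholder names: a container gets a veth pair, which is pure in-kernel packet passing, while a VM gets a tap device that QEMU or vhost must service like a real NIC.

                Code:
                # Container-style path: a veth pair into a network namespace.
                # Packets never leave the kernel.
                sudo ip netns add demo
                sudo ip link add veth-host type veth peer name veth-ctr
                sudo ip link set veth-ctr netns demo

                # VM-style path: a tap device enslaved to the bridge. Every packet
                # crosses the kernel/hypervisor boundary to reach the guest.
                sudo ip tuntap add dev tap0 mode tap
                sudo ip link set tap0 master virbr0 up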



                • #9
                  dragorth No need to respond; I'll just post this and let the topic at hand get back on track. I just installed libvirt and virt-manager on a Fedora install I did this morning (I wanted to compare the "custom", "minimal", and "server" install options; the first two appear to be the same these days), and did so inside UTM on a fairly low-spec Mac laptop. I used to use VirtualBox (still do on Windows) and like UTM way better. So if you are on a Mac (I'm on them all), I give it a thumbs up (easy Homebrew install, by the way).

                  On the UTM note, running Sway works fine, much better than what I could get working under VirtualBox, so it's a good VM for playing around with virsh and virt-manager. Not sure I can virtualize anything on top of this, given the low specs and turtles all the way down, but I can play around with the tools. Now back to the topic at hand!



                  • #10
                    Originally posted by Kjell View Post
                    virtio-net is horrible for performance...
                    Has anyone had the same experience as me?

                    Bare metal: 102 Gbits/sec
                    virtio-net: 22.1 Gbits/sec

                    What are your host specs? I have an OPNsense VM running on an i7 [email protected], pinned to cores 2-3 with multithreading, so 2c/4t.
                    I've tested the virtual bridge performance between host and VM, and I only get 3.5 Gbits/sec with virtio-net.
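                    For anyone comparing setups, pinning is also scriptable through virsh; "opnsense" below is a placeholder domain name:

                    Code:
                    # Pin vCPUs 0 and 1 to host cores 2 and 3, and keep the QEMU
                    # emulator threads off those cores.
                    virsh vcpupin opnsense 0 2
                    virsh vcpupin opnsense 1 3
                    virsh emulatorpin opnsense 0-1
                    virsh vcpuinfo opnsense   # verify the pinning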

