Intel Haswell Linux Virtualization: KVM vs. Xen vs. VirtualBox


  • #21
    Originally posted by AdamW View Post
    I use KVM for exactly that and it's fine. But I don't do any 3D stuff: if you do 3D stuff and you need passthrough acceleration, VBox is really the only option you have ATM. (FWIW, the Fedora kernel and virt devs are unified in viewing VBox as a terrible, terrible piece of software; they obviously think KVM is the best thing ever, but it's not just competition syndrome: they think VMware and Xen are perfectly fine code, it's just VBox they think is really terrible.)
    What about snapshots with KVM? And do you use GNOME Boxes?



    • #22
      Originally posted by n3wu53r View Post
      What about snapshots with KVM? And do you use GNOME Boxes?
      You can do snapshots fine, though it's not a feature I use. I don't use Boxes, no; I use virt-manager, which suits my workflow better and which I'm more used to.
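
      For anyone curious what snapshots look like outside the GUI: virt-manager (and virsh) drive the libvirt API, so a minimal sketch with the libvirt Python bindings is below. The guest name and snapshot name are made up for illustration.

# Minimal snapshot sketch using the libvirt Python bindings.
# Assumes a guest named "f19-test" already exists on the local QEMU/KVM host.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("f19-test")            # hypothetical guest name

# Create a snapshot; libvirt takes a small XML description of it.
snap_xml = "<domainsnapshot><name>before-update</name></domainsnapshot>"
dom.snapshotCreateXML(snap_xml, 0)

# List existing snapshots, then roll back to the one just taken.
for snap in dom.listAllSnapshots():
    print(snap.getName())
dom.revertToSnapshot(dom.snapshotLookupByName("before-update"), 0)

conn.close()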



      • #23
        Originally posted by AdamW View Post
        I use KVM for exactly that and it's fine. But I don't do any 3D stuff: if you do 3D stuff and you need passthrough acceleration, VBox is really the only option you have ATM. (FWIW, the Fedora kernel and virt devs are unified in viewing VBox as a terrible, terrible piece of software; they obviously think KVM is the best thing ever, but it's not just competition syndrome: they think VMware and Xen are perfectly fine code, it's just VBox they think is really terrible.)
        If really good 3D performance is needed, PCI/VGA passthrough is the way to go. I'm not up to date on how KVM is doing in that area, but I use Xen 4.1 with PCI passthrough of a Radeon 7970 to a Windows 7 VM which I use only for games. The performance is really good, running BF3 just like the physical machine I had before. I also ran some 3DMark benchmarks and the performance loss was negligible, around 2%.
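
        For anyone wanting to try the same setup, a rough sketch of what such a guest definition can look like with the xm/xl toolstack is below. All device IDs, paths and names are made up, and the exact VGA-passthrough options differ between Xen releases, so treat it only as a starting point, not a recipe.

# Hypothetical /etc/xen/win7-gaming.cfg: HVM guest with a GPU passed through.
builder  = 'hvm'
name     = 'win7-gaming'
memory   = 8192
vcpus    = 4
boot     = 'c'
disk     = ['phy:/dev/vg0/win7,hda,w']      # made-up LVM volume
vif      = ['bridge=br0']
# PCI BDFs of the GPU and its HDMI audio function, as reported by lspci.
pci      = ['01:00.0', '01:00.1']
gfx_passthru = 1   # VGA passthrough switch; verify the option name for your Xen version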



        • #24
          Originally posted by AdamW View Post
          I use KVM for exactly that and it's fine. But I don't do any 3D stuff: if you do 3D stuff and you need passthrough acceleration, VBox is really the only option you have ATM. (FWIW, the Fedora kernel and virt devs are unified in viewing VBox as a terrible, terrible piece of software; they obviously think KVM is the best thing ever, but it's not just competition syndrome: they think VMware and Xen are perfectly fine code, it's just VBox they think is really terrible.)
          Assuming you have VT-d/IOMMU hardware, you can use KVM with passthrough. I'm sure you know this, but the person you're responding to may not.
          I'd be curious to know how VBox can perform passthrough without the transparent redirection provided by the hardware.
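
          For anyone who wants to check whether their box is even set up for that, here is a quick host-side sketch. It only assumes a Linux host with a reasonably recent kernel; which kernel parameter you need (intel_iommu=on versus the AMD equivalent) depends on the platform.

# Quick check of the PCI-passthrough prerequisites on the host.
import os

cmdline = open("/proc/cmdline").read()
print("IOMMU enabled on kernel cmdline:",
      "intel_iommu=on" in cmdline or "amd_iommu=on" in cmdline)

# Populated IOMMU groups are a good sign that VT-d/AMD-Vi is actually active.
groups_dir = "/sys/kernel/iommu_groups"
groups = os.listdir(groups_dir) if os.path.isdir(groups_dir) else []
print("IOMMU groups found:", len(groups))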



          • #25
            Originally posted by chrisb View Post
            Isn't there a large performance hit with qcow2 though? E.g. http://michaelmk.blogspot.co.uk/2012...mark.html?m=1#! shows some benchmarks at only 10% of LVM performance.
            Yes, it can be slow. I find that in real-world usage it's not bad at all, especially when using virtio. It also allows thin provisioning, which is useful in the lab. For performance-sensitive setups I usually use some type of raw block device, but qcow2 has its place. One thing KVM has an advantage in is allowing you to do just about any disk setup you want.
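
            To illustrate the thin-provisioning point, a small sketch is below (paths and sizes are made up; it only assumes qemu-img is installed). The image starts out tiny on disk and only grows as the guest writes.

# Create a thin-provisioned qcow2 image and inspect it.
import subprocess

image = "/var/lib/libvirt/images/lab-guest.qcow2"   # hypothetical path

# qcow2 allocates blocks lazily, so 40G is the virtual size, not used space.
subprocess.run(["qemu-img", "create", "-f", "qcow2", image, "40G"], check=True)

# 'disk size' in the output shows actual on-disk usage vs. the virtual size.
subprocess.run(["qemu-img", "info", image], check=True)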



            • #26
              Originally posted by liam View Post
              Assuming you have VT-d/IOMMU hardware, you can use KVM with passthrough. I'm sure you know this, but the person you're responding to may not.
              I'd be curious to know how VBox can perform passthrough without the transparent redirection provided by the hardware.
              VirtualBox (and VMware) use a special driver in the guest and hand the result to the host. No hardware involved.

              Using PCI passthrough would require a dedicated graphics card, which isn't what most people would want.



              • #27
                Hypervisors

                A few points:
                • KVM is a kernel module: it loads on a stock, unmodified Linux kernel and can be unloaded. With the help of QEMU emulated devices and VT acceleration it then provides a fully virtualised model: your guest OS is fooled into loading and installing onto hardware emulated in software (ya follow?), be it Windows, Linux or anything else. (A quick host check for this is sketched just below this list.)
                • Xen uses two models:
                • a fully virtualised one, using VT extensions and a QEMU device back end similarly to KVM, allowing it to run Windows or other OSes (this is called Xen HVM, or hardware-assisted virtualisation)
                • a para-virtualised one, or PV model, that runs modified/patched guests (nicely enough, the same Xen patch serves the host and the VM). This needs a Xen-patched guest, so it is usually limited to open-source OSes; the acceleration comes from a front-end/back-end driver model and a special hypercall ABI.
                • VirtualBox uses binary translation plus VT extensions, and even though it is useful and painless to run on your desktop, its performance in high-I/O and CPU-bound tasks should be below the rest.
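
                As a concrete illustration of the KVM point above, a tiny host check (nothing distribution-specific is assumed, just a Linux host):

# Does this host have the VT extensions KVM relies on, and is the module loaded?
import os

flags = open("/proc/cpuinfo").read()
print("Hardware virt (VT-x/AMD-V):", "vmx" in flags or "svm" in flags)

# /dev/kvm only exists while the kvm kernel module (kvm_intel/kvm_amd) is loaded.
print("kvm module loaded:", os.path.exists("/dev/kvm"))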


                Now, in my real-world experience deploying the first two under heavy workloads, not only are PV Linux guests on Xen (the second mode, paravirtualised) much, much faster than HVM Linux guests (especially for I/O), they also suffer far less contention and stay far more responsive if you saturate the host with several VMs (not just one). I have seen Xen dom0 hosts supporting large numbers of heavily loaded VMs (again, in paravirt mode) while maintaining very good performance and responsiveness; not so much with HVM. To his credit, Michael has indicated the Xen mode used for the benchmark (HVM); therefore I will say the results are not representative of the best performance possible for running a Linux virtual machine on a Linux host. If he had picked a Windows VM, which cannot run in PV mode, it would IMHO have made more sense.

                Also, I didn't see in the article the 'cache' settings used for the guest disk for KVM and Xen; I think those are far more relevant than the back-end format of the image (raw, qcow2, etc.) for assessing pure performance.
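
                For the record, this is the kind of setting I mean; a sketch with the libvirt Python bindings is below (the guest name and image path are made up, and cache='none' is just one sensible choice for benchmarking, not the only one):

# Attach a virtio disk with an explicit cache mode to a (hypothetical) KVM guest.
import libvirt

disk_xml = """
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none' io='native'/>
  <source file='/var/lib/libvirt/images/bench.qcow2'/>
  <target dev='vdb' bus='virtio'/>
</disk>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("f19-test")   # hypothetical guest name
# Persist it in the guest config; use VIR_DOMAIN_AFFECT_LIVE to hot-plug instead.
dom.attachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()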

                Finally, PCI passthrough of a VGA card is not the most stable thing in the world; it is rather hazardous and only a few people have had good success, on a limited set of GPUs. I believe the right solution is the dedicated guest driver mentioned before; unfortunately those tend to be proprietary for now.
                This post is simply to say that KVM has come a long way, but Xen PV guests are the standard so far, and don't take my word for it: just look at what is powering Amazon EC2 or the Rackspace Cloud ... So I would very much like to see a Xen PV Fedora 19 guest tested.
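
                To make that suggestion concrete, a PV guest definition can be as small as the sketch below (names, volumes and bridge are made up; pygrub boots the distro's own kernel from inside the guest image, so no external kernel/initrd is needed):

# Hypothetical /etc/xen/fedora19-pv.cfg: a paravirtualised Linux guest.
name       = 'fedora19-pv'
memory     = 2048
vcpus      = 2
bootloader = 'pygrub'                        # boots the kernel inside the image
disk       = ['phy:/dev/vg0/fedora19,xvda,w']
vif        = ['bridge=br0']
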
                Last edited by Onion; 21 July 2013, 10:26 PM.



                • #28
                  Originally posted by Onion View Post
                  A few points:
                  • KVM is a kernel module: it loads on a stock, unmodified Linux kernel and can be unloaded. With the help of QEMU emulated devices and VT acceleration it then provides a fully virtualised model: your guest OS is fooled into loading and installing onto hardware emulated in software (ya follow?), be it Windows, Linux or anything else.

                  • Xen uses two models:
                  • a fully virtualised one, using VT extensions and a QEMU device back end similarly to KVM, allowing it to run Windows or other OSes (this is called Xen HVM, or hardware-assisted virtualisation)

                  • a para-virtualised one, or PV model, that runs modified/patched guests (nicely enough, the same Xen patch serves the host and the VM). This needs a Xen-patched guest, so it is usually limited to open-source OSes; the acceleration comes from a front-end/back-end driver model and a special hypercall ABI.

                  • VirtualBox uses binary translation plus VT extensions, and even though it is useful and painless to run on your desktop, its performance in high-I/O and CPU-bound tasks should be below the rest.


                  Now, in my real-world experience deploying the first two under heavy workloads, not only are PV Linux guests on Xen (the second mode, paravirtualised) much, much faster than HVM Linux guests (especially for I/O), they also suffer far less contention and stay far more responsive if you saturate the host with several VMs (not just one). I have seen Xen dom0 hosts supporting large numbers of heavily loaded VMs (again, in paravirt mode) while maintaining very good performance and responsiveness; not so much with HVM. To his credit, Michael has indicated the Xen mode used for the benchmark (HVM); therefore I will say the results are not representative of the best performance possible for running a Linux virtual machine on a Linux host. If he had picked a Windows VM, which cannot run in PV mode, it would IMHO have made more sense.

                  Also, I didn't see in the article the 'cache' settings used for the guest disk for KVM and Xen; I think those are far more relevant than the back-end format of the image (raw, qcow2, etc.) for assessing pure performance.

                  Finally, PCI passthrough of a VGA card is not the most stable thing in the world; it is rather hazardous and only a few people have had good success, on a limited set of GPUs. I believe the right solution is the dedicated guest driver mentioned before; unfortunately those tend to be proprietary for now.
                  This post is simply to say that KVM has come a long way, but Xen PV guests are the standard so far, and don't take my word for it: just look at what is powering Amazon EC2 or the Rackspace Cloud ... So I would very much like to see a Xen PV Fedora 19 guest tested.
                  Have you posted the tests with Xen and KVM to back up your claims?
                  I have a question regarding two different types of virtualization. For our current project we did come to an agreement that we do not want to use paravirtualization. No modification of the kernel, etc. Reading about devs running away of XEN and joining the KVM band wagon I am not sure that we...

                  KVM uses virtio for PV. So far it can do balloon, disk, and network. Graphics should be coming when airlied finishes his work on virgl.
                  I wouldn't expect a massive difference between PV Xen and a properly configured KVM instance.



                  • #29
                    Originally posted by liam View Post
                    Have you posted the tests with Xen and KVM to back up your claims?
                    I have a question regarding two different types of virtualization. For our current project we did come to an agreement that we do not want to use paravirtualization. No modification of the kernel, etc. Reading about devs running away of XEN and joining the KVM band wagon I am not sure that we...

                    KVM uses virtio for PV. So far it can do balloon, disk, and network. Graphics should be coming when airlied finishes his work on virgl.
                    I wouldn't expect a massive difference between PV Xen and a properly configured KVM instance.

                    Sure, VirtIO provides paravirtualised drivers for KVM, which improves performance, and no, I have not posted data, as it relates to live production platforms. I cannot, however, agree with your general assumption that KVM and Xen PV will be similar; it all depends on workloads and settings. I agree that lightweight front-end applications should not show much of a difference, but as previously mentioned I believe that with contention and heavy back-end I/O there will be: heavy DB benchmarks running on four VMs, leaving barely 128 MB of RAM for the host to run on, for example.

                    I am not 'claiming' anything, by the way, nor do I have any interest, financial or otherwise, in any company related to those technologies ... sorry if this was seen as trolling for Xen. I am not. I simply want to say that Xen PV was forgotten in this benchmark, and it incidentally happens to have the least I/O overhead, at least IMHE.

                    I simply wanted to suggest two things:
                    - "By default", virtualisation benchmarks like this one should include Xen PV when testing a Linux image.
                    - Contention is where I have seen the performance edge; as mentioned before, I'd like to see a benchmark that uses nearly all the resources of the host and shows how the VMs behave.



                    • #30
                      Originally posted by liam View Post
                      Have you posted the tests with Xen and KVM to back up your claims?
                      I have a question regarding two different types of virtualization. For our current project we did come to an agreement that we do not want to use paravirtualization. No modification of the kernel, etc. Reading about devs running away of XEN and joining the KVM band wagon I am not sure that we...

                      KVM uses virtio for PV. So far it can do balloon, disk, and network. Graphics should be coming when airlied finishes his work on virgl.
                      I wouldn't expect a massive difference between PV Xen and a properly configured KVM instance.
                      Actually, post #13 (by AnthonySmith, 26th May) in the thread you linked completely reflects my experience.

