Ubuntu 11.10: Xen vs. KVM vs. VirtualBox

  • #16
    KVM offered the fastest performance in all of the tests except for SQLite, where VirtualBox was the fastest but that is due to a bug.
    -- Phoronix

    Not true - Xen won one of the benchmarks as well: NAS Parallel Benchmarks v3.3 - Test/Class: IS.C (higher is better):

    Bare Metal: 141.82
    VirtualBox: 78.24
    Xen: 97.68
    KVM: 87.87

    But as others have said, it would be interesting to see the results of Xen without the HVM as well...



    • #17
      Looks like I'll be content with VirtualBox's performance here.



      • #18
        I second the virt-io question.
        Beyond that, it would have been interesting to compare networking performance.

        In general, these results more or less mirror the VBox vs. qemu-kvm benchmark I ran in-house when we selected our VM solution (qemu-kvm).

        - Gilboa
        DEV: Intel S2600C0, 2xE52658V2, 32GB, 4x2TB + 2x3TB, GTX780, F21/x86_64, Dell U2711.
        SRV: Intel S5520SC, 2xX5680, 36GB, 4x2TB, GTX550, F21/x86_64, Dell U2412.
        BACK: Tyan Tempest i5400XT, 2xE5335, 8GB, 3x1.5TB, 9800GTX, F21/x86-64.
        LAP: ASUS N56VJ, i7-3630QM, 16GB, 1TB, 635M, F21/x86_64.



        • #19
          Xen PVHVM drivers

          Originally posted by darkbasic View Post
          What's the point of testing the ancient Xen HVM paths? PV would have been MUCH more interesting...
          The first page of the article says:
          "The only Xen issue encountered when testing it with an Ubuntu 11.10 guest and host was the need for manually loading the xen-blkfront driver for disk support."

          So it looks like they did actually use the Xen PVHVM drivers in the HVM guest, i.e. the optimized drivers (xen-blkfront and xen-netfront) were available.
          Still, it's odd that Xen did so badly in this benchmark; pretty much the opposite results were posted some time ago at XenSummit, where Xen was faster than KVM in almost every benchmark.

          It would be nice to get more information about the benchmark setup so I could try the benchmark myself.
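
          For what it's worth, whether the PV block path is really in use can be checked from inside the guest. A minimal sketch - only the xen-blkfront name comes from the article, the rest is a generic Linux check, and a built-in (non-module) driver won't show up in /proc/modules, so this is only indicative:

          ```shell
# Hedged sketch: report whether the Xen PV block driver (xen-blkfront)
# is active in this guest. Safe to run on any Linux box.
if grep -q '^xen_blkfront' /proc/modules 2>/dev/null; then
    pv_status="xen-blkfront loaded (PV block path active)"
else
    pv_status="xen-blkfront not loaded (likely the emulated IDE/SCSI path)"
fi
echo "$pv_status"

# PV disks typically appear as /dev/xvd* rather than /dev/sd*:
ls /dev/xvd* 2>/dev/null || echo "no xvd devices present"
          ```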



          • #20
            Originally posted by Scullder View Post
            Yup, I use Xen; I've never used HVM for a Linux guest and I don't see the point of it.
            You can also use LVM logical volumes on the host as disks for your guests, and I think that would change a lot of the results.
            What disk backend did this benchmark use in dom0? file:?
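
            For comparison, the two dom0 disk backends would look something like this in a domU config file (the volume group and image path here are made up for illustration):

            ```
# file-backed disk image (the file: backend):
disk = [ 'file:/var/lib/xen/images/guest.img,xvda,w' ]

# LVM logical volume as the backing store (the phy: backend):
disk = [ 'phy:/dev/vg0/guest-disk,xvda,w' ]
            ```

            The phy: backend hands the guest a block device directly and skips the host filesystem and loopback layers, which is why LVM-backed guests often benchmark noticeably faster than file-backed ones.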



            • #21
              Originally posted by ppanula View Post
              What's the point of testing Xen _HVM_ Linux, i.e. emulated network and block I/O drivers? You bet it's slower compared to PV KVM, VirtualBox, etc. drivers. The whole test is invalid if you compare that way...
              The first page of the article says: "The only Xen issue encountered when testing it with an Ubuntu 11.10 guest and host was the need for manually loading the xen-blkfront driver for disk support.".

              That sentence suggests the optimized (paravirtualized) Xen PVHVM drivers were actually used. Can someone confirm that?



              • #22
                Guest configuration file?

                Are the guest configuration files available somewhere?



                • #23
                  Hi,

                  I am currently writing my master's thesis on virtualization and have been running Xen and KVM tests, among other things. I've had similar results, with Xen being quite slow - for example, its raw CPU speed in the LINPACK test was only 90% of that of bare hardware or KVM (which were very close to each other). I've been using Ubuntu 10.04 and a vanilla 3.0.0 kernel in my tests. Just yesterday I did a test with the Xen dom0 kernel 2.6.32.46, and the LINPACK result jumped to on par with the hardware. I've measured other quite interesting results too - for example, Xen's idle power consumption was about 30% higher with kernel 3.0.0. It really seems something is badly wrong with Xen in the 3.0.0 kernel. Next, I'll try running my tests with the 3.2 release candidate kernel to see if things have changed.



                  • #24
                    This is fine for servers but what about desktop based benchmarks where you're much more likely to be running guests other than Linux?

                    I'd like to see some benchmarks in Windows running under these various hypervisors. In my experience KVM and Xen are really bad at that compared to VirtualBox or VMware.

                    The whole virtualization space is at somewhat of a frustrating crossroads.

                    VMware used to be top dog but then they switched focus off of Linux as a host before going full scatterbrained with 15-brazillion different products. In the early 2000's they were king and their code seemed well written and solid. These days they seem like they're in really bad shape on Linux hosts. Multiple monitor support is essentially nonfunctional for me and their GUI is slow and bloated. There are bugs in the Linux GUI that have been present for years and never fixed (I have filed reports only to have my bug closed with "it's fixed now" even though it wasn't actually fixed).

                    VirtualBox just doesn't seem to be written very well; it has tons of bugs and I often see really bad host kernel faults with it. However, it is the only virtualization product I have used that can correctly use all 4 of my monitors in a dual-TwinView Xinerama setup (which is a convoluted setup because multi-monitor on Linux sucks in general, but I digress). Performance is good. The user interface is a bit convoluted compared to VMware's. This would probably be my product of choice if it weren't so buggy. I worry about security issues with it, partly due to the general code quality and partly due to its focus on non-server applications.

                    Xen is decent for server-only stuff, like providing VPS hosting services. It has a fairly long, proven security record with good separation between the host and guests and between guests. It seems designed more for a segmented model of virtualization where you're dividing up a server into discrete units (i.e. each domU gets a fixed amount of RAM, disk, whatever - no sharing). Mostly only for running Linux on Linux. Really it's just a fancy paravirtualized system.

                    KVM is similar to Xen but not so focused on the segmented server model. As we see in these benchmarks, it's faster than Xen. Its security has yet to be proven, though. Again, it's mostly good at running Linux on Linux with the paravirtualization drivers. Windows doesn't seem to run so well on it. Multi-monitor support is complicated because you have to use SPICE/QXL-type stuff (I'm not sure how well it works because I haven't been able to get it working; it seems like performance won't be good, but I don't know for sure). This seems like the good long-term choice, assuming something better doesn't come along. It's just lacking so much right now - good disk and video drivers for non-Linux guests, for instance, or just performance in general when not using Linux guests (again, this goes back to it being more of a fancy paravirtualization system that falls back to slow QEMU paths when it can't paravirtualize).

                    QEMU by itself is dog slow, and the pieces of it used by Xen and KVM are probably what weigh them down, especially with non-Linux guests.



                    • #25
                      Ok, just ran a LINPACK test with kernel 3.2.0-rc1. Same bad results as with kernel 3.0.0. More tests are on the way, but right now it seems to me that Xen has been broken since its merge into the mainline kernel.



                      • #26
                        Originally posted by Minigun View Post
                        Ok, just ran a LINPACK test with kernel 3.2.0-rc1. Same bad results as with kernel 3.0.0. More tests are on the way, but right now it seems to me that Xen has been broken since its merge into the mainline kernel.
                        That performance difference could be caused by the Xen ACPI cpufreq patches missing from upstream Linux 3.x kernels. The patches in question are still work in progress and are currently planned for inclusion in the Linux 3.3 kernel. The Linux 2.6.32.x dom0 kernel from the xen.git xen/stable-2.6.32.x branch has those patches included, so it's able to use the best-performing cpufreq states. People willing to test the Xen ACPI cpufreq patches with a Linux 3.x kernel can fetch them from Konrad's git repository.

                        More information about Linux kernel Xen pvops features and status here: http://wiki.xen.org/xenwiki/XenParavirtOps
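
                        A quick way to see whether dom0 can actually scale CPU frequency (and thus whether these patches matter on a given box) is to read the sysfs cpufreq interface. A hedged sketch, nothing Xen-specific assumed:

                        ```shell
# Hedged sketch: report which cpufreq driver/governor is in use for cpu0.
# On an affected Xen dom0 kernel the cpufreq directory is simply absent,
# leaving the CPU stuck at its boot frequency.
cpufreq_dir=/sys/devices/system/cpu/cpu0/cpufreq
if [ -r "$cpufreq_dir/scaling_driver" ]; then
    report="driver=$(cat "$cpufreq_dir/scaling_driver") governor=$(cat "$cpufreq_dir/scaling_governor")"
else
    report="no cpufreq interface exposed for cpu0"
fi
echo "$report"
                        ```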



                        • #27
                          xen.org re-run of the Phoronix benchmark

                          Here's a re-run of the Phoronix Xen vs. KVM vs. VirtualBox benchmarks, with the missing Xen dom0 ACPI cpufreq patches added to the dom0 kernel: http://blog.xen.org/index.php/2011/1...-vs-kvm-redux/

                          The performance numbers are very different there. Please take a look.



                          • #28
                            The Xen ACPI cpufreq / power management patches are now included in upstream Linux 3.4.x and later kernel versions. The driver in question is called "xen_acpi_processor.ko".
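
                            A small sketch for checking whether that driver is present on a given dom0 - generic Linux checks, with nothing assumed beyond the module name from the post above:

                            ```shell
# Hedged sketch: check for the xen_acpi_processor driver on a 3.4+ dom0.
# Safe to run anywhere; it just reports what it finds.
if grep -q '^xen_acpi_processor' /proc/modules 2>/dev/null; then
    drv_state="xen_acpi_processor loaded as a module"
elif [ -d /sys/module/xen_acpi_processor ]; then
    drv_state="xen_acpi_processor built into the kernel"
else
    drv_state="xen_acpi_processor not present (non-Xen kernel or pre-3.4?)"
fi
echo "$drv_state"
                            ```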

