The first page of the article says: "The only Xen issue encountered when testing it with an Ubuntu 11.10 guest and host was the need for manually loading the xen-blkfront driver for disk support."
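The article doesn't show the commands used; on an Ubuntu guest, loading the driver by hand and making it persistent would typically look like the sketch below (standard module handling, not the article's exact steps):

```shell
# Load the Xen paravirtualized block front-end driver for the current boot
sudo modprobe xen-blkfront

# Make it load automatically on subsequent boots
echo xen-blkfront | sudo tee -a /etc/modules
```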
Originally Posted by ppanula
That sentence makes it look like Xen PVHVM optimized (paravirtualized) drivers were actually used. Can someone confirm that?
Guest configuration file?
Are the guest configuration files available somewhere?
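They don't appear to be published with the article. For reference, a Xen guest config of the xm/xl style in use around Ubuntu 11.10 looks roughly like the sketch below; every name, path, and value here is an illustrative assumption, not the article's actual configuration:

```
# /etc/xen/ubuntu1110.cfg - illustrative sketch only
builder = "hvm"                               # HVM guest; PVHVM drivers such as
name    = "ubuntu1110"                        # xen-blkfront load inside the guest
memory  = 2048
vcpus   = 4
disk    = ["phy:/dev/vg0/ubuntu1110,xvda,w"]  # backing device is a guess
vif     = ["bridge=xenbr0"]
boot    = "c"
```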
I am currently writing my master's thesis on virtualization and I've been running Xen and KVM tests among other things. I've had similar results with Xen being quite slow - for example, its raw CPU speed in the LINPACK test was only 90% of that of the hardware or KVM (which were very close). I've been using Ubuntu 10.04 and a vanilla 3.0.0 kernel in my tests. Just yesterday I did a test with a 2.6.32.x Xen dom0 kernel, and the LINPACK result jumped to on par with the hardware. I've measured other quite interesting results too - for example, Xen's idle power consumption was roughly 30% higher with kernel 3.0.0. It really seems there's something badly wrong with Xen in the 3.0.0 kernel. Next, I'll try running my tests with the 3.2 release candidate kernel to see if things have changed.
This is fine for servers, but what about desktop-based benchmarks, where you're much more likely to be running guests other than Linux?
I'd like to see some benchmarks in Windows running under these various hypervisors. In my experience KVM and Xen are really bad at that compared to VirtualBox or VMware.
The whole virtualization space is at a somewhat frustrating crossroads.
VMware used to be top dog, but then they shifted their focus away from Linux as a host before going full scatterbrained with 15-brazillion different products. In the early 2000s they were king and their code seemed well written and solid. These days they seem to be in really bad shape on Linux hosts. Multiple-monitor support is essentially nonfunctional for me, and their GUI is slow and bloated. There are bugs in the Linux GUI that have been present for years and never fixed (I have filed reports only to have my bugs closed with "it's fixed now" even though they weren't actually fixed).
VirtualBox just doesn't seem to be written very well; it has tons of bugs and I often see really bad host kernel faults with it. However, it is the only virtualization product I have used that is able to correctly use all 4 of my monitors in a dual-TwinView Xinerama setup (which is a convoluted setup because multi-monitor on Linux sucks in general, but I digress). Performance is good. The user interface is a bit convoluted compared to VMware's. This would probably be my product of choice if it weren't so buggy. I worry about security issues with it, partly due to the general code quality and partly due to the focus on non-server applications.
Xen is decent for server-only stuff, like providing VPS hosting services. It has a fairly long, proven security record, with good separation between the host and guests and between guests. It seems designed more for a segmented model of virtualization where you're dividing up a server into discrete units (i.e., each domU gets a fixed amount of RAM, disk, whatever - no sharing). Mostly only for running Linux on Linux. Really it's just a fancy paravirtualized system.
KVM is similar to Xen, but it's not so focused on the segmented server model. As we see in these benchmarks, it's faster than Xen. Its security has yet to be proven, though. Again, it's mostly good at running Linux on Linux with the paravirtualization drivers; Windows doesn't seem to run so well on it. Multi-monitor support is complicated because you have to use SPICE/QXL type stuff (I'm not sure how well it works because I haven't been able to get it working; it seems like performance won't be good, but I don't know for sure). This seems like the good long-term choice, assuming something better doesn't come along. It's just lacking so much right now - good disk and video drivers for non-Linux guests, for instance, or just performance in general when not using Linux guests (again, this goes back to it being more of a fancy paravirtualization system that falls back to slow QEmu stuff when it can't paravirtualize).
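For anyone who wants to try the SPICE/QXL route mentioned above, the basic shape of a KVM invocation from that era is roughly the following (memory size, port, and image name are illustrative; this is a sketch, not a tuned setup):

```shell
# Start a KVM guest with a QXL display device exported over SPICE
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=guest.img,if=virtio \
    -vga qxl \
    -spice port=5930,disable-ticketing

# Then attach a SPICE client to the guest display, e.g. with spice-gtk:
#   spicy -h localhost -p 5930
```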
QEmu by itself is dog slow and the pieces of it used by Xen and KVM are probably what weighs them down, especially with non-Linux guests.
OK, I just ran a LINPACK test with kernel 3.2.0-rc1. Same bad results as with kernel 3.0.0. More tests are on the way, but right now it seems to me that Xen has been broken since its merge into the mainline kernel.
That performance difference could be caused by the Xen ACPI cpufreq patches missing from upstream Linux 3.x kernels. The patches in question are still work in progress and are currently planned for inclusion in the Linux 3.3 kernel. The Linux 2.6.32.x dom0 kernel from the xen.git xen/stable-2.6.32.x branch has those patches included, so it's able to use the best-performing cpufreq states. People willing to test the Xen ACPI cpufreq patches with a Linux 3.x kernel can fetch the patches from Konrad's git repository.
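The post doesn't give the exact fetch commands. Pulling a patch branch from a remote git tree generally follows this pattern (the repository URL and branch name below are placeholders - check the Xen wiki for the real ones):

```shell
# Placeholder repository URL and branch name - not confirmed by the post
git clone git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
cd xen
git branch -r                                   # list remote branches to find
                                                # the cpufreq patch branch
git checkout -b acpi-cpufreq origin/<branch>    # create a local tracking branch
```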
Originally Posted by Minigun
More information about Linux kernel Xen pvops features and status here: http://wiki.xen.org/xenwiki/XenParavirtOps
xen.org re-run of the Phoronix benchmark
Here's a re-run of the Phoronix Xen vs. KVM vs. VirtualBox benchmarks, with the missing Xen dom0 ACPI cpufreq patches added to the dom0 kernel: http://blog.xen.org/index.php/2011/1...-vs-kvm-redux/
The performance numbers are very different there. Please take a look.
The Xen ACPI cpufreq / power management patches are now included in upstream Linux 3.4.x and later kernel versions. The driver in question is called "xen_acpi_processor.ko".
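On a 3.4+ dom0 this can be verified directly; a quick sanity check might look like the following (the module name comes from the post above, and xenpm is the standard Xen power-management tool):

```shell
# Check that the driver is built and loaded in dom0
modinfo xen_acpi_processor
lsmod | grep xen_acpi_processor

# With it loaded, the hypervisor should see the full set of P-states
xenpm get-cpufreq-para 0
```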