Ubuntu 12.10: Linux KVM vs. Xen Virtualization Preview
Phoronix: Ubuntu 12.10: Linux KVM vs. Xen Virtualization Preview
With Ubuntu 12.10 coming up in just a few months, here are our first virtualization benchmarks from the forthcoming "Quantal Quetzal" operating system. Compared in this article is the raw bare-metal performance against Linux KVM and Xen virtualization, using the latest Linux 3.5 kernel.
Thanks for these benchmarks, but from my point of view the performance is not the important part. For ME, the tools are much more important.
The conclusion from this article: Run your operating system(s) on bare metal.
I would like to see something like:
Xen - running 5, 10, or 15 VMs
KVM - running 5, 10, or 15 VMs
VirtualBox - running 5, 10, or 15 VMs
That would be more complicated, since they would need equal load in all situations...
XEN VGA PASSTHROUGH DirectX tests for MS Windows VT systems
Thanks in advance for your work.
What I like most about Xen is VGA passthrough:
using native drivers inside the virtual machine,
especially for MS Windows DirectX tests.
Some day, using this tech, you will be able to play DirectX games on your Linux desktop with a Xen domain virtualizing MS Windows.
And benchmarks without antivirus, compared to bare metal + antivirus, for DirectX gaming.
And of course, if you only use it for gaming, copies of the virtual HDD will help you avoid MS Windows malware problems and other issues by restoring the original images.
I would like to follow the evolution of this amazing technology - unfortunately my current machine does not support this VGA passthrough tech and I cannot run tests myself - but each time you run these tests, you forget to include VGA passthrough setups, even for GNU/Linux OpenGL and Wine DirectX tests vs. bare metal.
And I don't know of any better blog to read these rare benchmarks on.
I tested Xen VGA passthrough (ASUS Crosshair IV + Phenom X6; PCI-E 1: HD 5870 for the Windows HVM, PCI-E 2: HD 6450 for Linux) and it's very easy to configure as secondary passthrough. That means you have to keep Xen's emulated Cirrus GPU as the Windows primary adapter, but it is not actually used (in the end, no trouble at all).
I dedicated the HD6450 to Ubuntu/Arch and the HD5870 to Windows.
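A secondary-passthrough setup along these lines can be sketched in an xm/xl-style HVM domain config. This is only an illustration of the technique described above; the guest name, disk path, and PCI addresses are hypothetical, not taken from the post:

```shell
# Hypothetical HVM guest config for "secondary" VGA passthrough: Xen's
# emulated Cirrus adapter stays the guest's primary display, and the real
# card is handed through as an additional PCI device.
cat > /etc/xen/win7-gaming.cfg <<'EOF'
builder = "hvm"
name    = "win7-gaming"
memory  = 4096
vcpus   = 4
disk    = ['phy:/dev/vg0/win7,hda,w']
# Pass the discrete GPU (and its audio function) through by PCI address.
pci     = ['01:00.0', '01:00.1']
EOF
xl create /etc/xen/win7-gaming.cfg    # or "xm create" on older toolstacks
```

The passed-through devices must first be bound to a passthrough-capable backend (e.g. pciback) on the host, and the board needs working IOMMU (VT-d/AMD-Vi) support.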
I ran a Unigine benchmark and got 95% of the native Windows performance (4 vCPU HVM vs. 6 cores native, 4 GB HVM vs. 8 GB native, same Catalyst version).
From these results and what I have read on forums, the performance in many cases is (almost) the same as Windows on bare metal, so if you have the time and money it is a very good solution (and the space on your desk to double everything ^^).
Edit: it also works pretty well with KVM now : http://tavi-tech.blogspot.fr/2012/05...ra-17-and.html
Last edited by Thanat0s; 07-16-2012 at 01:25 PM.
That would be the conclusion if this so-called benchmark had been run on the same hardware.
It has already been pointed out that Xen has various options for virtualization. Benchmarks can be very helpful, particularly if they are set up to provide optimum performance. The exact setup must be documented.
I was surprised to see Xen tested with an Ubuntu HVM guest, and from the discussion here I gather that this guest wasn't even using PVHVM drivers.
+1 for tangram and pasik and others who pointed that out.
I had a second look at the benchmark and noticed that Xen was using ext4 partitions. IIRC the Xen wiki clearly states that Xen should be used with LVM for best disk I/O performance. Of course one would also have to use the PVHVM drivers to reach the full potential under HVM, and as already mentioned, why not use a PV guest in the first place?
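To illustrate the LVM point: instead of backing the guest disk with an image file sitting on an ext4 filesystem, the guest can be given a raw logical volume directly. The volume-group and guest names below are hypothetical:

```shell
# Carve a raw logical volume out of an existing volume group and point the
# guest's disk line at it with the phy: prefix, bypassing the host
# filesystem (and its extra caching/journaling layers) entirely.
lvcreate -L 20G -n ubuntu-guest vg0

# File-backed disk on ext4 (what the article apparently used):
#   disk = ['file:/var/lib/xen/images/ubuntu.img,xvda,w']
# LVM-backed equivalent:
#   disk = ['phy:/dev/vg0/ubuntu-guest,xvda,w']
```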
Running a benchmark where one contender is set up in a more or less optimal way (KVM with the VirtIO drivers) while the other (Xen) uses what seems to be a low-performance setup is highly questionable. It hasn't been explained why HVM was chosen over PV (or why not test both options), nor is it clear whether or not the PVHVM drivers were used. It looks like most of the tests where Xen under-performs are disk-I/O related.
Since I'm running a Xen system and I don't experience such performance issues, perhaps there is something wrong with the benchmark? For reference, my Windows 7 HVM guest achieves a Windows Experience Index (WEI) of 7.8 (of 7.9) for disk I/O using an SSD and the GPLPV drivers. The driver alone improved the WEI by a full index point.
The way it stands now, the conclusion of the benchmark is simply misleading.