Phoronix: Ubuntu 12.10: Linux KVM vs. Xen Virtualization Preview
With Ubuntu 12.10 coming up in just a few months, here are our first virtualization benchmarks from the forthcoming "Quantal Quetzal" operating system. In this article, raw bare-metal performance is compared to Linux KVM and Xen virtualization using the latest Linux 3.5 kernel.
Thanks for these benchmarks, but from my point of view the performance is not the important part. For ME the tools are much more important.
The conclusion from this article: Run your operating system(s) on bare metal.
I would like to see something like:
Xen - running 5 VM, 10 VM, 15 VM
KVM - running 5 VM, 10 VM, 15 VM
VirtualBox - running 5 VM, 10 VM, 15 VM
That would be more complicated, since each setup would need an equal load in all situations...
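For the KVM case at least, that kind of scaling test could be scripted with libvirt's virsh, assuming guests named bench-vm-1 through bench-vm-N have already been defined (the names are hypothetical):

```
# Start N pre-defined libvirt guests, run the workload, then shut them down
N=5
for i in $(seq 1 "$N"); do
    virsh start "bench-vm-$i"
done
# ... run the identical benchmark workload inside every guest ...
for i in $(seq 1 "$N"); do
    virsh shutdown "bench-vm-$i"
done
```

Keeping the load equal across Xen/KVM/VirtualBox would still be the hard part, as noted above.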
XEN VGA passthrough: DirectX tests for virtualized MS Windows systems
Thanks in advance for your work.
What I like most about Xen is VGA passthrough:
using native drivers inside the virtual machine.
Especially MS Windows DirectX tests.
Some day, using this tech, you will be able to play DirectX games on your Linux desktop with a Xen domain virtualizing MS Windows.
And benchmarks for DirectX gaming without antivirus, compared against bare metal plus antivirus.
And of course, if you only use it for gaming, virtual HDD copies will help you avoid MS Windows malware problems and other issues by restoring the original images.
I would like to read about the evolution of this amazing technology. Unfortunately my current machine does not support this VGA passthrough tech, so I cannot run tests. But each time you run these tests, you forget to include the VGA passthrough configurations, even for GNU/Linux OpenGL and Wine DirectX tests vs. bare metal.
And I do not think there is any better blog for reading these rare benchmarks.
I tested Xen VGA passthrough (ASUS Crosshair IV + Phenom X6; PCI-E slot 1: HD 5870 for the Windows HVM, PCI-E slot 2: HD 6450 for Linux), and it's very easy to configure as secondary passthrough: you keep Xen's emulated Cirrus GPU as the Windows primary adapter, even though it is not actually used (in the end, no trouble at all).
I dedicated the HD6450 to Ubuntu/Arch and the HD5870 to Windows.
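For reference, a secondary-passthrough setup like this can be sketched in the HVM guest config. This is a hypothetical xm/xl-style example; the name, paths and PCI address are made up (find yours with lspci):

```
# Hypothetical Xen HVM guest config with secondary VGA passthrough
builder = 'hvm'
name    = 'windows-hvm'
memory  = 4096
vcpus   = 4
disk    = ['phy:/dev/vg0/win7,hda,w']
# Pass the discrete GPU through by its PCI BDF address:
pci     = ['01:00.0']
# gfx_passthru is left off, so Xen keeps the emulated Cirrus adapter
# as the primary display and Windows sees the real GPU as a
# secondary adapter, as described above.
```

The PCI device also has to be hidden from dom0 (e.g. bound to pciback) before it can be assigned to the guest.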
I ran a Unigine benchmark and I got 95% of the native Windows performance (4 vCPU HVM vs 6 cores native Windows, 4GB HVM vs 8GB native, same Catalyst version).
From these results and what I have read in forums, the performance in many cases is (almost) the same as Windows on bare metal, so if you have the time and money it is a very good solution (and space on your desk to double everything ^^).
Edit: it also works pretty well with KVM now: http://tavi-tech.blogspot.fr/2012/05...ra-17-and.html
Last edited by Thanat0s; 07-16-2012 at 01:25 PM.
It would be fair at this point to benchmark VirtualBox against these two, considering that it's also GPL licensed. I'm quite sure it would beat both of them any day in pure performance terms. VirtualBox is also the easiest to manage, both from the UI and the console, and it has 99% of the features anyone typically needs from a virtualization solution.
- easy to use even for complete noobs, without the cost of removing super advanced features
- performance is great and comparable to that of VMware, sometimes even better (it knocks out KVM and Xen on this)
- supports all kinds of hard disk interfaces, including IDE, SATA, SAS and direct access to a physical partition
- has plenty of features most of you need
- can run without a UI
- very easy to manage from the CLI
- cross-platform (Linux, Solaris, Windows, Mac OS... what more could you desire?)
- supports DKMS and the latest kernel releases
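The headless/CLI workflow mentioned above can be sketched with VBoxManage. The VM name, sizes and paths here are just examples, not anything from the article:

```
# Create and register a VM, give it RAM/CPUs, attach a disk, boot headless
VBoxManage createvm --name "testvm" --ostype Ubuntu_64 --register
VBoxManage modifyvm "testvm" --memory 2048 --cpus 2 --nic1 nat
VBoxManage storagectl "testvm" --name "SATA" --add sata
VBoxManage createhd --filename ~/testvm.vdi --size 20480
VBoxManage storageattach "testvm" --storagectl "SATA" --port 0 --device 0 \
    --type hdd --medium ~/testvm.vdi
VBoxManage startvm "testvm" --type headless
```

Everything the GUI does maps onto VBoxManage subcommands like these, which is what makes the "can run without a UI" point practical.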
My only complaint about VirtualBox is that it lacks support for the LPT parallel port interface, which still seems quite common in some old government offices... but that is my personal story.
These are my 2 cents on this topic.
I know this was probably supposed to be a like-for-like comparison... but why wasn't the Xen guest a PV guest rather than HVM? I'm not sure why someone would choose HVM with Xen unless there was no other option (e.g. Windows guests).
My testing with Xen 4.x showed PV guests to have approximately 4x the memory benchmark performance of HVM guests, which makes a huge difference in SMP applications.
Michael, could you please replicate the tests with a PV guest?
(I'm not really a Xen fan, but this seems like a huge oversight)
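For comparison, a PV guest is defined by booting a paravirtualized kernel directly rather than using builder = "hvm". A hypothetical xm/xl-style config sketch (names, versions and paths are examples only):

```
# Hypothetical Xen PV guest config: no 'builder = "hvm"' line;
# the hypervisor boots the guest kernel directly
name    = 'ubuntu-pv'
kernel  = '/boot/vmlinuz-3.5.0'        # paravirtualized guest kernel
ramdisk = '/boot/initrd.img-3.5.0'
memory  = 2048
vcpus   = 4
disk    = ['phy:/dev/vg0/ubuntu-pv,xvda,w']
vif     = ['bridge=xenbr0']
root    = '/dev/xvda ro'
```

Since the PV guest talks to the hypervisor through paravirtual interfaces instead of emulated hardware, this is the configuration where Xen's performance characteristics differ most from HVM.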
Michael used to benchmark VirtualBox but, IIRC, its performance was so bad that it seemed a bit pointless to me.
Originally Posted by bulletxt
Has there been a massive rewrite or something?
Do you have a reference for your claim that VirtualBox is vastly faster than Xen/KVM, and faster than VMware?
I'd personally love to see a KVM/Xen/ESX shoot out.
From my own testing, VMware ESX has pretty awful performance, particularly under load; I/O tasks in particular are horrendous. Yet try getting an average business to consider anything but VMware, even with its terrible performance and horrendous licensing costs.