Intel Core i7 Virtualization On Linux

  • Intel Core i7 Virtualization On Linux

    Phoronix: Intel Core i7 Virtualization On Linux

    Earlier this month we published Intel Core i7 Linux benchmarks that looked at the overall desktop performance when running Ubuntu Linux. One area we had not looked at in the original article was the virtualization performance, but we are back today with Intel Core i7 920 Linux benchmarks when testing out the KVM hypervisor and Sun xVM VirtualBox. In this article we are providing a quick look at Intel's Nehalem virtualization performance on Linux.

  • #2
    KVM kicks ass. It really really does.

    So does the Virt-manager and libvirt stuff. It supports not only KVM but also the Qemu + Kqemu alternative for hardware without virtualization extensions.

    Seriously. I have no doubt that KVM is going to be the dominant virtualization technology used on Linux, along with libvirt and the other associated tooling.

    On virtualized machines, I/O performance is the current limitation - disk performance and network performance especially. With KVM the Linux kernel has built-in paravirt drivers: drivers made specifically for running in a virtualized environment, which avoid much of the overhead of pushing everything through emulated hardware and real drivers.

    Network performance is especially affected by this. The best-performing fully virtualized card would be the emulated Intel e1000 1Gb/s NIC. Doing benchmarks on an earlier version of KVM, I was able to get a 300% improvement in performance by switching to the paravirt network driver, with lower CPU usage in BOTH the guest and host systems (on a dual-core system, with the guest restricted to a single CPU and the host primarily using the other).

    For Windows there is a paravirt driver for the network, but not for block devices... yet.
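
    To make that concrete, here's a rough sketch of what the paravirt NIC looks like when you define a guest through the libvirt Python bindings. The guest name, disk image path, and bridge name below are made-up examples, not anything from the article, and you'd tweak memory, vcpus, etc. to taste:

    Code:
    # Sketch: define and boot a KVM guest whose NIC uses the paravirt
    # 'virtio' model instead of the emulated Intel e1000.
    # Needs the libvirt Python bindings and a running libvirtd.
    import libvirt

    DOMAIN_XML = """
    <domain type='kvm'>
      <name>demo-guest</name>                      <!-- hypothetical name -->
      <memory>1048576</memory>                     <!-- KiB, i.e. 1 GB -->
      <vcpu>1</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw'/>
          <source file='/var/lib/libvirt/images/demo-guest.img'/>
          <target dev='vda' bus='virtio'/>         <!-- paravirt block -->
        </disk>
        <interface type='bridge'>
          <source bridge='br0'/>                   <!-- hypothetical bridge -->
          <model type='virtio'/>  <!-- swap for 'e1000' to get the emulated NIC -->
        </interface>
      </devices>
    </domain>
    """

    conn = libvirt.open('qemu:///system')  # connect to the local system hypervisor
    dom = conn.defineXML(DOMAIN_XML)       # persist the guest definition
    dom.create()                           # boot it
    conn.close()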

    -------------

    But the terrific thing about KVM is its ability to deliver enterprise-level features while retaining the user-friendliness of things like VirtualBox or Parallels.

    Now, the userland and configuration tooling is not yet up to the same level as VirtualBox, but it won't take long.

    ---------------


    So far I've done an install of Debian, OpenBSD, FreeBSD, and Windows XP Pro in my virt-manager-managed KVM environment, and they all work flawlessly so far, not that I have had much of a chance to exercise them.

    I am working on an install of Windows 2008, and pretty soon I'll get an install of Vista going. Then I am going to try to tackle OS X and see how that goes...

    • #3
      It appears that you can improve the disk I/O performance of KVM by using the LVM backend. There still seems to be some room for improvement, though. Hopefully it will come soon.

      • #4
        A note about VirtualBox

        Fantastic article as always!

        I feel it's worth pointing out that VirtualBox only presents a single CPU to the guest OS, unlike KVM, which uses however many you say it can. I *really* hope Sun gets a move on with improving multi-core/CPU support in the near future.

        For what it's worth, I made the switch from VMware to VirtualBox about 8 months back, and I have to say it's very promising how much headway Sun has been making - particularly the OpenGL and DirectX support for Windows and *nix.

        I guess I should try KVM again - I've never had any joy getting it to work properly in the past, and that's why I stuck with a third-party package that just 'works'.

        It's probably in an article somewhere, but this is the sort of review that would be great with equivalent benchmarks for a Core 2 Quad to see how they stack up against the i7. My main reason for multi-core is virtualisation, and it'd be great to see whether it's worth an upgrade yet or not.

        Keep up the fab work, Michael!

        • #5
          Ya, to clarify: there is no 'LVM backend' for KVM. It's just you using logical volumes as raw block devices that get assigned as hard drives to guests. Treat them like raw devices. Just to avoid confusion.
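
          For anyone reading along, this is roughly what handing an LV to a guest as a plain block device looks like via the libvirt Python bindings - only a sketch, and the guest name and volume path are hypothetical:

          Code:
          # Sketch: attach a logical volume to an existing guest as a raw
          # virtio disk. There is no special "LVM backend" - the LV is just
          # another block device as far as libvirt/KVM are concerned.
          import libvirt

          DISK_XML = """
          <disk type='block' device='disk'>
            <driver name='qemu' type='raw'/>
            <source dev='/dev/vg0/guest1-root'/>   <!-- hypothetical LV -->
            <target dev='vdb' bus='virtio'/>
          </disk>
          """

          conn = libvirt.open('qemu:///system')
          dom = conn.lookupByName('guest1')        # hypothetical guest name
          # persist the change in the guest's config; it shows up on the next boot
          dom.attachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
          conn.close()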

          I used to do that when I ran Xen on my desktop, but I added iSCSI to the mix. I had a file server using LVM to divide up storage into logical volumes...

          So:

          File server-hosted logical volumes ---> iSCSI LUNs ---> 1Gb/s network ---> virtual machines.

          So I had a few different VMs I'd fire up for one purpose or another.

          • #6
            Thanks for the nice article - quite what I was looking for, as we need some virtualisation at work and the main battle is between VBox and KVM.

            However, there are a few things I'm missing, which you can hopefully add/clarify:
            1. Was VT enabled in VirtualBox? IIRC 2.1.4 doesn't do this by default. If it wasn't, would you rerun the tests where VirtualBox was *really* bad compared to the others? (A host-side check for whether VT is available at all is sketched after this list.)
            2. Which exact version of KVM (kernel and userspace) were you running? The one from the 2.6.28 kernel plus 0.84 from Jaunty?
            3. Would you share the exact options used for KVM and VirtualBox (IDE vs S-ATA vs SCSI drive emulation, etc.)?
            4. Is there any reason you chose the non-free version over the OSE one?
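
            A quick host-side check for whether the CPU exposes hardware virtualization at all (my own sketch - it doesn't tell you whether VirtualBox actually had VT-x switched on for the VM, which is a per-machine setting) would be something like:

            Code:
            # Sketch: report whether the host CPU advertises hardware
            # virtualization extensions in /proc/cpuinfo.
            # Intel VT-x shows up as the 'vmx' flag, AMD-V as 'svm'.
            def has_vt_support(cpuinfo_path="/proc/cpuinfo"):
                with open(cpuinfo_path) as f:
                    for line in f:
                        if line.startswith("flags"):
                            flags = line.split(":", 1)[1].split()
                            return "vmx" in flags or "svm" in flags
                return False

            if __name__ == "__main__":
                print("hardware VT available:", has_vt_support())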

            Regards and again thanks for the article
            Zhenech

            • #7
              Originally posted by bbz231
              I feel it's worth pointing out that VirtualBox only presents a single CPU to the guest OS, unlike KVM, which uses however many you say it can. I *really* hope Sun gets a move on with improving multi-core/CPU support in the near future.
              Isn't VirtualBox a fork of Xen???

              • #8
                Originally posted by drag
                File server-hosted logical volumes ---> iSCSI LUNs ---> 1Gb/s network ---> virtual machines.
                So you had a LUN for each LV???

                If so, how could that be?

                Btw, I have never tried to use iSCSI, LUNs, or LVM... yet.

                • #9
                  Right now I use Xen on CentOS with images for each guest.

                  It would be very interesting to see a test of image files vs LVM vs partitions.

                  • #10
                    How would Xen perform? A comparison would be nice.
