Intel Core i7 Virtualization On Linux

  • Intel Core i7 Virtualization On Linux

    Phoronix: Intel Core i7 Virtualization On Linux

    Earlier this month we published Intel Core i7 Linux benchmarks that looked at the overall desktop performance when running Ubuntu Linux. One area we had not looked at in the original article was the virtualization performance, but we are back today with Intel Core i7 920 Linux benchmarks when testing out the KVM hypervisor and Sun xVM VirtualBox. In this article we are providing a quick look at Intel's Nehalem virtualization performance on Linux.

    http://www.phoronix.com/vr.php?view=13734

  • #2
    KVM kicks ass. It really really does.

    So does the virt-manager and libvirt stuff. They support not only KVM, but also the QEMU + KQEMU alternative for hardware without virtualization extensions.

    Seriously. I have no doubt that KVM is going to be the dominant virtualization technique used on Linux, along with libvirt and the other associated things.

    On virtual machines, I/O performance is the current limitation, disk and network performance especially. With KVM, the Linux kernel has built-in paravirt drivers. These are drivers specially made for running in a virtualized environment that are able to avoid much of the overhead of going through emulated hardware and real drivers.

    Network performance is especially affected by this. The best-performing fully virtualized card would be the emulated Intel e1000 1Gb/s NIC. Doing benchmarks on an earlier version of KVM, I was able to get a 300% improvement in performance by switching to the paravirt network driver, with lower CPU usage in BOTH the guest and host systems (on a dual-core system, with the guest restricted to a single CPU and the host primarily using the other).

    For Windows there is a paravirt driver for the network, but not for block devices... yet.
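
    For reference, a minimal sketch of what that looks like on the raw qemu-kvm command line (the disk image path and tap interface name are just placeholders; virt-manager/libvirt sets up the equivalent for you):

      # Boot a guest with the paravirt (virtio) block and network drivers
      # instead of the emulated IDE disk and e1000 NIC.
      qemu-kvm -m 1024 -smp 2 \
          -drive file=/var/lib/libvirt/images/guest.img,if=virtio,cache=none \
          -net nic,model=virtio -net tap,ifname=tap0,script=/etc/qemu-ifup

    Swapping if=virtio and model=virtio back to ide and e1000 gives you the fully emulated devices that the numbers above are compared against.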

    -------------

    But the terrific thing about KVM is its ability to deliver enterprise-level features while retaining the user-friendliness of things like VirtualBox or Parallels.

    Now, the userland and configuration stuff is not yet up to the same level as VirtualBox, but it won't take long.

    ---------------


    So far I've done installs of Debian, OpenBSD, FreeBSD, and Windows XP Pro in my virt-manager-managed KVM environment, and they all work flawlessly so far, not that I have had much of a chance to exercise them.

    I am working on an install of Windows Server 2008, and pretty soon I'll get an install of Vista going. Then I am going to try to tackle OS X and see how that goes...



    • #3
      It appears that you can improve the disk I/O performance of KVM by using LVM as the storage backend. There still seems to be some room for improvement, though. Hopefully it will come soon.

      http://kerneltrap.org/mailarchive/li...09/1/4/4590044
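
      Not the article's exact setup, but a rough sketch of what that means in practice, assuming a volume group named vg0 and the raw qemu-kvm command line (both placeholders):

        # Carve out a logical volume and hand it to the guest as its raw disk
        lvcreate -L 20G -n guest1 vg0
        qemu-kvm -m 1024 -drive file=/dev/vg0/guest1,if=virtio,cache=none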

      F



      • #4
        A note about VirtualBox

        Fantastic article as always!

        I feel it's worth pointing out that VirtualBox only presents a single CPU to the guest OS, unlike KVM, which uses however many you assign to it. I *really* hope Sun gets a move on with improving multi-core/CPU support in the near future.

        For what it's worth, I switched from VMware to VirtualBox about 8 months back and have to say it's very promising how much headway Sun has been making, particularly OpenGL and DirectX support for Windows and *nix guests.

        I guess I should try KVM again; I've never had any joy getting it to work properly in the past, and that's why I stuck with a third-party package that just 'works'.

        It's probably in an article somewhere, but this is the sort of review that would be great with equivalent benchmarks for a Core 2 Quad to see how it stacks up against the i7. My main reason for multi-core is virtualisation, and it'd be great to see whether it's worth an upgrade yet or not.

        Keep up the fab work, Michael!



        • #5
          Ya, to clarify, there is no 'LVM backend' for KVM. You're just using logical volumes as raw block devices that get assigned as hard drives to guests. Treat them like raw devices. Just to avoid confusion.

          I used to do that when I ran Xen on my desktop, but I added iSCSI to the mix. I had a file server using LVM to divide up storage into logical volumes...

          So:

          File server hosting logical volumes ---> iSCSI LUNs ---> 1Gb network ---> virtual machines.

          So I had a few different VMs I'd fire up for one purpose or another.



          • #6
            Thanks for the nice article, quite what I was looking for, as we need some virtualisation at work, and the main war is between VBox and KVM.

            However, I miss some things, which you can hopefully add/clarify:
            1. Was VT enabled in VirtualBox? IIRC 2.1.4 doesn't enable it by default. If it wasn't, would you rerun the tests where VirtualBox was *really* bad compared to the others?
            2. Which exact version of KVM (kernel and userspace) were you running? The one from the 2.6.28 kernel plus 0.84 from Jaunty?
            3. Would you share the exact options for KVM and VirtualBox (IDE vs S-ATA vs SCSI drive emulation, etc.)?
            4. Is there any reason you chose the non-free version over the OSE one?

            Regards and again thanks for the article
            Zhenech



            • #7
              Originally posted by bbz231 View Post
              I feel it's worth pointing out that VirtualBox only presents a single CPU to the guest OS, unlike KVM, which uses however many you assign to it. I *really* hope Sun gets a move on with improving multi-core/CPU support in the near future.
              Isn't VirtualBox a fork of Xen???



              • #8
                Originally posted by drag View Post
                File server hosting logical volumes ---> iSCSI LUNs ---> 1Gb network ---> virtual machines.
                So you had a LUN for each LV ???

                If so, how could that be?

                Btw. I have never tried to use iSCSI, LUN or LVM... Yet



                • #9
                  Right now I use Xen on CentOS with images for each guest.

                  It would be very interesting to see a test of image vs LVM vs partition.



                  • #10
                    How would Xen perform? A comparison would be nice.



                    • #11
                      Originally posted by Zhenech View Post
                      Thanks for the nice article, quite what I was looking for, as we need some virtualisation at work, and the main war is between VBox and KVM.

                      However, I miss some things, which you can hopefully add/clarify:
                      1. Was VT enabled in VirtualBox? IIRC 2.1.4 doesn't enable it by default. If it wasn't, would you rerun the tests where VirtualBox was *really* bad compared to the others?
                      In VBox 2.2 it is enabled by default.

                      http://www.virtualbox.org/wiki/Changelog
                      VT-x/AMD-V are enabled by default for newly created virtual machines
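
                      For older versions, or for VMs created before the upgrade, it can also be switched on per VM; a sketch, with "MyVM" as a placeholder name:

                        # Enable VT-x/AMD-V (and nested paging, where supported) for an existing VM
                        VBoxManage modifyvm "MyVM" --hwvirtex on --nestedpaging on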



                      • #12
                        Originally posted by Louise View Post
                        So you had a LUN for each LV ???

                        If so, how could that be?

                        Btw. I have never tried to use iSCSI, LUN or LVM... Yet
                        It's pretty simple.

                        With iSCSI you configure a block device to be exported over the network. LUN is just the SCSI term for identifying a drive.

                        So what I did was take a simple software RAID 5 array and divide it up using logical volume management. I'd create a logical volume to be used for a VM, then configure the iSCSI Enterprise Target to use the logical volume as a drive that gets exported over the network.

                        With Xen, I'd then use the Linux kernel's iSCSI support on my desktop to access it, and use one of those as a raw device for each guest VM.

                        With KVM and the virt-manager stuff, they have it set up so that the VM can be configured to use iSCSI directly. I have not tried it with KVM yet.
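
                        To give a concrete idea (my exact config is long gone), the target side of that is roughly one /etc/ietd.conf entry per logical volume; the IQN and volume names here are made up:

                          # /etc/ietd.conf on the file server (iSCSI Enterprise Target)
                          Target iqn.2009-04.lan.fileserver:vm.debian
                              Lun 0 Path=/dev/vg0/vm_debian,Type=blockio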

                        --------------------------

                        This sort of thing is important if you're going to use VMs for business or whatever and want to take advantage of the "live migration" features. One of the requirements is that you have a common storage backend so that the VM has consistent access to its storage after the move.

                        Then, for reliability, you'd have to take advantage of other features like Ethernet bonding, Linux multipath, and maybe DRBD or other storage replication features, so that you'd have the ability to replicate storage and create highly reliable storage networks. All the details are beyond me, though; I've done the research but no actual implementation of stuff like that.

                        That's one of the kick-ass things about KVM: you can more easily take advantage of all the little features, drivers, and hardware support that have been developed for Linux server use in the enterprise.

                        ---------------------

                        Oh, and for these block-level protocols like iSCSI, the stuff from Red Hat's clustering packages, or Fibre Channel... the security for these things sucks huge donkey balls. Their 'security' features are more for avoiding accidents and not so much for stopping attackers. So for security purposes you'd generally want to use a private network just for the storage, which is good for performance too. Of course, for more casual uses like home usage it's not that important.
                        Last edited by drag; 04-22-2009, 10:05 AM.



                        • #13
                          It would be interesting to re-run the KVM test restricting it to a single CPU in order to compare the overhead of virtualization.
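
                          A sketch of how one might do that restriction, assuming the raw qemu-kvm command line (virt-manager exposes the same thing through the VM's CPU settings; the image name is a placeholder):

                            # Give the guest a single virtual CPU and pin the process to one host core
                            taskset -c 0 qemu-kvm -smp 1 -m 1024 -drive file=guest.img,if=virtio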



                          • #14
                            Originally posted by drag View Post
                            With iSCSI you configure a block device to be exported over the network. LUN is just the SCSI term for identifying a drive.
                            OK, so LUN could be "/vol/iscsivol/tesztlun0" but is never exposed to the client?

                            Originally posted by drag View Post
                            So what I did was take a simple software RAID 5 array and divide it up using logical volume management. I'd create a logical volume to be used for a VM, then configure the iSCSI Enterprise Target to use the logical volume as a drive that gets exported over the network.

                            With Xen, I'd then use the Linux kernel's iSCSI support on my desktop to access it, and use one of those as a raw device for each guest VM.
                            Very interesting. Could you post the config file for the Xen guest here?

                            RHEL and Novell use different ways of booting a Xen guest, and they also use different ways to access the image: one uses "file:" and the other uses "tap:".

                            I have also read that when using LVM I should use "phy:".

                            But what does it look like with iSCSI?

                            What OS are you using as your Xen host?



                            • #15
                              OK, so LUN could be "/vol/iscsivol/tesztlun0" but is never exposed to the client?

                              Ya pretty much like that.

                              As you know iSCSI is just SCSI commands encapsulated in TCP packets.

                              The 'server' portion of iSCSI is called the 'iSCSI target' and the 'client' is called the 'iSCSI initiator'. Originally the intent was that you'd just slap some SCSI drives into a network adapter box and then computers could access the drives over the network with their own hardware adapters.

                              But you can get software targets and initiators too. So I used the software 'iSCSI Enterprise Target' for the server portion and the built-in iSCSI initiator in the Linux kernel.

                              And, Linux being Linux, any block device can be used as a drive: a disk partition, a USB flash drive, a file-backed loop device, a logical volume, etc. It's all the same, more or less.

                              Using the iSCSI protocol, I can then export any block device I feel like over the network.

                              So ya, it then shows up as /dev/whatever. It's been a long, long time since I did this, years and years, so I don't remember all the details, and I am sure that now with udev and such it's changed.

                              So for Xen it would be the same as setting up a hard drive partition for you to use. All the differences and network details are abstracted away from anything to do with the VM.
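
                              To answer the config question from earlier as best I can from memory: with the open-iscsi initiator the exported LUN just shows up as another /dev/sdX on the host, and the Xen guest config treats it as a physical device with "phy:". Roughly like this, with the IQN, IP, and device names all made up:

                                # On the Xen host: discover and log in to the target (open-iscsi)
                                iscsiadm -m discovery -t sendtargets -p 192.168.1.10
                                iscsiadm -m node -T iqn.2009-04.lan.fileserver:vm.debian -p 192.168.1.10 --login
                                # The LUN shows up as e.g. /dev/sdb; the guest config then uses:
                                #   disk = [ 'phy:/dev/sdb,xvda,w' ]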

                              -------------------------------

                              If you go and use virt-manager, you'll see that when setting up hardware for a VM it does have iSCSI support. I don't know whether it lets the VM use the target directly as a SCSI device, or whether it attaches it as a block device in the host system to be used as a generic drive. I haven't tried that out yet.

                              ------------------------------

                              There are other network block protocols that Linux supports, like:

                              * ATA over Ethernet (like iSCSI, but instead of SCSI commands in TCP, it uses ATA commands in Ethernet frames... it should have less overhead, but iSCSI is more mature, has better support, and ends up being faster)

                              * NBD -- network block device

                              * GNBD -- GFS network block device... this is supplied as part of Red Hat's GFSv1/v2 cluster-aware file system stack.

                              iSCSI is nice because it's a standard and lots of different devices and OSes support it.
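
                              NBD is about the simplest of the bunch to try out; a rough sketch, assuming the old port-and-path nbd-server syntax (newer versions use named exports, so check the man page):

                                # On the server: export a file (or any block device) on TCP port 2000
                                nbd-server 2000 /srv/exports/disk.img
                                # On the client: load the module and attach the export as /dev/nbd0
                                modprobe nbd
                                nbd-client fileserver 2000 /dev/nbd0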

                              ----------------------------

                              Just keep in mind that if you want to serve iSCSI to real systems (not VMs) over the network, you'll still need a local hard drive for swap devices. There are nasty deadlocks and race conditions associated with booting from the network and running out of RAM (you need memory to read data from the network, but you need data off the network for storage, and you need the storage for swap because you're out of RAM... etc.). Having a local drive for swap solves that.

                              And if you want multiple OSes to access the same file system on the same block device at the same time, you'll need a cluster-aware file system like OCFS2 or GFS. (OCFS2 is in the Linux kernel right now; GFS is part of Red Hat's clustering package, along with GNBD and CLVM, the cluster logical volume management.) That way they coordinate file locking and the like so they don't accidentally corrupt the file system by stepping on each other's toes.

                              ----------------------------

                              If you want to play around with iSCSI or other things like that, the best and easiest way may be to use Openfiler.

                              It's http://www.openfiler.com.

                              Very clever, very nice to use.

                              Just take an old PC, get a few 1TB drives, slap them in, and use Openfiler to configure them into RAID 10, and you'd have a very kick-ass network storage box for holding dozens and dozens of VMs.

                              Very nice. You should be able to get very close to native performance using that. With jumbo frames, decent hardware, and some tuning you should be able to get about 60-80 MB/s read/write speeds over gigabit Ethernet for the host system. Of course, the guest systems have limitations based on the VM technology.
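
                              If you'd rather roll it by hand than use Openfiler, the core of it is basically just mdadm plus a bumped MTU; a sketch with made-up device names (jumbo frames also need support on the switch and both NICs):

                                # Four-disk software RAID 10 for the storage box
                                mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
                                # Jumbo frames on the dedicated storage network interface
                                ifconfig eth1 mtu 9000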
                              Last edited by drag; 04-22-2009, 04:48 PM.

