
Intel Core i7 Virtualization On Linux

  • #11
    Originally posted by Zhenech
    Thanks for the nice article, quite what I was looking for, as we need some virtualisation at work, and the main war is between vbox and kvm

    However, I'm missing some things which you can hopefully add/clarify:
    1. Was VT enabled in VirtualBox? IIRC 2.1.4 doesn't do this by default. If it wasn't, would you rerun the tests where VirtualBox was *really* bad compared to the others?
    In VBox 2.2 it is enabled by default.


    VT-x/AMD-V are enabled by default for newly created virtual machines

    • #12
      Originally posted by Louise
      So you had a LUN for each LV ???

      If so, how could that be?

      Btw. I have never tried to use iSCSI, LUN or LVM... Yet
      it's pretty simple.

      With iSCSI you configure a block device to be exported over the network. LUN is just the SCSI term for identifying a drive.

      So what I did was just have a simple software RAID 5 array that I divided up using logical volume management. I'd create a logical volume to be used for a VM, then configure the iSCSI Enterprise Target to use that logical volume as a drive that gets exported over the network.

      With Xen I'd then use the Linux kernel's iSCSI support on my desktop to access it and then use one of those as a raw device for each guest VM.

      With KVM and the virt-manager stuff they have it set up so that the VM can be configured to use iSCSI directly. I have not tried it with KVM yet.
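
      To give a rough idea of the target side, it boils down to something like this (the volume group, LV, size, and IQN below are just made-up examples, and it assumes the iSCSI Enterprise Target package with its config typically in /etc/ietd.conf):

        # carve a logical volume out of the RAID-backed volume group
        lvcreate -L 20G -n vm1-disk vg0

        # /etc/ietd.conf -- export that LV as LUN 0 of a target
        Target iqn.2009-04.lan.example:storage.vm1
            Lun 0 Path=/dev/vg0/vm1-disk,Type=fileio

      After restarting the target daemon, the initiator on the desktop sees that LUN as a plain disk.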

      --------------------------

      This sort of thing is important if you're going to use VMs for business or whatever and want to take advantage of the "live migration" features. One of the requirements is that you have a common storage backend so that the VM has consistent access to its storage after the move.

      Then, for reliability, you'd have to take advantage of other features like Ethernet bonding, Linux multipath, and maybe DRBD or other storage replication features so that you'd have the ability to replicate storage and create highly reliable storage networks. All the details are beyond me, though; I've done research but no actual implementation of stuff like that.

      That's one of the kick-ass things about KVM: you can more easily take advantage of all the little features, drivers, and hardware support that have been developed for Linux server use in the enterprise.

      ---------------------

      Oh, and for these block-level protocols like iSCSI, the stuff from Red Hat's clustering packages, or Fibre Channel... the security for these things sucks huge donkey balls. Their 'security' features are more for avoiding accidents and not so much for stopping attackers. So for security purposes you'd generally want to use a private network just for the storage, which is good for performance too. Of course for more casual uses like home usage it's not that important.
      Last edited by drag; 22 April 2009, 10:05 AM.

      • #13
        It would be interesting to re-run the KVM test restricting it to a single CPU in order to compare the overhead of virtualization.

        • #14
          Originally posted by drag
          With iSCSI you configure a block device to be exported over the network. LUN is just the SCSI term for identifying a drive.
          OK, so LUN could be "/vol/iscsivol/tesztlun0" but is never exposed to the client?

          Originally posted by drag
          So what I did was just have a simple software RAID 5 array that I divided up using logical volume management. I'd create a logical volume to be used for a VM, then configure the iSCSI Enterprise Target to use that logical volume as a drive that gets exported over the network.

          With Xen I'd then use the Linux kernel's iSCSI support on my desktop to access it and then use one of those as a raw device for each guest VM.
          Very interesting. Could you post the config file for the Xen guest here?

          RHEL and Novell use different ways of booting a Xen guest, and they also use different ways to access the image. One uses "file:" and the other uses "tap:".

          I have also read that when using LVM I should use "phy:".

          But what does it look like with iSCSI?

          What OS are you using as your Xen host?

          • #15
            OK, so LUN could be "/vol/iscsivol/tesztlun0" but is never exposed to the client?

            Ya pretty much like that.

            As you know iSCSI is just SCSI commands encapsulated in TCP packets.

            The 'server' portion of iSCSI is called the 'iSCSI target' and the 'client' is called the 'iSCSI initiator'. Originally it was intended that you'd just slap some SCSI drives into a network adapter box and then computers could access the drives over the network with their own hardware adapters.

            But you can get software targets and initiators, also. So I used the software 'iSCSI Enterprise Target' for the server portion and then the built-in iSCSI initiator in the Linux kernel.

            And Linux being Linux, any block device can be used as a drive: a drive partition, a USB flash drive, a file-backed loop device, a logical volume, etc. It's all the same, more or less.

            Using the iSCSI protocol I can then just export any block device I feel like over the network.

            So ya, then it shows up as /dev/whatever. It's been a long, long time since I did this, years and years, so I don't remember all the details and I am sure that now with udev and stuff it's changed.
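
            The initiator side is roughly this (the IP address and IQN are placeholders, and it assumes the open-iscsi tools are installed):

              # discover the targets the storage box exports
              iscsiadm -m discovery -t sendtargets -p 192.168.1.10

              # log in; the LUN then shows up as a new /dev/sdX
              iscsiadm -m node -T iqn.2009-04.lan.example:storage.vm1 -p 192.168.1.10 --login

            dmesg or /dev/disk/by-path/ tells you which device name it ended up with.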

            So for Xen it would be the same as setting up a hard drive partition for you to use. All the differences and network details are abstracted away from anything to do with the VM or whatever.
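
            So for the config file question above: the disk line in the guest config just uses "phy:" and points at whatever device node the iSCSI LUN showed up as on the host. Something roughly like this (the device path is only an example):

              # in the domU config, e.g. /etc/xen/vm1.cfg
              disk = [ 'phy:/dev/sdb,xvda,w' ]

            Using the /dev/disk/by-path/ name for the iSCSI LUN instead of /dev/sdb is safer, since the sdX letters can move around between boots.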

            -------------------------------

            If you go and use virt-manager you'd see that when setting up hardware for a VM it does have iSCSI support. I don't know if it lets the VM use it directly as a SCSI device or if it mounts it as a block device in the host system to be used as a generic drive or whatever. I haven't tried that out yet.
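
            From what I can tell it goes through libvirt's storage pools, so defining an iSCSI pool by hand would look roughly like this (untested on my end; the pool name, host address, and IQN are placeholders):

              <pool type='iscsi'>
                <name>vmpool</name>
                <source>
                  <host name='192.168.1.10'/>
                  <device path='iqn.2009-04.lan.example:storage.vm1'/>
                </source>
                <target>
                  <path>/dev/disk/by-path</path>
                </target>
              </pool>

            Then "virsh pool-define vmpool.xml" and "virsh pool-start vmpool" should make the LUNs show up as volumes you can hand to a guest.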

            ------------------------------

            There are other Network-Block protocols that Linux supports... Like:

            * ATA over Ethernet (like iSCSI, but instead of SCSI commands in TCP it uses ATA commands in Ethernet frames... it should have less overhead, but iSCSI is more mature, has better support, and ends up being faster)

            * NBD -- network block device (see the little sketch after this list)

            * GNBD -- GFS network block devices... this is supplied as part of Red Hat's GFS v1/v2 cluster-aware file system.

            iSCSI is nice because it's standard and lots of different devices and OSes support it.
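
            NBD is the simplest of those to try out. A minimal sketch (the hostname, port, and image path are made up; it assumes the nbd-server and nbd-client packages plus the nbd kernel module):

              # on the storage box: export a file or block device on an arbitrary port
              nbd-server 2000 /srv/images/test.img

              # on the client: attach it as /dev/nbd0
              modprobe nbd
              nbd-client storagebox 2000 /dev/nbd0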

            ----------------------------

            Just keep in mind that if you want to use iSCSI for real systems (not VMs) over the network, you'll still need a local hard drive for swap. There are nasty deadlocks and race conditions associated with booting from the network and running out of RAM (you need memory to read data from the network, but you need data off the network for storage, but you need the storage for swap because you're out of RAM... etc.). So having a local drive for swap solves that.

            And if you want to have multiple OSes access the same file systems on the same block device at the same time, you'll need a cluster-aware file system like OCFS2 or GFS. (OCFS2 is in the Linux kernel right now; GFS is part of Red Hat's clustering package, along with GNBD and CLVM, the cluster logical volume manager.) That way they coordinate file locking and stuff like that so they don't accidentally corrupt the file system by stepping on each other's toes.
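
            With OCFS2, for example, it comes down to something like this (a rough sketch; it assumes the o2cb cluster stack is already set up in /etc/ocfs2/cluster.conf on every node, and the device and mount point are placeholders):

              # format once, from any one node
              mkfs.ocfs2 -L vmstore /dev/sdb

              # then mount it on every node that needs it
              mount -t ocfs2 /dev/sdb /srv/vmstore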

            ----------------------------

            If you want to play around with iSCSI or other things like that, the best and easiest way may be to use OpenFiler.

            It's http://www.openfiler.com.

            Very clever, very nice to use.

            Just get your old PC, get a few 1TB drives, slap them in, and use OpenFiler to configure them into RAID 10, and you'd have very kick-ass network storage for holding dozens and dozens of VMs.
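
            OpenFiler does all of that from its web UI, but for reference the by-hand equivalent with Linux software RAID is roughly this (the disk names are examples and four drives are assumed):

              # build a 4-disk RAID 10 array out of whole drives
              mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

              # then carve it up with LVM and export the LVs over iSCSI as above
              pvcreate /dev/md0
              vgcreate vg0 /dev/md0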

            Very nice. You should be able to get very close to native performance using that. With jumbo frames, decent hardware, and nice tuning you should be able to get about 60-80 MB/s read/write speeds over gigabit Ethernet for the host system. Of course the guest systems have limitations based on the VM technology.
            Last edited by drag; 22 April 2009, 04:48 PM.

            • #16
              Originally posted by deanjo
              In VBox 2.2 it is enabled by default.

              http://www.virtualbox.org/wiki/Changelog
              Right, in 2.2 it is, and I wonder whether the Phoronix guys enabled it in their 2.1.4 test too.

              • #17
                Wow.

                I just found out that in Fedora 11's version of virt-manager and KVM..

                It now supports Intel's VT-d and AMD's IOMMU chipset-level virtualization.

                That means with proper hardware support KVM/virt-manager has the ability to hand over control of PCI devices directly to the guest virtual machine.

                This means that Ethernet, wireless, USB controllers, drive controllers, graphics cards, and a whole bunch of other devices can be handed over to Windows or Linux running in a virtual machine environment.

                THIS IS COOL. But I don't think I have anything that supports that yet.

                Anybody know the details about this stuff or what hardware supports it? I am having a hard time finding out what chipsets support it.
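
                The only rough check I know of on the Intel side (this assumes a kernel built with VT-d/DMAR support) is to boot with intel_iommu=on on the kernel command line and then see whether the kernel actually finds the DMAR tables:

                  dmesg | grep -i -e dmar -e iommu

                If nothing DMAR-related shows up, the board/BIOS most likely doesn't expose VT-d even if the CPU and chipset could do it.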

                • #18
                  Were the kvm/qemu "virtio" drivers used in any of the tests?

                  I've been using kvm in production environments for the past 3 months. The "virtio" drivers make a huge difference. I'm wondering if the kvm tests used the default drivers or the paravirtualized "virtio" drivers.
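
                  For reference, the difference is just in how the guest's disk and NIC are declared. On an old-style qemu-kvm command line it's roughly this (the image name and user-mode networking are placeholders):

                    # fully emulated devices: IDE disk and e1000 NIC
                    qemu-kvm -m 1024 -drive file=guest.img,if=ide -net nic,model=e1000 -net user

                    # paravirtualized virtio disk and virtio NIC
                    qemu-kvm -m 1024 -drive file=guest.img,if=virtio -net nic,model=virtio -net user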

                  Cheers,

                  Alex C.

                  • #19
                    @drag

                    WOW! What a fantastic post!!!!

                    I don't know what to say. It covers everything

                    Thanks a lot

                    • #20
                      Thanks for your good review and benchmarks.
                      Can I install KVM or VirtualBox on my desktop, which has a processor without virtualization technology?
                      If I can, what are the practical uses of these technologies on the new processors?

                      Thanks.
