
KVM Benchmarks On Ubuntu 14.10

  • KVM Benchmarks On Ubuntu 14.10

    Phoronix: KVM Benchmarks On Ubuntu 14.10

    For those wondering about the modern performance cost of using KVM on Ubuntu Linux for virtualizing a guest OS, here are some simple benchmarks comparing Ubuntu 14.10 in its current development stage with the Linux 3.16 versus running the same software stack while virtualized with KVM and using virt-manager.


  • #2
    Re: KVM Benchmarks

    What I would love to see (and what is crying out to be benchmarked against the hypervisor solutions) is container-based virtualization.

    It would be a real eye opener to compare the 30% performance hit of hypervisor virtualization solutions with the 1-2% performance hit of container based virtualization (openvz, lxc). Of course it wouldn't be a fair fight. It would be rather like pitting a cheetah against a turtle, but the contrast really ought to be shown, just to enlighten the community about the great performance available from OS-level virtualization.
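For reference, the container side of such a comparison could be stood up with the stock LXC tools roughly as below. The container name, template arguments, and the command run inside are illustrative; the commands are only composed as strings here, not executed:

```shell
#!/bin/sh
# Hypothetical sketch: create, start, and enter an LXC container for
# benchmarking. Names and template options are examples; commands are
# composed as strings and printed, not run.
CREATE="lxc-create -n bench-ct -t download -- -d ubuntu -r utopic -a amd64"
START="lxc-start -n bench-ct -d"
EXEC="lxc-attach -n bench-ct -- uname -a"
printf '%s\n' "$CREATE" "$START" "$EXEC"
```

The same benchmark suite could then be run inside the container and on the host for a direct overhead comparison.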

    Comment


    • #3
      Originally posted by david_lynch View Post
      What I would love to see (and what is crying out to be benchmarked against the hypervisor solutions) is container-based virtualization.

      It would be a real eye opener to compare the 30% performance hit of hypervisor virtualization solutions with the 1-2% performance hit of container based virtualization (openvz, lxc). Of course it wouldn't be a fair fight. It would be rather like pitting a cheetah against a turtle, but the contrast really ought to be shown, just to enlighten the community about the great performance available from OS-level virtualization.
      Where did you see that 30%? The biggest hit here is ~24%, on the compile benchmark; on the other hand, the lowest was less than half a percent.

      Also, full virtualization is basically a container too, just a complete one.

      Comment


      • #4
        Originally posted by david_lynch View Post
        What I would love to see (and what is crying out to be benchmarked against the hypervisor solutions) is container-based virtualization.

        It would be a real eye opener to compare the 30% performance hit of hypervisor virtualization solutions with the 1-2% performance hit of container based virtualization (openvz, lxc). Of course it wouldn't be a fair fight. It would be rather like pitting a cheetah against a turtle, but the contrast really ought to be shown, just to enlighten the community about the great performance available from OS-level virtualization.
        Well, the real problem right now is that, other than VMware and VirtualBox, there's not really an easy user-facing way to take advantage of virtualization on Linux, AFAIK. You have to muck about on the command line for KVM, Xen, and the various containers, and it's not really integrated with any of the tooling, which is something really big I have to give to PC-BSD. PC-BSD has a rather nice GUI for setting up jails and administering them graphically (including automated setup of a "ports jail"), and with the latest quarterly update the App-Cafe integrates with the jail system so that you can install packages either to the base system or into a jail from one centralized location. There are still some rough edges, like the shortcuts dropped into the menu system assuming installation on the base system, but that'll be ironed out. The point, however, is that it lowers the barrier to entry enough that desktop/workstation users can actually use and benefit from it, as opposed to just reaching for VirtualBox whenever we want virtualization.

        Comment


        • #5
          Originally posted by Luke_Wolf View Post
          Well, the real problem right now is that, other than VMware and VirtualBox, there's not really an easy user-facing way to take advantage of virtualization on Linux, AFAIK. You have to muck about on the command line for KVM, Xen, and the various containers, and it's not really integrated with any of the tooling, which is something really big I have to give to PC-BSD. PC-BSD has a rather nice GUI for setting up jails and administering them graphically (including automated setup of a "ports jail"), and with the latest quarterly update the App-Cafe integrates with the jail system so that you can install packages either to the base system or into a jail from one centralized location. There are still some rough edges, like the shortcuts dropped into the menu system assuming installation on the base system, but that'll be ironed out. The point, however, is that it lowers the barrier to entry enough that desktop/workstation users can actually use and benefit from it, as opposed to just reaching for VirtualBox whenever we want virtualization.
          BSD Jails, OpenVZ, Solaris Zones, and other multi-client-capable kernels do not compare well to full machine virtualization. Different goals, different solutions.

          Comment


          • #6
            Originally posted by Luke_Wolf View Post
            Well, the real problem right now is that, other than VMware and VirtualBox, there's not really an easy user-facing way to take advantage of virtualization on Linux, AFAIK. You have to muck about on the command line for KVM, Xen, and the various containers, and it's not really integrated with any of the tooling, which is something really big I have to give to PC-BSD. PC-BSD has a rather nice GUI for setting up jails and administering them graphically (including automated setup of a "ports jail"), and with the latest quarterly update the App-Cafe integrates with the jail system so that you can install packages either to the base system or into a jail from one centralized location. There are still some rough edges, like the shortcuts dropped into the menu system assuming installation on the base system, but that'll be ironed out. The point, however, is that it lowers the barrier to entry enough that desktop/workstation users can actually use and benefit from it, as opposed to just reaching for VirtualBox whenever we want virtualization.
            oVirt is the GUI you're looking for. Or, for a simpler interface, just virt-manager. No CLI necessary.
            The App-Cafe sounds interesting. We'll get transparent containerization before too long, however.
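For what it's worth, both tools speak ordinary libvirt connection URIs, so pointing them at a local or a remote host is a single flag. The host and user names below are examples; the commands are only composed and printed:

```shell
#!/bin/sh
# Hypothetical sketch: virt-manager connection URIs for a local and a
# remote libvirt/KVM host. Host and user names are examples; commands
# are composed as strings and printed, not run.
LOCAL="virt-manager -c qemu:///system"
REMOTE="virt-manager -c qemu+ssh://admin@kvmhost.example/system"
printf '%s\n' "$LOCAL" "$REMOTE"
```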

            Comment


            • #7
              Originally posted by dibal View Post
              BSD Jails, OpenVZ, Solaris Zones, and other multi-client-capable kernels do not compare well to full machine virtualization. Different goals, different solutions.
              Oh, absolutely, but conversely there are a lot of things that containers are good at that full machine virtualization honestly sucks at.

              Say I want to have a stable, clean base system that allows easy upgrades through OS versions in a predictable manner, and then, separated from that, I want to run my applications inside some form of virtualization in that cleanly separated manner. Or, as another example, as a developer I might want to do my development inside some sort of virtualization so that deployment can be tested and I can see how the software interacts with a "clean" system.

              These are perfect jobs for a container. However, under Linux I'd traditionally turn to a hypervisor, because my experience with chroots and the like has been one of excessive setup work compared to feeding an ISO into a hypervisor and hitting Next a few times. The problem, of course, is that hypervisors are heavy, requiring a dedicated portion of RAM and usually running up the CPU, which makes them an unattractive solution, so I haven't really bothered.

              With PC-BSD, setting up containers is even more trivial than installing an OS into a hypervisor, and it gives me nice, easy GUI tools to administer them between Warden and the App-Cafe. Further, being a container rather than a hypervisor, there's no real extra overhead placed on the system in a way that significantly impacts me, which means I'm actually going to use said containers. Obviously, if I want to test on different OSes I'd need to pull out a hypervisor, but for development purposes a hypervisor is overkill (and too much overhead).

              I'm sticking with openSUSE for the time being, but once FreeBSD picks up Radeon dynamic power management support and the CentOS 6 version of the Linuxulator is stable, I'm very seriously considering switching over, because I really like the direction PC-BSD is heading, and having easy container support is a killer feature for me as a developer. We'll see if the Linux version of this stuff is ready by that point (probably FreeBSD 11?).

              Comment


              • #8
                Originally posted by liam View Post
                oVirt is the GUI you're looking for. Or, for a simpler interface, just virt-manager. No CLI necessary.
                The App-Cafe sounds interesting. We'll get transparent containerization before too long, however.
                Thanks, I'll take a look at that

                Comment


                • #9
                  I use KVM extensively in the lab and in production, and it's really quite simple to manage. Most of the lab boxes have a base image on them, so the other guys have no trouble making clones using virt-manager. The newest versions are actually quite featureful, so you can do all sorts of things like USB passthrough and adding/removing hardware, etc., meaning that editing XML files is not necessary just for spinning up a few VMs on a box. The only thing that might require CLI work is setting up bridge interfaces, depending on the host distro; you can do it directly in virt-manager in many cases.
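For the bridge case mentioned, a rough sketch with iproute2 might look like the following. The interface names are examples, and the commands are only collected into a string here, since actually running them would reconfigure host networking:

```shell
#!/bin/sh
# Hypothetical sketch: create a bridge and enslave the host NIC so VMs
# can share it. Interface names are examples; the commands are only
# collected into a string and printed, never executed here.
CMDS="ip link add name br0 type bridge
ip link set eth0 master br0
ip link set br0 up
ip link set eth0 up"
printf '%s\n' "$CMDS"
```

The guest's NIC is then pointed at br0 in its libvirt configuration.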

                  All that said, I still use the CLI a lot for remote boxes. The virsh shell is very powerful and easy to understand. Then again, virt-manager can connect to remote KVM hosts over SSH.
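A few typical virsh invocations against a remote host over SSH, sketched below; the URI and domain name are examples, and the commands are only composed as strings:

```shell
#!/bin/sh
# Hypothetical sketch: managing a remote KVM host with virsh over SSH.
# The URI and domain name are examples; commands are composed as
# strings and printed, not run.
URI="qemu+ssh://root@kvmhost.example/system"
LIST="virsh -c $URI list --all"
START="virsh -c $URI start guest01"
CONSOLE="virsh -c $URI console guest01"
printf '%s\n' "$LIST" "$START" "$CONSOLE"
```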

                  Comment


                  • #10
                    Bench shows what it should!

                    The benchmark shows what it should: on computational loads it's >90% of raw hardware performance. When it comes to I/O it can get a bit worse, just as with any other full virtualisation.

                    And I'm sorry to inform the BSD lovers, but jails totally suck. In terms of features (the ability to set resource quotas/policies, control over which parts of the system are virtualised, etc.) they lose even to the bare Linux kernel, where things like LXC (and Docker on top of it, etc.) would beat the crap out of any BSD. And if that is not enough, we can see that OpenVZ took over practically the entire VPS market, even though jails existed long before it appeared.

                    Full virtualisation is simply another kind of solution. You see, I can pass a physical PCI device through to a VM, and the VM will attach it as if it were part of the virtual machine. The VM can then use its own kernel driver to deal with that device, as if it were another physical computer with that PCI device attached to its bus. This way you can use the device in the VM while the host has zero knowledge of how to deal with it; the guest can get total ownership of the device, if desired. Good luck doing something like that with containers. Containers are faster, but they only provide limited isolation and generally can only change the user-mode parts of the system, while the kernel-mode parts are shared by the whole system. OTOH, a full VM allows totally different OSes to boot side by side.
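For illustration, PCI passthrough in libvirt boils down to a hostdev entry in the guest's domain XML roughly like the sketch below; the address values here are an example for a device at host address 0000:01:00.0:

```xml
<!-- Sketch of a libvirt <hostdev> entry for PCI passthrough.
     The domain/bus/slot/function values are an example for a
     device at 0000:01:00.0 on the host. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```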

                    And generally speaking, the BSDs suck at both full virtualisation and containers, so dealing with BSDs is a waste of time. They are pathetic losers who implemented jails ages ago... and then lost the whole market to Linux by being ignorant, selfish nuts with crappy project management.
                    Last edited by 0xBADCODE; 01 August 2014, 12:30 AM.

                    Comment
