Multi-Core Scaling In A KVM Virtualized Environment


  • #11
    12 cores?!

    I can't help but find this benchmark misleading.
    12 threads are not 12 cores, and furthermore turbo kicks in when only a few threads are running.



    • #12
      Originally posted by Xanbreon View Post
      I do wonder how much the different generations of hyperthreading hurt or help performance, P4 HT, Atom 330 HT, Atom D510 HT, Early i7 HT (eg 920), and later i7 HT (eg the 960 you have and maybe a 860 as well).
      I can tell you that the Atom benefits greatly from HT. It's probably the CPU that benefits the most from it, actually. I did some tests a while back with Cinebench on an Atom, and using 2 threads gave about 60% more performance than a single thread, IIRC. The same goes for compression.



      • #13
        Most of the results are quite expected.
        Lastly, with the x264 media encoding benchmark, with one and two cores enabled the performance was close between the host and guest, but the VT-x virtualized guest began to stray as the core count increased.
        What did you expect, that x264 under KVM would somehow scale better than it does without? When the difference is 2 fps with one core, it's only to be expected that it grows to around 8 fps with four cores.

        I also looked at the "TTSIOD 3D Renderer" and the Graphicsmagick Resizing benchmark. Those don't scale very well on the host.
        Using basic statistics, I made a graph of how I expected KVM to scale:

        The situation seems more complex than I assumed, but the graph is expected to drop again when the application doesn't scale well on the host.
        (In the table, the KVM values are calculated, except for the first one. I estimated all the other values from the graphs.)
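
        For what it's worth, a minimal sketch of that kind of extrapolation, assuming the guest keeps the same relative overhead it shows with one core; all numbers below are made up for illustration, only the method mirrors what I did:

        Code:
        # Assume the KVM guest keeps the same relative overhead it shows with
        # one core, and scale the measured host results by that ratio.
        # All numbers are hypothetical.
        host_fps = {1: 10.0, 2: 19.5, 4: 37.0, 6: 52.0}   # host result per core count
        kvm_fps_one_core = 8.0                             # measured guest result, 1 core

        ratio = kvm_fps_one_core / host_fps[1]
        expected_kvm = {cores: fps * ratio for cores, fps in host_fps.items()}
        print(expected_kvm)   # roughly {1: 8.0, 2: 15.6, 4: 29.6, 6: 41.6}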



        • #14
          Originally posted by mtippett View Post
          I liked the results, the historic statement that virtualization doesn't work with multiple CPUs has now been reduced to "for some workloads" it collapses at some point.
          Hmm, I'm not sure you can conclude that based on these test results. When I read them, I came to a different conclusion: when you increase the number of cores dedicated to virtual guests, you also increase the overhead of scheduling and handling the virtual guest(s) on the host side. With *zero* cores available/dedicated to the host system, everything slows down as you add more cores to the guests and ignore the host.

          It would be interesting to see whether these poorly performing tests show the same curve when they are run with 5 cores for the guest + 1 for the host, and with 11 cores for the guest + 1 for the host. If they do not share the same curve, the conclusion should be "don't forget to dedicate some resources to the host" and not "for some workloads virtualization collapses at some point".
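
          One rough way to keep a core free for the host in such a test is to restrict the guest's QEMU process to a subset of cores. A minimal Python sketch on Linux using os.sched_setaffinity; the PID and core list are placeholders, not anything from the article:

          Code:
          import os

          # Placeholder PID of the guest's QEMU process (look it up with ps/pgrep).
          qemu_pid = 12345

          # Cores 1..11 go to the guest; core 0 stays free for host-side work
          # such as the hypervisor's own scheduling and I/O threads.
          guest_cores = set(range(1, 12))
          os.sched_setaffinity(qemu_pid, guest_cores)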



          • #15
            Another interesting thing is that in most cases, a benchmark that scaled nicely to six cores also benefited from enabling HT. (In other words, it looks like many of the benchmarks where HT decreased performance would also have done badly with twelve physical cores.)



            • #16
              It was a very good test, Michael.

              Thank you for proving me wrong!



              • #17
                Typically in virtualization software, the guest operates on 'virtual CPU cores'. These virtual CPUs are (depending on the hypervisor) treated as threads which the hypervisor can schedule. Depending on the workload, this scheduling can have bad results (the hypervisor scheduler can fight with the guest OS scheduler); expect issues under high load. A solution is to use what is usually called 'CPU pinning', which lets you lock each virtual CPU to a specific physical core. This might be something to look into.
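
                For example, with KVM guests managed through libvirt, pinning can be done from the Python bindings. A minimal sketch, assuming the libvirt Python module is installed and a running guest named "guest1" (the guest name and the vCPU-to-core mapping are just placeholders):

                Code:
                import libvirt

                # Connect to the local QEMU/KVM hypervisor and find the guest.
                conn = libvirt.open("qemu:///system")
                dom = conn.lookupByName("guest1")

                # Number of physical CPUs on the host (third field of node info).
                ncpus = conn.getInfo()[2]

                # Pin vCPU 0 to physical core 0 and vCPU 1 to physical core 1.
                # The cpumap is a tuple of booleans, one entry per host CPU.
                for vcpu, pcpu in enumerate([0, 1]):
                    cpumap = tuple(i == pcpu for i in range(ncpus))
                    dom.pinVcpu(vcpu, cpumap)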



                • #18
                  How about Xen?

                  Dear Phoronix,

                  A few years ago, there was a paper comparing the performance of Xen, KVM, VirtualBox, Linux-VServer, and OpenVZ. In that paper, KVM did not scale well with multiple cores either. I am not sure if it is the same Linux issue.

                  Does Phoronix plan to repeat the test with Xen 4.0.1 to see if it has the same problem? Xen has always claimed scalability and stability up to 128 cores. If this is still the difference between KVM and Xen, KVM will have a hard time replacing Xen on major cloud servers.



                  • #19
                    Originally posted by soldcake View Post
                    A few years ago, there was a paper comparing the performance of Xen, KVM, VirtualBox, Linux-VServer, and OpenVZ. In that paper, KVM did not scale well with multiple cores either. I am not sure if it is the same Linux issue.

                    Does Phoronix plan to repeat the test with Xen 4.0.1 to see if it has the same problem? Xen has always claimed scalability and stability up to 128 cores. If this is still the difference between KVM and Xen, KVM will have a hard time replacing Xen on major cloud servers.
                    Phoronix didn't leave any resources for the host operating system in their test, so the results are invalid and you can't use them for anything or make any conclusions based on them.

