KVM Virtualization Performance With Linux 2.6.31


  • #16
    Originally posted by SEJeff View Post
    How can you test the IO performance of a virtualized Linux host and not use virtio? It is akin to testing the top speed of a Porsche on the spare doughnut... It makes 0 sense.

    Can you do some honest benchmarking and use virtio? These numbers are completely ignorant at best. If you want good IO on kvm, you use the paravirt drivers.
    I don't agree with you.

    Lay people examining comparative results are the target audience for most Phoronix articles. The key is "available at arm's length" benchmarking, i.e. easily available, out of the box.

    If you want to create a recipe that shows KVM at its best capability, can I suggest the following:

    1) Identify all the steps needed for the tuned setup
    2) Create a fully described list of directions for building the tuned setup
    3) Execute the list, keeping track of the length of time to do it.
    4) Repeat the benchmark (Michael can probably supply a PTS test suite to do so)
    5) Post the collection here.

    I'd be interested to see the effort and results.



    • #17
      Originally posted by ian.woodstock View Post
      Wrong.
      SQLite didn't run faster under VM than native.
      You're just seeing misleading results because the host was caching.
      The SQLite test was faster under the VM.

      Although I don't have explicit information on the benchmark, it would appear that the guest is issuing the operations synchronously (just as the native run does), but the VM's disk driver is caching and batching the writes.

      This would result in a faster experience under a VM; however, you take a data-integrity risk that you don't take in the native case.
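
      To make "synchronous" concrete: an SQLite commit essentially boils down to a write followed by an fsync(). Below is a rough sketch of that pattern (an illustration of my own, not taken from the article or the benchmark).

      #include <fcntl.h>
      #include <unistd.h>

      /* Write a buffer and force it onto stable storage before returning.
       * If a layer underneath (for example the VM's disk cache) silently drops
       * the fsync(), this returns quickly but the data may only live in host RAM. */
      int durable_write(const char *path, const void *buf, size_t len)
      {
          int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
          if (fd < 0)
              return -1;
          if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
              close(fd);
              return -1;
          }
          return close(fd);
      }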

      I would expect that it is possibly a bug in the KVM driver stack; however, any SQLite-based application would experience the same performance boost.

      So this sort of benchmarking does show real value and doesn't mislead. It highlights a questionable delta between host and guest, which consequently warrants deeper examination.

      The net result is that currently, out-of-the-box KVM support is slow for most operations except some CPU-bound ones, but even more concerning, it potentially risks data by making synchronous IO operations asynchronous.

      Regards,

      Matthew
      Last edited by mtippett; 09-22-2009, 09:50 PM. Reason: Added "a" to asynchronous in the last sentence.



      • #18
        Originally posted by mtippett View Post
        The net result is that currently, out of the box KVM support is slow for most operations except some CPU bound operations, but even more concerning it potentially risks data by not making synchronous IO operations synchronous.
        Seems I interpreted the IO results wrong then. But yeah, I do agree that KVM is likely slower for most operations (though this test only loosely covers the real-life usage it relates to). As far as I've understood, the virtualization extensions should only really help with the CPU-bound operations.
        I'd have assumed native would cache and batch too, though, unless you explicitly flush.



        • #19
          Originally posted by nanonyme View Post
          Wrong, according to the test SQLite ran magnitudes better under VM than native. Most CPU-intensive tasks were just as fast under VM, while most hard-disk-intensive tasks were slower under VM. This would IMO hint towards a need to develop better virtualization technology for hard disks.
          If you LD_PRELOAD a library that implements fsync() as a no-op, you can get the same speedup on the host, without virtualization! I suggest you use this handy library someone provided. Heck, it might be better to just patch your glibc and not mess with pesky preloading. Instant speedup!
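
          For illustration, a minimal sketch of such a shim (assuming gcc and glibc; the file name and build line are made up, not something anyone in this thread provided):

          /* nosync.c -- build: gcc -shared -fPIC -o libnosync.so nosync.c
           * run:              LD_PRELOAD=./libnosync.so <benchmark command>
           * Overrides fsync()/fdatasync() with no-ops, so "synchronous" writes
           * never wait for the disk -- fast, and exactly as unsafe as it sounds. */
          int fsync(int fd)
          {
              (void)fd;   /* pretend the data already reached stable storage */
              return 0;
          }

          int fdatasync(int fd)
          {
              (void)fd;
              return 0;
          }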

          You obviously don't know what you're talking about. Read a little. Actual technical stuff, not "benchmarks" done by a monkey trained to run the single-click "suite" without knowing what is measured or what it means.

          For the record, that result just means all their numbers are invalid and completely useless.



          • #20
            Originally posted by wolf550e View Post
            You obviously don't know what you're talking about. Read a little. Actual technical stuff, not "benchmarks" done by a monkey trained to run the single-click "suite" without knowing what is measured or what it means.
            That single case was brought up as a peculiarity, not as a recommendation to use SQLite under a VM. It kind of jumps out when you get immensely better results in the VM than native, which is what I meant by that first bit. It was a single case where the VM seemed faster, so I picked it up. Naturally, native being tons faster than the VM is uninteresting (except for CPU-bound tasks) since it's expected. What I continued with was that I intuitively assumed that reading an SQLite database file inside an image file on the hard disk would be slower than just reading a file on the hard disk. (On further thought I'm not even sure this matters; it might not even significantly increase fragmentation.)
            Also, read my message again: my conclusion was that virtual I/O needs to improve, not that real I/O should take its cue from virtual I/O. Even though the basis for my conclusion was wrong, the conclusion was apparently right.
            Last edited by nanonyme; 09-23-2009, 05:42 AM.



            • #21
              Originally posted by leidola View Post
              What about a comparison of different virtual machines, let's say:

              qemu-kvm vs. VirtualBox (with guest additions) vs. VMware (with guest additions).

              By the way, was the virtual hard disk a file or a block device?
              That would be lots of fun, with each camp saying you need to tune each host and each guest further to make it a fair comparison. Hell, throw in Xen just for good measure.

              Matt



              • #22
                Originally posted by nanonyme View Post
                That single case was brought up as a peculiarity, not as a recommendation to use SQLite under a VM. It kind of jumps out when you get immensely better results in the VM than native, which is what I meant by that first bit. It was a single case where the VM seemed faster, so I picked it up. Naturally, native being tons faster than the VM is uninteresting (except for CPU-bound tasks) since it's expected. What I continued with was that I intuitively assumed that reading an SQLite database file inside an image file on the hard disk would be slower than just reading a file on the hard disk. (On further thought I'm not even sure this matters; it might not even significantly increase fragmentation.)
                Also, read my message again: my conclusion was that virtual I/O needs to improve, not that real I/O should take its cue from virtual I/O. Even though the basis for my conclusion was wrong, the conclusion was apparently right.
                You've captured my personal interest in benchmarking with this post.

                1) Benchmarking for numerical comparisons is somewhat borderline and interesting
                2) Outliers bear particular interest and investigation
                3) Investigation identifies areas for improvement/correction/isolation

                PTS has a growing list of these cases, where there is a "disbelief -> anger -> investigation -> acceptance -> change" cycle.

                Matt



                • #23
                  For those who have been following this thread, I dug around with the KVM and Ubuntu qemu-kvm maintainers.

                  It looks like "write-back" caching is turned on by default instead of the recommended "write-through".

                  This increases performance and usability, but it appears that it ultimately ignores requests for synchronous file IO. Although this is currently Ubuntu's default configuration, it effectively renders Ubuntu unsuitable for high-reliability workloads.
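
                  As a rough way to see this from inside a guest (a sketch of my own, not from the bug report): time a batch of small write-plus-flush transactions. On a disk that honors flushes, each iteration costs at least one physical write, so a guest finishing orders of magnitude faster than the host strongly suggests the flushes are being absorbed by the host's write-back cache.

                  /* flushprobe.c -- gcc -O2 -o flushprobe flushprobe.c  (add -lrt on older glibc) */
                  #include <fcntl.h>
                  #include <stdio.h>
                  #include <time.h>
                  #include <unistd.h>

                  int main(void)
                  {
                      int fd = open("probe.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
                      struct timespec t0, t1;
                      int i;

                      if (fd < 0)
                          return 1;
                      clock_gettime(CLOCK_MONOTONIC, &t0);
                      for (i = 0; i < 100; i++) {
                          char byte = (char)i;
                          write(fd, &byte, 1);
                          fdatasync(fd);   /* should not return until the data is on disk */
                      }
                      clock_gettime(CLOCK_MONOTONIC, &t1);
                      printf("%.3f s for 100 synchronous one-byte writes\n",
                             (double)(t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
                      close(fd);
                      return 0;
                  }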

                  I have raised a defect in Launchpad:

                  https://bugs.launchpad.net/ubuntu/+s...vm/+bug/437473

                  So the test was useful, and since it was clearly different from the other results, investigating it has yielded value.

                  Michael's assertions within the article are still correct. If you want a default SQLite install to absolutely fly, run it under KVM under Ubuntu. But be aware of the risk to your data. I personally don't expect Michael to invest the way I did in understanding and presenting the data behind each and every unusual result.

                  Regards,

                  Matthew



                  • #24
                    Apart from raising a defect in Ubuntu, I see nothing of interest in this benchmark. Who would be interested in virtualization other than companies? Those companies would certainly not waste their effort on a system that is a) not tuned for virtualization at its best and b) based on a defect that makes the system unreliable. Maybe I'm too negative, but that's how I see it.



                    • #25
                      Originally posted by VinzC View Post
                      Apart from raising a defect in Ubuntu, I see nothing of interest in this benchmark. Who would be interested in virtualization other than companies? Those companies would certainly not waste their effort on a system that is a) not tuned for virtualization at its best and b) based on a defect that makes the system unreliable. Maybe I'm too negative, but that's how I see it.
                      I personally use virtualization to run Windows within a window under Linux. I switch between Vista and XP by just starting an image. With VirtualBox, you can also pass USB devices through, which gives the guest driver-level access to them.

                      There are other reasons, such as "near-disposable" images that you can copy rather than re-install, and there is also the option of experimenting by installing into a VM.

                      These are definitely user-oriented uses rather than corporate ones.

                      The trigger for the caching policy change was users complaining about performance. I can't speak for Ubuntu, but there is a reasonably strong pull for them to invest here.

                      Regards,

                      Matthew



                      • #26
                        Originally posted by mtippett View Post
                        For those who have been following this thread, I dug around with the KVM and Ubuntu qemu-kvm maintainers.

                        It looks like "write-back" caching is turned on by default instead of the recommended "write-through".

                        This increases performance and usability, but it appears that it ultimately ignores requests for synchronous file IO. Although this is currently Ubuntu's default configuration, it effectively renders Ubuntu unsuitable for high-reliability workloads.

                        I have raised a defect in Launchpad:

                        https://bugs.launchpad.net/ubuntu/+s...vm/+bug/437473
                        So the test was useful, and since it was clearly different from the other results, investigating it has yielded value.

                        Michael's assertions within the article are still correct. If you want a default SQLite install to absolutely fly, run it under KVM under Ubuntu. But be aware of the risk to your data. I personally don't expect Michael to invest the way I did in understanding and presenting the data behind each and every unusual result.

                        Regards,

                        Matthew
                        The bug now says the problem should be fixed for the release.



                        • #27
                          Originally posted by cowmix View Post
                          The bug now says the problem should be fixed for the release.
                          Well, it is a lot more complex than that.

                          After initial disbelief, the KVM team eventually accepted that the issue was present in a current version and reproduced it themselves.

                          http://thread.gmane.org/gmane.comp.e...vm.devel/41353


                          The "closure" of the bug was really just a knee-jerk reaction by Anthony, who didn't want to look deeper into the issue because he did not believe it was valid.

                          In the end, cooler heads prevailed: they accepted there was an issue, then created and applied the patch:

                          http://thread.gmane.org/gmane.comp.e...vm.devel/41592

                          I didn't bother going back to re-open and fight the launchpad bug there.

                          FWIW, the issue is that fdatasync wasn't actually doing anything on some filesystems, so the patch now makes sure it does something. As always, if it looks free and too good to be true, it usually is.

                          Lots of other subtle lessons were learnt, but most of those are about human nature and communication.

                          Regards,

                          Matthew



                          • #28
                            Originally posted by mtippett View Post
                            Well, it is a lot more complex than that.

                            After initial disbelief, the KVM team eventually accepted that the issue was present in a current version and reproduced it themselves.

                            http://thread.gmane.org/gmane.comp.e...vm.devel/41353


                            The "closure" of the bug was really just a knee-jerk reaction by Anthony, who didn't want to look deeper into the issue because he did not believe it was valid.

                            In the end, cooler heads prevailed: they accepted there was an issue, then created and applied the patch:

                            http://thread.gmane.org/gmane.comp.e...vm.devel/41592

                            I didn't bother going back to re-open and fight the launchpad bug there.

                            FWIW, the issue is that fdatasync wasn't actually doing anything on some filesystems, so the patch now makes sure it does something. As always, if it looks free and too good to be true, it usually is.

                            Lots of other subtle lessons were learnt, but most of those are about human nature and communication.

                            Regards,

                            Matthew
                            So will the fix make it into the final release of 9.10? I



                            • #29
                              Originally posted by cowmix View Post
                              So will the fix make it into the final release of 9.10? I
                              Not sure; I would say it is probably too late (2 weeks before release is a bit late).

                              It's unclear if Dustin believed it was a concern or not.

