An Early Look At The L1 Terminal Fault "L1TF" Performance Impact On Virtual Machines

  • #11
    Michael, a new version of Xen was released last month that promised significant perf improvements, with many parts of the codebase being completely rewritten.

    I wonder if you plan to do another round of KVM vs. Xen benchmarks anytime soon?



    • #12
      VMware already has some KB entries up, including a tool for mitigation and some best practices...

      To summarize these articles: "buy more hardware to bypass the performance penalty" *argh*



      • #13
        Originally posted by Yoshi:
        VMware already has some KB entries up, including a tool for mitigation and some best practices...

        To summarize these articles: "buy more hardware to bypass the performance penalty" *argh*
        "The initial version of this feature will only schedule the hypervisor and VMs on one logical processor of an Intel Hyperthreading-enabled core. This feature may impose a non-trivial performance impact and is not enabled by default."

        Non-trivial indeed. This is an area that could use quite a bit more digging. If the impact of full mitigation for someone hosting untrusted VMs is as bad as it sounds, that's very bad news.
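
        For reference, on a Linux/KVM host (the VMware KBs above cover ESXi itself), a kernel carrying the L1TF patches exposes the equivalent state through sysfs. A minimal sketch for checking it, assuming those entries are present:

        # What the kernel reports for its own L1TF exposure and mitigation:
        cat /sys/devices/system/cpu/vulnerabilities/l1tf

        # Whether KVM flushes the L1D cache on VM entry (always/cond/never):
        cat /sys/module/kvm_intel/parameters/vmentry_l1d_flush

        # Whether SMT/Hyperthreading is currently active (1 = active):
        cat /sys/devices/system/cpu/smt/active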



        • #14
          Victor:

          If you want to have some fun with benchmarking, you can turn SMT/HT on and off on the fly now with fully patched kernels:

          echo "off" | sudo tee /sys/devices/system/cpu/smt/control
          echo "on" | sudo tee /sys/devices/system/cpu/smt/control

          In various bits of benchmarking I've done, I've seen the performance impact swing wildly: everywhere from bad (a 20%+ regression) to actually beneficial (a 10%+ improvement).
          The performance impact of Hyperthreading is insanely hard to reason about, and it varies drastically between architectures. HT works by using the otherwise idle parts of a core, so its value depends on how well the branch predictor is doing and on the workload: if both threads are working on the same thing, it might not provide much value, while if the threads are working on different things, you may gain from better utilisation of the core's components BUT lose out because the caches will be getting flushed constantly.
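
          If anyone wants to script that comparison, here's a minimal sketch: it just runs the same workload with SMT off and then on and prints the wall time for each run. BENCH is a placeholder for whatever benchmark you actually care about, and /usr/bin/time is assumed to be GNU time.

          # Placeholder workload; substitute your own benchmark command.
          BENCH="./my_benchmark"

          for state in off on; do
              # Toggle SMT via the runtime control added alongside the L1TF patches.
              echo "$state" | sudo tee /sys/devices/system/cpu/smt/control > /dev/null
              echo "SMT $state:"
              /usr/bin/time -f "%e seconds" $BENCH
          done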



          • #15
            What do I do to disable this?



            • #16
              Michael, did you use the 20180807 Intel microcode for these tests? There's been some noise recently about Intel prohibiting the publication of benchmarks run on this microcode version.

              PS: There are a lot of Michael accounts; is there a way to mention the site owner other than picking the last Michael in the pop-up list?
              Last edited by GrayShade; 23 August 2018, 07:42 AM.
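
              In case it helps with verifying that: a quick way to check which microcode revision is actually loaded on a Linux x86 box (the kernel reports it as a hex revision, so matching it to the 20180807 release means checking Intel's release notes):

              # Microcode revision as the kernel sees it, for the first CPU:
              grep -m1 microcode /proc/cpuinfo

              # Kernel log lines about early/late microcode loading:
              dmesg | grep -i microcode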
