The Performance Hit For A Xeon-Backed Ubuntu Linux VM With L1TF / Foreshadow Patches


  • #11
    Originally posted by RussianNeuroMancer View Post

    Also, please compare it with AMD (from pre-Spectre to a patched kernel with full L1TF mitigation).
    It may be horrible, given how high the estimated performance drop for Intel could go.

    Intel CPUs have 15 confirmed Spectre-like flaws.
    AMD CPUs have 4 confirmed, plus 4 more that are unconfirmed but not yet ruled out either.

    Let's assume a worst-case performance drop of 30% per flaw, on average.

    Then for Intel the final performance level is down to 0.7^15 ≈ 0.0047 times the original, a performance drop of up to 99.53%.

    That is roughly a 210x slowdown, or performance rolled back log(210)/log(2) ≈ 7.7 years, if we assume a devolved "performance version" of Moore's law in which performance doubles every year.

    For AMD:
    0.7^4 ≈ 0.24, so at most a 76% performance drop.

    I know that hitting the maximum performance drop for every bug at the same time is highly improbable, and impossible in real-world applications.
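
    For what it's worth, that compounding is easy to sanity-check from a shell with bc (the 30%-per-flaw drop is purely the assumption stated above):

        $ echo '0.7^15' | bc -l             # compounded factor: .004747561509943
        $ echo 'l(1/0.7^15)/l(2)' | bc -l   # ~7.72 "doubling years" lost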



    • #12
      Originally posted by dsmithhfx View Post
      And I would like to see instructions for disabling *ALL* the patches. I do not purchase CPUs so I can stick crippleware on them as some weird kind of money- and time-wasting amusement (and no, I am not running an internet-facing server).
      And don't forget: we already have mitigations in _all_ the devel stuff (gcc, LLVM, etc.), which we can't turn off.



      • #13
        Originally posted by nuetzel View Post
        We already have mitigations in _all_ the devel stuff (gcc, LLVM, etc.), which we can't turn off.
        Which mitigation cannot be turned off?

        I'm aware of folks who run their HPC cluster on Gentoo and turn off all the performance-degrading stuff that makes no sense for them.
        ASLR, PIC/PIE, stack-protector, you name it.



        • #14
          Originally posted by chithanh View Post
          Which mitigation cannot be turned off?

          I'm aware of folks who run their HPC cluster on Gentoo and turn off all the performance-degrading stuff that makes no sense for them.
          ASLR, PIC/PIE, stack-protector, you name it.
          Ah, so you're suggesting that Michael should compile all that stuff himself just to 'test' (benchmark) this?



          • #15
            Originally posted by nuetzel View Post

            Ah, so you're suggesting that Michael should compile all that stuff himself just to 'test' (benchmark) this?
            No, he was replying to your "that we can't turn off". So if you don't want to be affected by that particular performance penalty, you can compile it yourself and disable those flags.



            • #16
              Originally posted by F.Ultra View Post
              you can compile it yourself and disable those flags.
              Get real.



              • #17
                Originally posted by dsmithhfx View Post

                Get real.
                "./configure && make && sudo make install" is now to complicated for people who want to squeeze out maximum performance out of every package?



                • #18
                  Originally posted by nuetzel View Post
                      Originally posted by chithanh View Post
                          Originally posted by nuetzel View Post
                          And don't forget: we already have mitigations in _all_ the devel stuff (gcc, LLVM, etc.), which we can't turn off.
                      Which mitigation cannot be turned off?

                      I'm aware of folks who run their HPC cluster on Gentoo and turn off all the performance-degrading stuff that makes no sense for them.
                      ASLR, PIC/PIE, stack-protector, you name it.
                  Ah, so you're suggesting that Michael should compile all that stuff himself just to 'test' (benchmark) this?
                  No. I am saying that he can turn the performance-degrading security stuff off, which is contrary to what you claimed. He should of course not turn it off, because the default configuration is representative of what performance users are actually going to see.

                  In some cases it is a kernel boot parameter, or writing the desired value to /proc/sys/kernel/randomize_va_space or similar; in other cases it requires recompiling things. But let me know which particular mitigation cannot be turned off, if you still stand by your statement.
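
                  To make that concrete, a rough sketch of both variants (the parameter names are real upstream kernel options, but check your kernel's documentation; all of this deliberately weakens security):

                      # Boot-time: append to the kernel command line, e.g. in /etc/default/grub:
                      #   nopti nospectre_v2 spec_store_bypass_disable=off l1tf=off
                      # Run-time: disable ASLR system-wide (needs root):
                      $ echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
                      $ cat /proc/sys/kernel/randomize_va_space   # 0 = off, 2 = full ASLR (default)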



                  • #19
                    Am I reading this correctly?

                    "40 cores / 80 threads" on the host...
                    And "64 threads" assigned to the guest...
                    And with full mitigations, Hyper-Threading is disabled.

                    This may be a dumb question, but in the full-mitigation case (no Hyper-Threading), was that still assigning "64 threads" to what is then essentially a 40-core/40-thread server?
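
                    For anyone wanting to check the host side of this themselves, a quick sketch (the SMT sysfs knob was added alongside the L1TF patches):

                        $ lscpu | grep -E '^(Socket|Core|Thread|CPU\(s\))'   # host core/thread topology
                        $ cat /sys/devices/system/cpu/smt/active             # 1 = SMT/HT on, 0 = off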


