The Peculiar State Of CPU Security Mitigation Performance On Intel Tiger Lake


  • #21
    Originally posted by curfew View Post
    There is nothing peculiar about this. You are benchmarking on an ultrabook that will engage in aggressive thermal throttling. Thermal (and power) throttling is the answer. Jesus christ, two pages of bullshit already and nobody has suggested the most obvious and correct reason.
    It's almost as if you believe Michael doesn't know how to control for variables like this - imagine, someone who has reviewed hardware for so many years not understanding how to benchmark something!
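    That said, throttling is easy to rule out rather than argue about: just log the effective clocks while the benchmark runs. A minimal sketch, assuming a Linux laptop that exposes the cpufreq sysfs interface:

    Code:
    # Sample CPU0's effective clock once per second during the benchmark run;
    # a sustained drop well below base clock would indicate thermal/power throttling.
    while sleep 1; do
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
    done

    If the clocks hold steady both with and without mitigations, throttling can't explain the delta.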



    • #22
      Michael, can you please do one with Zen 3? I've seen an article where they were faster with mitigations enabled (on Windows, though).

      For anyone wondering, this is what the kernel reports (with them disabled):

      Code:
      /sys/devices/system/cpu/vulnerabilities/itlb_multihit:Not affected
      /sys/devices/system/cpu/vulnerabilities/l1tf:Not affected
      /sys/devices/system/cpu/vulnerabilities/mds:Not affected
      /sys/devices/system/cpu/vulnerabilities/meltdown:Not affected
      /sys/devices/system/cpu/vulnerabilities/spec_store_bypass:Vulnerable
      /sys/devices/system/cpu/vulnerabilities/spectre_v1:Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
      /sys/devices/system/cpu/vulnerabilities/spectre_v2:Vulnerable, IBPB: disabled, STIBP: disabled
      /sys/devices/system/cpu/vulnerabilities/srbds:Not affected
      /sys/devices/system/cpu/vulnerabilities/tsx_async_abort:Not affected
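      For the record, that listing is just the sysfs vulnerability files printed one per line; assuming a reasonably recent kernel, anyone can reproduce it with a one-liner:

      Code:
      # Print every vulnerability file together with its status,
      # in the same path:status format as above.
      grep . /sys/devices/system/cpu/vulnerabilities/*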



      • #23
        Originally posted by Michael View Post

        Possible but seemingly unlikely. The "mitigations=off" should bypass all mitigations controllable by the kernel -- hardware optimized or not. All the relevant bits were correctly reported as "Vulnerable" via sysfs when the change was made.
        1) Some mitigations are built in via GCC compiler flags, and it's impossible to disable them at run time; you would have to recompile the kernel.
        2) Some mitigations live in CPU firmware (microcode) and are likewise unaffected by kernel parameters.
        3) Mitigations compiled into userspace software remain as well.

        So, to disable all the mitigations, you would at the very least have to recompile the kernel and userspace. And there is no pre-vulnerability firmware for TGL, so the microcode part cannot be avoided at all.
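        To illustrate point 1: retpolines, for example, come from a GCC code-generation flag applied at kernel build time, so at run time they can only be confirmed, never removed. A minimal sketch, assuming a distro that ships the kernel config under /boot:

        Code:
        # Check whether the running kernel was built with retpolines
        # (GCC's -mindirect-branch=thunk-extern); the config path varies by distro.
        grep CONFIG_RETPOLINE "/boot/config-$(uname -r)"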



        • #24
          Originally posted by boxie View Post

          So your hypothesis is: mitigations=on == more power/heat/higher clocks?
          Seems like a valid way around kernel CPU flaw mitigations!

          Michael, this should be an easy one to include in your benchmarks.
          You can turn that equation around a little:
          mitigations=on == more cycles spent in mitigations == lower performance.
          Or in other terms, your "performance per watt" goes down as the mitigations go up.
          The power usage stays the same.

          I think he did run benchmarks in the past that also included power usage. To me the current benchmark setup is clear.
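          If anyone wants to test the power side of that directly, here is a rough sketch using turbostat. It assumes an Intel CPU with working RAPL counters, and ./run-benchmark.sh is a hypothetical stand-in for the actual workload; run it once per boot, with and without mitigations=off:

          Code:
          # Report average package power over the duration of the workload;
          # compare the PkgWatt column between the two boot configurations.
          sudo turbostat --quiet --show PkgWatt ./run-benchmark.sh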



          • #25
            The only useful point of mitigations=off is to improve performance, so if this regression is confirmed, the kernel should not disable mitigations (or at least not the ones that regress) on such systems.

            If one still wanted to disable some mitigation (e.g. for debugging purposes), it could be forced with the mitigation-specific kernel options, as sketched below.
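            For reference, the kernel already exposes per-mitigation switches on the command line. A hedged sampling (see Documentation/admin-guide/kernel-parameters.txt for the authoritative list):

            Code:
            # Example GRUB setting that disables individual mitigations instead of
            # using the blanket mitigations=off; each flag targets one vulnerability.
            GRUB_CMDLINE_LINUX_DEFAULT="quiet spectre_v2=off spec_store_bypass_disable=off mds=off tsx_async_abort=off"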



            • #26
              Originally posted by Dr_ST View Post
              Waiting for the IT department to allow me to buy AMD in the data center instead of Intel. The once "eternal" Wintel king is dead.
              For an IT department, they sure sound slow. The superiority of EPYC has been documented for years now. We did a server refresh last year, all Dell R7415 EPYC servers. These single-socket EPYC servers replaced quad-socket Xeons (Dell R810). Comparing costs, going with AMD saved us over $100,000 per rack on hardware, and another $200,000 per rack on software licensing (the enterprise software is licensed per socket). It was really a no-brainer; I'm not sure why any IT department would still be choosing Intel in 2020.



              • #27
                Originally posted by Qaridarium View Post

                I will never buy Intel products, that's for sure, but this really looks amazing.

                This is proof that the Linux kernel should be much more strict in forcing all kinds of security mitigations.

                Because in the end, the companies who want to sell CPUs will deliver amazing performance even if we force every kind of security mitigation.
                Will the Linux Foundation be buying everyone new CPUs to go with it?



                • #28
                  To me, it looks like another shortcut buried somewhere, like the one that made Meltdown possible (it is still not known exactly how it got there).
                  It could be some specific mitigation-sequence 'optimization', or the CPU turning something off in firmware (microcode) when mitigations are detected.
                  Last edited by Alex/AT; 28 November 2020, 12:29 PM.



                  • #29
                    Originally posted by torsionbar28 View Post
                    For an IT department, they sure sound slow. The superiority of EPYC has been documented for years now. We did a server refresh last year, all Dell R7415 EPYC servers. These single-socket EPYC servers replaced quad-socket Xeons (Dell R810). Comparing costs, going with AMD saved us over $100,000 per rack on hardware, and another $200,000 per rack on software licensing (the enterprise software is licensed per socket). It was really a no-brainer; I'm not sure why any IT department would still be choosing Intel in 2020.
                    Most IT departments these days are very conservative. In your scenario, assuming you're telling the truth, your IT department needs someone who understands basic business, because they didn't save any money; they actually wasted money.

                    If your IT department had chosen to go with EPYC-based servers instead of quad-socket Xeons in the first place, then yes, the scenario you described would have saved your company money. But you described spending the cash on the quad-socket Xeons and software, and then adding to that by buying new EPYC-based servers and more software licenses. Your company didn't save any money; they just added to what they had already spent.



                    • #30
                      Originally posted by sophisticles View Post
                      Most IT departments these days are very conservative.
                      Who told you that? LOL! The COVID pandemic has driven enterprise IT spending through the roof; most IT departments in 2020 were spending money like crazy to support all the new telework and remote-access requirements. Tech stocks were literally the #1 sector for 2020, by a wide margin, due to all that IT spending. Do you not watch the stock market? You missed out big time!

                      Originally posted by sophisticles View Post
                      In your scenario, assuming you're telling the truth, your IT department needs someone who understands basic business, because they didn't save any money; they actually wasted money.

                      If your IT department had chosen to go with EPYC-based servers instead of quad-socket Xeons in the first place, then yes, the scenario you described would have saved your company money. But you described spending the cash on the quad-socket Xeons and software, and then adding to that by buying new EPYC-based servers and more software licenses. Your company didn't save any money; they just added to what they had already spent.
                      Uh, no, just no. You're overthinking this. Or maybe you don't understand how enterprise software licensing works? Or IT department tech-refresh cycles? Or enterprise hardware support agreements?
                      • We priced out new servers to replace the old ones, as the old ones (R810) were EOL: Dell R7415s with AMD EPYC processors, and R740s with Intel Xeon processors. For a similar level of performance, based on core count, frequency, and published benchmarks, a 42U rack full of R740s costs more than $100,000 *more* than a 42U rack full of R7415s, at least with the processors and options we selected. Ergo, we saved over $100,000 by selecting AMD EPYC-powered servers rather than similarly spec'd Intel Xeon servers.
                      • Since it sounds like you're unfamiliar with enterprise IT: most enterprise software is licensed annually, including the technical support contract. For many enterprise software suites, the licensing costs are per core or per socket. Our software is licensed per socket. An R7415 is a single-socket server; an R740 with Intel Xeons and the same core count and memory footprint requires two sockets. Ergo, the cost per annum of our enterprise software is literally cut in half by selecting AMD EPYC over Intel Xeon for our recent tech refresh.
                      Again, there is no "adding to what they already spent" when the old hardware has been fully depreciated financially and is EOL from the vendor. With enterprise hardware, the vendor support costs go up every year, to the point where supporting old equipment costs MORE each year than buying new equipment would. So enterprise IT throws out the old and buys new every few years to keep costs down. And the software is not purchased the way consumer software is; it is essentially "leased" by paying an annual subscription for licensing and technical support. FYI, enterprise IT works a whole lot differently than the consumer PC market.
                      Last edited by torsionbar28; 29 November 2020, 12:36 AM.

