Opt-In L1 Cache Flushing To Try For Linux 5.15 To Help With The Paranoid, Future CPU Vulnerabilities


  • #11
    Originally posted by uid313

    But I think Rocket Lake, Comet Lake, and Alder Lake may draw more power than earlier generations. I don't know whether Alder Lake draws even more than Rocket Lake and Comet Lake.

    But for me, with a Haswell, if I buy a new CPU it will draw more power than my current one, so isn't it actually worse in some respects (even though it is faster)?

    I like to have a fast CPU, but I also like it to run cool, stay cool, and be easy to cool without massive and expensive heatsinks and loud fans. I also want it to use little energy and provide a low TCO.

    Maybe the TCO of a new CPU is higher if I have to pay a higher electricity bill every month.
    1. The PL2 power limit is a temporary boost budget meant to speed up workloads that finish quickly; once the boost window expires, Intel CPUs fall back to PL1, which roughly matches their rated TDP.
    2. Both limits can be configured in the BIOS - you can force an Intel CPU to stay within whatever wattage you like, even to never exceed its rated TDP (see the sketch after this list for reading the currently programmed limits under Linux).
    3. Currently Zen 3 is the most efficient x86_64 uarch in terms of MT performance and performance per watt. Alder Lake is highly unlikely to change that. I would not recommend the Ryzen 7 5800X - the 5600X, 5900X and 5950X are all much better in terms of thermals. Also, AMD is expected to release Zen 3+ this fall or winter - they promise up to a 15% performance uplift in certain games thanks to a much larger L3 cache (the only change vs Zen 3).
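    For what it's worth, you can read the PL1/PL2 values the firmware (or the OS) has actually programmed through the Linux powercap interface, without rebooting into the BIOS. A minimal sketch in C - it assumes the intel_rapl driver is loaded and that package 0 shows up as intel-rapl:0, which is the usual layout but not guaranteed on every board:

    /* rapl_limits.c - print the programmed package power limits (PL1/PL2).
     * Values in the powercap sysfs files are in microwatts.
     * Build: cc -O2 rapl_limits.c -o rapl_limits */
    #include <stdio.h>

    static long read_long(const char *path)
    {
        long v = -1;
        FILE *f = fopen(path, "r");
        if (f) {
            if (fscanf(f, "%ld", &v) != 1)
                v = -1;
            fclose(f);
        }
        return v;
    }

    int main(void)
    {
        const char *base = "/sys/class/powercap/intel-rapl:0";
        char path[256];

        /* On typical systems constraint_0 is the long-term limit (PL1)
         * and constraint_1 the short-term boost limit (PL2). */
        for (int c = 0; c < 2; c++) {
            snprintf(path, sizeof path,
                     "%s/constraint_%d_power_limit_uw", base, c);
            long uw = read_long(path);
            if (uw > 0)
                printf("PL%d: %.1f W\n", c + 1, uw / 1e6);
            else
                printf("PL%d: not readable (driver missing or no permission)\n", c + 1);
        }
        return 0;
    }

    The same directory also exposes constraint_0_time_window_us if you want to see how long the boost budget lasts.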



    • #12
      I think one reason you see Amazon (and others) pushing this is that they run a lot of VPS hosts. Probably all hosting providers are looking for this functionality (Google, Linode, Microsoft, etc.). Right now pretty much all cloud/VPS hosts are at high risk from CPU vulnerabilities: any random entity can spin up a VPS and then attack other VPSes on the same host, or even the host itself. It's something that causes me to lose sleep wondering about my own servers.

      The CPU vulnerabilities have nearly destroyed any benefit of virtual machines.
      Last edited by linner; 30 August 2021, 04:12 PM.



      • #13
        Originally posted by Azrael5
        Is it possible to realize CPUs without caches?
        Memory takes many clock cycles to fetch a line of data, whereas the CPU cache can respond almost instantly. For data currently being worked on, this makes the chip FAR faster. It's also theoretically possible that a tightly-coupled cache could serve multiple pieces of data at once to different parts of the CPU. Dunno if anyone does that yet.
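        You can see that gap for yourself by chasing pointers through a buffer that fits in L1 versus one much larger than the last-level cache. A rough sketch (the buffer sizes and iteration count are arbitrary, and the exact numbers will vary wildly by machine):

        /* pointer_chase.c - rough cache vs. DRAM latency demo.
         * Build: cc -O2 pointer_chase.c -o pointer_chase */
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        /* Walk a random single-cycle permutation of n slots, steps times,
         * and return nanoseconds per dependent load. */
        static double chase(size_t n, size_t steps)
        {
            size_t *next = malloc(n * sizeof *next);
            size_t *order = malloc(n * sizeof *order);
            for (size_t i = 0; i < n; i++)
                order[i] = i;
            /* Shuffle the visiting order so the prefetcher can't follow it. */
            for (size_t i = n - 1; i > 0; i--) {
                size_t j = (size_t)rand() % (i + 1);
                size_t t = order[i]; order[i] = order[j]; order[j] = t;
            }
            /* Link the slots into one big cycle in that shuffled order. */
            for (size_t i = 0; i < n; i++)
                next[order[i]] = order[(i + 1) % n];

            struct timespec t0, t1;
            size_t p = order[0];
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (size_t s = 0; s < steps; s++)
                p = next[p];              /* every load depends on the previous one */
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double ns = (double)(t1.tv_sec - t0.tv_sec) * 1e9
                      + (double)(t1.tv_nsec - t0.tv_nsec);
            if (p == (size_t)-1)          /* keep the loop from being optimized out */
                puts("unreachable");
            free(order);
            free(next);
            return ns / (double)steps;
        }

        int main(void)
        {
            size_t steps = 20 * 1000 * 1000;
            printf("~32 KiB working set (fits in L1):    %.2f ns/load\n",
                   chase(32 * 1024 / sizeof(size_t), steps));
            printf("~256 MiB working set (mostly DRAM):  %.2f ns/load\n",
                   chase(256 * 1024 * 1024 / sizeof(size_t), steps));
            return 0;
        }

        On typical hardware the second number comes out one to two orders of magnitude larger than the first, which is the whole reason caches exist.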

        Meanwhile, the caches are but one kind of side channel. I fully expect that others will be found. One of the issues with hyperthreading is that the two sibling threads can interfere with each other by competing for hardware resources. While you could use this to tell what the other thread is doing, it's also possible for them to cooperate to leak data from a speculative thread on one hyperthread to a regular thread on the other.

        In theory, if we could stamp out all of the possible side channels that speculative execution could use to leak information out of the speculative world, then we could do away with speculation barriers and memory fences and freely speculate as much as we want. Perhaps someday that will be the only way to get more performance (parallelizing the unparallelizable).
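        For anyone wondering what those barriers look like in practice: the classic Spectre v1 pattern is a bounds check the CPU speculates past, and the usual fix is a serializing fence between the check and the dependent loads. A tiny illustrative sketch - the array names and sizes are made up for the example:

        /* spectre_v1_fence.c - illustration of a speculation barrier.
         * The LFENCE keeps the CPU from executing the loads below it
         * until the bounds-check branch has actually resolved. */
        #include <stddef.h>
        #include <stdint.h>

        #define TABLE_SIZE 16
        static uint8_t table[TABLE_SIZE];
        static uint8_t probe[256 * 64];    /* classic cache side-channel target */

        uint8_t read_checked(size_t untrusted_index)
        {
            if (untrusted_index < TABLE_SIZE) {
        #if defined(__x86_64__) || defined(__i386__)
                /* Without this fence a mispredicted branch can still run the
                 * out-of-bounds load speculatively and leave a cache footprint. */
                __asm__ volatile("lfence" ::: "memory");
        #endif
                return probe[table[untrusted_index] * 64];
            }
            return 0;
        }

        int main(void)
        {
            return read_checked(3);    /* well-bounded index, just to exercise the path */
        }

        Every such fence throws away work the CPU could have done speculatively (the kernel's array_index_nospec() helper avoids the fence by clamping the index instead), which is exactly the cost being described here.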

        However, I am VERY doubtful that we'll ever get all possible means of exchanging information. I'm not even sure the problem is logically tractable. It's possible the mere existence of a speedup due to the use of speculation itself can inherently be used to leak data.



        • #14
          Originally posted by jayN
          I see that disabling hyperthreading is recommended as a fix for at least some of the related attacks. Does that mean that running on the Alder Lake efficient cores (Gracemont) would allow bypassing L1 data cache flushes?
          These two mitigations are for different vulnerabilities, which work through different mechanisms. There are many, many variants of Spectre, exploiting many different oversights in the design of modern CPUs. It's real Swiss cheese.
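          For context, the opt-in this article covers is exposed to userspace as a per-task prctl() in the 5.15 patches, and the administrator also has to switch the machinery on at boot (via an l1d_flush= command-line option, if I'm reading the series right). A minimal sketch of how a sensitive process would ask for it - the fallback defines are only there in case your glibc headers predate the feature:

          /* l1d_flush_optin.c - ask the kernel to flush the L1 data cache
           * whenever this task is switched out, so whatever runs next on the
           * core can't probe our leftovers. */
          #include <stdio.h>
          #include <errno.h>
          #include <string.h>
          #include <sys/prctl.h>

          #ifndef PR_SET_SPECULATION_CTRL
          #define PR_SET_SPECULATION_CTRL 53
          #define PR_GET_SPECULATION_CTRL 52
          #endif
          #ifndef PR_SPEC_ENABLE
          #define PR_SPEC_ENABLE (1UL << 1)
          #endif
          #ifndef PR_SPEC_L1D_FLUSH
          #define PR_SPEC_L1D_FLUSH 2       /* from the 5.15 patch series */
          #endif

          int main(void)
          {
              if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_L1D_FLUSH,
                        PR_SPEC_ENABLE, 0, 0) != 0) {
                  /* Fails if the kernel lacks the feature or it wasn't
                   * enabled on the command line. */
                  fprintf(stderr, "L1D flush opt-in unavailable: %s\n",
                          strerror(errno));
                  return 1;
              }
              printf("L1D flush on context switch: state 0x%lx\n",
                     (unsigned long)prctl(PR_GET_SPECULATION_CTRL,
                                          PR_SPEC_L1D_FLUSH, 0, 0, 0));
              return 0;
          }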



          • #15
            Originally posted by linner
            I think one reason you see Amazon (and others) pushing this is that they run a lot of VPS hosts. Probably all hosting providers are looking for this functionality (Google, Linode, Microsoft, etc.). Right now pretty much all cloud/VPS hosts are at high risk from CPU vulnerabilities: any random entity can spin up a VPS and then attack other VPSes on the same host, or even the host itself. It's something that causes me to lose sleep wondering about my own servers.

            The CPU vulnerabilities have nearly destroyed any benefit of virtual machines.
            I agree with you here. There are many massive financial systems running on public clouds, and while that's still better than having the server under the sysadmin's desk, there's a lot left to be desired from AWS, GCP, and Azure. It's about probability and hoping you're not the one who gets attacked, rather than using hardware that leaks less.

            The pull request and this article seem to throw the word paranoia around a lot, and I don't agree with using that term. There are scientific papers published on this topic; it's not so much about emotion or fear as about proven low-level hardware problems. Linus complained about the implementation, calling it stupid and pseudo-security, but he did not say the concept was flawed or that only paranoid people would want to use this config option. My question is: why are the cloud providers the ones responsible for improving this situation?



            • #16
              Originally posted by Azrael5
              Is it possible to realize CPUs without caches?
              Yup, and a 5 GHz CPU without cache is just as fast as a 50 MHz CPU without cache. You can have a safe CPU without branch-prediction exploits and still have a cache, but precision is computationally expensive, and either people don't know how to design precise CPUs anymore or most are willing to be slightly imprecise for more speed. I've heard x86 has a bug with imprecise polynomial maths dating back to the '90s that modern computers can't get right, but your wetware computer could with a pencil and paper. If it can't do basic maths right, how can you trust it to do advanced maths?

              I think it's possible to fix most of the cache misfires, but CPU performance would be cut in half or even down to 25% of normal speed.

              But it's stuff like this that makes me want to study more computer engineering. I've looked at the fastest cacheless gaming platform, and that's the Sega Saturn; most games run at half the Saturn's potential because it has two CPUs, and those CPUs run at about half the maximum clock rate of a cacheless system.

              If you want a cacheless system, design something inspired by the Sega Saturn and maybe add more cores and VDPs.



              • #17
                Originally posted by avem

                1. The PL2 power limit is a temporary boost budget meant to speed up workloads that finish quickly; once the boost window expires, Intel CPUs fall back to PL1, which roughly matches their rated TDP.
                2. Both limits can be configured in the BIOS - you can force an Intel CPU to stay within whatever wattage you like, even to never exceed its rated TDP.
                3. Currently Zen 3 is the most efficient x86_64 uarch in terms of MT performance and performance per watt. Alder Lake is highly unlikely to change that. I would not recommend the Ryzen 7 5800X - the 5600X, 5900X and 5950X are all much better in terms of thermals. Also, AMD is expected to release Zen 3+ this fall or winter - they promise up to a 15% performance uplift in certain games thanks to a much larger L3 cache (the only change vs Zen 3).
                If your interest in power consumption is motivated by reducing your electric bill, keep in mind Ryzen's idle power is kind of bad. The perf/W will win out over Intel if you're running at full tilt all the time, of course.



                • #18
                  Originally posted by yump

                  If your interest in power consumption is motivated by reducing your electric bill, keep in mind Ryzen's idle power is kind of bad. The perf/W will win out over Intel if you're running at full tilt all the time, of course.
                  Yeah, that's totally true and sad, unfortunately. My Ryzen 7 5800X idles at a staggering ~20 W, and I'm not even overclocking it.

