New Spectre Variants Discovered By Exploiting Micro-op Caches


  • #31
    Originally posted by Duve View Post
    Spectre and Meltdown were always going to come back, given that neither Intel nor AMD (nor anyone else in the realm of performance computing) can engineer themselves out of the problem without taking a massive hit to performance. I think it will be some time before the compute industry at large has any answer to that family of bugs without a sacrifice on the altar of speed.
    By the look of it, I suspect it will take a new set of architectures to do that.


    There is something to be said about the lack of diversity within the computer industry in general, but that is neither here nor there on this matter.
    The solution is to let the user decide, at the OS level, whether to enable these performance features or their mitigations; a rough sketch of what that looks like on Linux follows.
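
    A minimal sketch (in C, my own illustration) of deciding at the OS level as Linux already allows it: mitigations can be toggled globally with boot parameters such as mitigations=off, and the kernel reports the per-vulnerability state under /sys/devices/system/cpu/vulnerabilities/. This hypothetical helper just prints that state so a user can see what they are trading off.

    /* Print the kernel's reported mitigation state for each known
       CPU vulnerability. Linux-specific; the sysfs directory below
       exists on any reasonably recent kernel. */
    #include <dirent.h>
    #include <stdio.h>

    int main(void) {
        const char *dir = "/sys/devices/system/cpu/vulnerabilities";
        DIR *d = opendir(dir);
        if (!d) { perror(dir); return 1; }

        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.') continue;       /* skip . and .. */
            char path[512], line[256];
            snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
            FILE *f = fopen(path, "r");
            if (f && fgets(line, sizeof line, f))
                printf("%-20s %s", e->d_name, line); /* e.g. "spectre_v2  Mitigation: ..." */
            if (f) fclose(f);
        }
        closedir(d);
        return 0;
    }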



    • #32
      Are ARM, M1, or RISC-V affected?



      • #33
        This will be a never-ending story until formal verification and/or algebraic construction is used throughout the design stages. (And even then, you could find holes in the invariants.)

        And I bet x86 will never reach that stage; emulating a crappy ISA and memory model makes things ugly and complex if you want performance.



        • #34
          This kind of thing is not fixable. Branch predictors are very important for performance, as are caches. There will always be things you can deduce from studying access latency. The best you can do is introduce some kind of variability and take some degree of performance hit from doing so.
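
          To make "deduce from access latency" concrete, here is a minimal x86-only sketch (my own illustration, not from the paper) of the timing primitive these attacks build on: a load that hits in cache takes tens of cycles, while a flushed one takes hundreds, so latency alone tells you whether a line was touched. The exact thresholds are machine-dependent assumptions.

          /* Time one load with the TSC, first with the line cached,
             then after flushing it. Requires GCC/Clang on x86. */
          #include <stdint.h>
          #include <stdio.h>
          #include <x86intrin.h>

          static uint64_t time_access(volatile uint8_t *p) {
              unsigned aux;
              uint64_t start = __rdtscp(&aux);   /* timestamp before */
              (void)*p;                          /* the probed load */
              uint64_t end = __rdtscp(&aux);     /* timestamp after */
              return end - start;
          }

          int main(void) {
              static uint8_t probe[64];

              probe[0] = 1;                      /* warm: line is now cached */
              uint64_t hot = time_access(&probe[0]);

              _mm_clflush(&probe[0]);            /* evict the line */
              _mm_mfence();                      /* order the flush */
              uint64_t cold = time_access(&probe[0]);

              printf("cached: %llu cycles, flushed: %llu cycles\n",
                     (unsigned long long)hot, (unsigned long long)cold);
              return 0;
          }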



          • #35
            Originally posted by cynical View Post
            This kind of thing is not fixable. Branch predictors are very important for performance, as are caches. There will always be things you can deduce from studying access latency. The best you can do is introduce some kind of variability and take some degree of performance hit from doing so.
            You can define boundaries where this state is flushed/ignored; within a process you can't reasonably guard anything. You only get unsolvable problems if your ISA has to guarantee side effects (like strong memory ordering).
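
            A minimal sketch of such a boundary as Linux already exposes it: a task can opt itself (and its children) out of a speculation feature via prctl(). The constants come from <sys/prctl.h>; kernel and headers >= 4.20 are assumed for the indirect-branch control.

            /* Disable indirect-branch speculation for this task only,
               then read the resulting state back. Linux-specific. */
            #include <stdio.h>
            #include <sys/prctl.h>

            int main(void) {
                if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
                          PR_SPEC_DISABLE, 0, 0) != 0)
                    perror("PR_SET_SPECULATION_CTRL");

                long state = prctl(PR_GET_SPECULATION_CTRL,
                                   PR_SPEC_INDIRECT_BRANCH, 0, 0, 0);
                printf("indirect-branch speculation state: 0x%lx\n", state);
                return 0;
            }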



            • #36
              Originally posted by discordian View Post
              This will be a never-ending story until formal verification and/or algebraic construction is used throughout the design stages. (And even then, you could find holes in the invariants.)

              And I bet x86 will never reach that stage; emulating a crappy ISA and memory model makes things ugly and complex if you want performance.
              This is assuming that the hardware implementation of your ideal design is itself perfect. And from what you write (and how you write it), it is highly probable that you already understand that this is not often the case in the real world...



              • #37
                Originally posted by ermo View Post

                This is assuming that the hardware implementation of your ideal design is itself perfect. And from what you write (and how you write it), it is highly probable that you already understand that this is not often the case in the real world...
                Algebraic construction could solve that, but that's a mathematical concept, and reality is only a crude approximation of beautiful math. While you aren't wrong, formal verification gets you there by definition. It's really time-consuming though, and mistakes, of course, can happen anywhere.
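
                For flavor, a toy instance (my own, in Lean, with hypothetical names) of what "correct by definition" means: the invariant below is machine-checked for every n, not merely tested on a few values. Real hardware proofs are vastly larger, which is where the time goes.

                -- A one-step "increment" transition and a checked invariant
                -- that it never decreases the state.
                def step (n : Nat) : Nat := n + 1

                theorem step_never_decreases (n : Nat) : n ≤ step n :=
                  Nat.le_succ n  -- step n unfolds definitionally to Nat.succ n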



                • #38
                  Just wanted to say this article was featured in the newest TechLinked video, at the 1:27 mark! Hopefully some viewers decide to check out Phoronix!



                  • #39
                    Originally posted by milkylainen View Post

                    Side channels in CPUs are not tied to any specific ISA. The ISA has almost zero relevance to this.
                    ARM CPUs of similar complexity are equally likely to suffer such side channels.

                    If you prefer, you can say that the Intel and AMD design teams suck. I'll buy that as an opinion. But not the ISA.
                    Exactly! Anyone saying that ARM is safer is dreaming. Qualcomm SoCs have had their fair share of vulnerabilities as well, including this very recent one that impacts 40% (!) of all smartphones: https://www.bleepingcomputer.com/news/security/qualcomm-vulnerability-impacts-nearly-40-percent-of-all-mobile-phones/



                    • #40
                      Originally posted by Vistaus View Post

                      Then you haven't been on tech forums lol. According to people on tech forums, AMD is the holy grail and they can do anything Intel can't and never make mistakes. So surely they will be able to fix Spectre and Meltdown without performance hits. I mean… right?
                      I am well aware of that "hype" and excitement. Frankly, I always saw it as a little dishonest when you look at AMD the company. They are very much aware that they dodged a bullet and that it would hit them sooner or later.

                      Originally posted by chithanh View Post
                      You don't need a new architecture. Intel already has an architecture that is largely immune to Spectre, namely IA64. The Itanium's EPIC (not to be confused with EPYC) model moves the complexity which led to Spectre from the CPU into the compiler instead.
                      Offloading the problem to the compiler doesn't make it go away.
                      And the last I heard of Itanium, it had just been taken off life support.

