Benchmarking The Linux Mitigated Performance For Retbleed: It's Painful


  • #31
    It's painfully obvious there is definite room for improvement here.



    • #32
      Originally posted by Linuxxx View Post
      Still no definitive answer whether Phantom JMPs are happening on Zen 3 or not.

      bridgman

      I know you are part of the Radeon group @ AMD, but still, could you forward a public statement that answers this question?

      I'm asking because there are people who are planning to upgrade to Zen 3 to escape the RETbleed performance hit, but Phantom JMPs do look like AMD's very own Meltdown-like security disaster, so if Zen 3 has those too, then upgrading to it doesn't seem all that reasonable.
      Please read the attached white paper on the following AMD web site:

      https://amd.com/en/corporate/product...in/amd-sb-1037

      It is clearly explained that there are two issues with matching CVEs: Retbleed and Branch Type Confusion (also known as Phantom JMPs). The white paper explains that Zen 3 is not affected by Branch Type Confusion and that no mitigation is required. In addition, we also know from this site that Zen 3 is not affected by Retbleed either. In light of this, there is no performance penalty and no mitigations of any kind are required on Zen 3 for these issues.
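
      If you want to cross-check that against what your own kernel reports, here is a minimal C sketch; it assumes a kernel new enough to expose the retbleed entry under /sys/devices/system/cpu/vulnerabilities/ and simply prints it, which on an unaffected Zen 3 part should read "Not affected":

      Code:
      #include <stdio.h>

      /* Minimal sketch: print the kernel's own Retbleed assessment.
       * Assumes a kernel new enough to expose the "retbleed" entry under
       * /sys/devices/system/cpu/vulnerabilities/. */
      int main(void)
      {
          const char *path = "/sys/devices/system/cpu/vulnerabilities/retbleed";
          char line[256];
          FILE *f = fopen(path, "r");

          if (!f) {
              perror(path);   /* older kernels may not have this entry */
              return 1;
          }
          if (fgets(line, sizeof(line), f))
              printf("retbleed: %s", line);   /* e.g. "Not affected" on Zen 3 */
          fclose(f);
          return 0;
      }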

      I myself will be moving from a 3950X to a 5950X B2 stepping, or maybe a 5950X3D if one gets announced, as the latest leaks raise that possibility.



      • #33
        I wonder if this first round of mitigation patches is a bit too heavy-handed. In the past there have been tweaks to some Spectre patches that alleviated part of the performance impact. Maybe we can hope for something similar here? The performance effect is very severe; it looks about as bad as Meltdown.

        Does anyone have any idea how far back this vulnerability goes? Are, say, HSW and SNB chips affected too?



        • #34
          What is the deal with all these CPU vulnerabilities? This never used to happen. These CPUs are gonna be running like a 486 if we continue with this crap. I swear man, chip makers need to fix their junk.



          • #35
            Originally posted by wooque View Post
            I'm sticking with mitigations=off as always
            Which doesn't really help if you applied a recent BIOS update, AFAIK; then the mitigations are applied anyway. Or did I get that wrong?



            • #36
              Originally posted by Mike Frett View Post
              What is the deal with all these CPU vulnerabilities? This never used to happen. These CPUs are gonna be running like a 486 if we continue with this crap. I swear man, chip makers need to fix their junk.
              This is exactly why those CPUs do not run like a 486: things like speculative execution accelerate computing, at the cost of potential security tradeoffs. CPUs hit the wall in terms of clock speeds a while ago, and CPU makers have had to invent new ways to be more performant. I worry that many of those modern optimizations will come with a similar host of issues.
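
              As an illustration of that tradeoff, below is a minimal C sketch of the classic Spectre v1 "bounds check bypass" gadget shape; the arrays here are hypothetical and it is not an exploit, it just shows the kind of code a CPU may run speculatively past an architecturally correct bounds check:

              Code:
              #include <stdint.h>
              #include <stddef.h>

              /* Hypothetical data, purely for illustration. */
              static uint8_t array1[16];
              static size_t  array1_size = 16;
              static uint8_t array2[256 * 512];

              /* Classic Spectre v1 gadget shape: the bounds check is correct as far
               * as the architecture is concerned, but the CPU may speculatively
               * execute the body with an out-of-range x, leaving a cache footprint
               * that a side channel could later observe. */
              uint8_t victim_function(size_t x)
              {
                  if (x < array1_size)
                      return array2[array1[x] * 512];
                  return 0;
              }

              int main(void)
              {
                  return victim_function(0);   /* harmless in-bounds call */
              }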

              Secondly, multicore CPUs are a whole other bundle of potential problems.

              Thirdly, this did use to happen; it's just that nobody seriously considered looking there before. We have seen some CPU exploits at DEF CON from Christopher Domas, although he mostly exploited legacy flaws of the x86 instruction set. We had CPU bugs before that required CPU recalls; it's just that they usually did not have security implications.

              Also worth noting is that many of these vulnerabilities affect both Intel and AMD. It seems like there might be flaws somewhere in the basic CPU design methodology (or testing) that nobody has yet accounted for.


              So I think it's just a case of the increasing complexity of CPUs: more potential for things to go wrong.



              • #37
                Originally posted by wooque View Post
                I'm sticking with mitigations=off as always
                The horrible part is that this is not a good idea unless you are really sure of the software you are running. There are defects that will happen with mitigations off that would not happen otherwise. The mitigations are usually discussed in terms of security defects, but that is not the only thing the mitigations are for.



                • #38
                  They put all these holes into the CPUs intentionally to make your CPU slower and slower with every software update, to force you to buy a new CPU every 5 years. Without that, people would continue to use their old CPU for 10 or more years.



                  • #39
                    Originally posted by Mike Frett View Post
                    What is the deal with all these CPU vulnerabilities? This never used to happen. These CPUs are gonna be running like a 486 if we continue with this crap. I swear man, chip makers need to fix their junk.
                    It is kind of a myth that this never used to happen.
                    https://www.cve.org/About/History

                    We started tracking CVE issues in 1999, and the first Intel CPU security issue recorded by CVE is from 1999.

                    The first thing to be aware of: from the Spectre fault being found in 2017 up to now, we have seen a stack of speculative execution bugs. And what is the earliest CPU that the Spectre fault affects? The Pentium II, from 1997, which is before we in fact started keeping formal records of security faults.

                    So for two decades, without anyone noticing, Intel, AMD and others have basically been building on an unstable foundation. That explains why, once the unstable foundation was found, there was a stack of fallout. You could really call everything from Spectre to Retbleed one single speculative execution design flaw; because of how much has been built on top of that flaw, it exposes itself in many different ways.

                    CPU faults have always been around. Most of the time they have been minor things, like the CPU just stopping working. The 286 even had a workaround in the motherboard to force-reset the CPU when particular instructions locked it up.

                    We are just going through a really bad patch at the moment. Twenty years of development on top of a defect, adding features on top of that defect, equals one huge mess with a massive stack of corner cases waiting to catch you out.

                    We also have known bugs from the original Pentium (what some people call Pentium 1) systems, like the F00F bug.

                    Mike Frett, most people are not aware why it was important to get an i386 with the double sigma (ΣΣ) marking on the chip if you wanted to run Linux on it back in the days of i386 systems. https://en.wikipedia.org/wiki/I386 Without the double sigma, the i386 could not do 32-bit integer multiplication correctly, and that could cause all kinds of problems.

                    The 8086, 8088, 80186, i486SX and i486DX are chips we don't have records of faults for. But for every other x86 chip there is some record of some form of defect that will get you if you are not aware of it. So CPU faults are horribly common in history.

                    Remember, speculative execution faults have not been restricted to x86 CPUs either. There is an old research paper from 1993 in which the author writes up how to do speculative execution, and that is the starting point of all of these current-day speculative execution faults. The author made a mistake, and nobody referring to that document noticed it for over two decades. That also explains why the same kinds of faults appear across many different CPU instruction sets.

                    We have noticed the problem now, but we have to deal with two decades of already-sold silicon, a lot of which is still in production use. Doing a recall and replacement is not really a functional option either.

                    Let's just hope there is not another core design bug hidden somewhere and this was just a once-in-a-lifetime major screw-up in CPU design. A lot of people have not grasped that this speculative execution issue starts in 1997.

                    Would I be surprised if a few more issues with speculative execution turn up? No, I would not. Considering it took CPU vendors 20+ years (1993-2017) to make this mess, if we get to the end of it inside 5-6 years (2017-2022/23) we will be doing well. Remember, this basically means reviewing 20 years of development looking for how it messed up, and also reviewing whether the workarounds to the fault on existing silicon are in fact right. Retbleed is a case where the retpoline workaround was not in fact quite right, so it is not a new silicon fault, just a failure to understand the silicon fault completely when making the workaround.



                    • #40
                      Originally posted by Desti View Post
                      They put all these holes into the CPUs intentionally to make your CPU slower and slower with every software update, to force you to buy a new CPU every 5 years. Without that, people would continue to use their old CPU for 10 or more years.
                      That logic does not hold up, particularly when speculative execution faults go back to a 1993 paper written by a team of people. The fault was in CPUs for 20+ years and nobody noticed.

                      A side effect of working around faults is higher CPU overhead, so slower performance. I commonly use CPUs for 10 years at a time, and even that counts as limited usage.

                      Desti, civil infrastructure is scary: they still have Pentium 2, 386 and 486 systems in production usage. In 2015 there was even the wacky case of a real Intel i386SX soldered-on CPU being used with a modern FPGA that replaces the old i386 chipset, and at that time it was still being built with new stock i386SX chips, even though production ended in 2007.

                      Civil infrastructure means something like 20-60 years of in-place usage. They also do parts supply contracts with companies like AMD and Intel for at least 15 years, and breaking functionality under such a contract is not a good idea. The recent screw-up with speculative execution has not left civil infrastructure users happy; some have demanded protective interface systems for free.

                      Like it or not, the speculative execution faults are a stuff-up in the form of human error, not something intentional. In fact, if it were intentional, you would have expected AMD and Intel to do something like the double sigma (ΣΣ) branding on the i386, so that civil infrastructure customers could be told these systems are rated for their use case, avoiding civil infrastructure users being upset while holding supply contracts for a functional and secure product over 10 to 15 years. And if there is some special reason, explainable under NDA to civil infrastructure users, why X version of a product must be used, that also allows AMD and Intel to charge more for those chips.

                      Desti, could you be right in the future if we don't watch it? Yes, you could be. But the current events have been nothing more than a very big and costly mistake for AMD, Intel and the other CPU vendors. All of the CPU vendors hold civil infrastructure supply contracts, and all of them are now having civil infrastructure parties ask for more proof that their products are sound before signing newer contracts. So hopefully this is a one-off. The higher quality control requirements could have the horrible effect of lifting CPU prices over time more than projected.

