Disabling Spectre V2 Mitigations Is What Can Impair AMD Ryzen 7000 Series Performance


  • #21
    Guess I am a Luddite and proud of it. Security isn't the be-all and end-all; it has to be approached from the use case and balanced. I am not part of the NSA community, for example, nor do I have anything 'really' important to hide (if I did, I'd encrypt just the important data anyway). As a home user, I don't need all the mitigations. I'd rather just let the machine be as fast as it can be, although truth be told, I'm never on the bleeding edge of needing more performance! I don't enable SELinux (I would in a business environment), nor do I encrypt my ext4 disks. Yes, common-sense things like good passwords and firewalls are in use, but there is nothing here that requires over-the-top security other than protecting passwords and shutting down ports into my systems. The sky isn't falling, even though some tin-foil paranoid types think so. Oh, and I've been programming for a living since the 80s. Also been a network admin and worn other hats, so I'm not 'new' to the game.
    Last edited by rclark; 04 October 2022, 02:34 PM.



    • #22
      Originally posted by cj.wijtmans View Post
      keep in mind that CPUs by law must have backdoors and vulnerabilities. Same with encryption, or in case of court cases/investigations you must give out your password.
      Not in all countries, and CPU backdoors may answer only to the country of manufacture. We don't have key disclosure laws in the US, and they don't work when the "crime" for which prosecutors cannot get a conviction without unlocking the disk carries a greater penalty than refusing to disclose a key. An obvious example would be UK prosecutors encountering an encrypted machine they cannot crack (due to no prior access and capturing it turned off) related to a bombing by the Real IRA or Continuity IRA. The bombing is a bigger deal than defying key disclosure, so who in the group in question, and in their right mind, would comply?

      This has already come up in the UK, when someone in SHAC (Stop Huntingdon Animal Cruelty, a militant anti-vivisection group) publicly defied a key disclosure order and got away with it thanks to the politics of the case. Prosecutors in the US have lost cases due to encryption, which means any CPU backdoors put in by the NSA et al. must get very limited use, have to be absolutely deniable, and thus are not usable for the purpose of prosecutions in open court. They are more for monitoring Putin's box to see if he's really planning to throw nukes at Ukraine, if you are an ethical spy, or monitoring Global South activists against mining, oil, and gas (to counter, not to prosecute) if you are an unethical one.
      Last edited by Luke; 04 October 2022, 07:09 PM. Reason: clarity



      • #23
        Originally posted by Luke View Post

        Not in all countries, and CPU backdoors may answer only to the country of manufacture. We don't have key disclosure laws in the US, and they don't work when the "crime" for which prosecutors cannot get a conviction without unlocking the disk carries a greater penalty than refusing to disclose a key. An obvious example would be UK prosecutors encountering an encrypted machine they cannot crack (due to no prior access and capturing it turned off) related to a bombing by the Real IRA or Continuity IRA. The bombing is a bigger deal than defying key disclosure, so who in the group in question, and in their right mind, would comply?

        This has already come up in the UK, when someone in SHAC (Stop Huntingdon Animal Cruelty, a militant anti-vivisection group) publicly defied a key disclosure order and got away with it thanks to the politics of the case. Prosecutors in the US have lost cases due to encryption, which means any CPU backdoors put in by the NSA et al. must get very limited use, have to be absolutely deniable, and thus are not usable for the purpose of prosecutions in open court. They are more for monitoring Putin's box to see if he's really planning to throw nukes at Ukraine, if you are an ethical spy, or monitoring Global South activists against mining, oil, and gas (to counter, not to prosecute) if you are an unethical one.
        And mind you, when they do this to target Putin they are not using a run-of-the-mill CPU; they will capture the server in flight and replace parts to put in their own custom version with backdoors. We know this since the Snowden leaks, so the OP's ideas here are completely bonkers on many levels.



        • #24
          Originally posted by cj.wijtmans View Post
          keep in mind that CPUs by law must have backdoors and vulnerabilities. Same with encryption, or in case of court cases/investigations you must give out your password.
          Which law and which country require backdoors? Citation needed. Key disclosure laws exist only in a few countries, and even there they are not out of the ordinary: you also have to hand over physical keys in those jurisdictions, since what the judge orders, the judge gets.



          • #25
            Originally posted by F.Ultra View Post

            And mind you, when they do this to target Putin they are not using a run-of-the-mill CPU; they will capture the server in flight and replace parts to put in their own custom version with backdoors. We know this since the Snowden leaks, so the OP's ideas here are completely bonkers on many levels.
            It takes lots of good old-fashioned spy work to first get Putin to order a machine rather than have someone buy one off a shelf randomly with cash, then get whatever fake name it is ordered under, intercept the shipment (possibly from a warehouse already in Russia), and get the custom CPU, custom-flashed firmware, or whatever they use installed. Kudos to anyone who manages to pull all of that off. The best way in might not be Putin's box at all, but rather one he talks to in the Russian Embassy in an easier country to access.

            Supposedly the Chinese Embassy in either Washington or a pro-US country elsewhere was careless enough to order computers for delivery; naturally they got some "customizations" during a pit stop on the way. That does not work against an adversary buying off the shelf with cash. While Putin must have a computer SOMEWHERE, word is he has a manual typewriter in his main office.



            • #26
              Ah, ok. This makes perfect sense.

              For the people commenting here who don't know how spectre v2 works: the branch predictor is trained to predict which way code will go, so the CPU can work ahead down that path. Unfortunately, it can be mis-trained to predict a jump to a desired part of the kernel or another application, running code there which leaks data. The mitigation is to wipe all the accumulated training from the predictor every time you switch contexts. You can't really fix this in hardware 100%: the CPU needs to be *told* when you're switching from one context to another.
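              To make that concrete, here is a toy sketch of the mis-train-then-flush story. Everything in it is invented for illustration; a real branch target buffer is indexed by hashed address and history bits and shared in far subtler ways than a plain lookup table.

```python
# Toy model of Spectre v2 mis-training and the flush-on-context-switch
# mitigation. Purely illustrative: addresses and structure are made up.

class ToyBTB:
    """Branch target buffer shared between contexts (the core problem)."""
    def __init__(self):
        self.predictions = {}  # branch address -> predicted target

    def train(self, branch_addr, actual_target):
        # Hardware learns where this branch went last time.
        self.predictions[branch_addr] = actual_target

    def predict(self, branch_addr):
        # A miss means no speculation down a learned path.
        return self.predictions.get(branch_addr)

    def flush(self):
        # The IBPB-style mitigation: drop all training at a context switch.
        self.predictions.clear()

btb = ToyBTB()

# Attacker context: train the shared predictor so the indirect branch at
# 0x400 appears to jump to a chosen "gadget" address.
btb.train(0x400, 0xDEADBEEF)

# Without a barrier, the victim context inherits the poisoned entry and
# speculates into the gadget.
assert btb.predict(0x400) == 0xDEADBEEF

# With the mitigation, the kernel flushes at the context switch, so the
# victim starts from an empty predictor.
btb.flush()
assert btb.predict(0x400) is None
```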

              It's clear what the AMD engineers have done: they're using these "erasures" to their advantage. Every time one is issued it's also a signal to the CPU that you've switched between applications, and so any predictions made with current training are probably wrong. Taking this into account, they've streamlined the process of dumping and retraining. If I were them, I'd even have implemented a cache, so that when I'm told to "wipe" the predictions, I actually *store* them along with probably a tag indicating where they occurred in the address space. Then when I'm told to wipe the predictions again I can look through the cache and try to find a set of trainings that match the current context I've just switched into.

              Suddenly you don't have to re-train the CPU's predictor every time you jump from one process to another. The OS is conveniently telling the CPU that it needs to save and restore the predictor state, so you come back to each app with the saved predictor state ready for action. It's merely a convenient side-effect that this also mitigates the vulnerability. This kinda turns the IBPB (Indirect Branch Prediction Barrier) instruction into a sort of IBSR (Indirect Branch Save and Restore, not to be confused with Indirect Branch Restricted Speculation).
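              A minimal sketch of that hypothetical save-and-restore scheme, with the context tags and the "barrier" hook invented for illustration; nothing here reflects AMD's actual microarchitecture.

```python
# Sketch of the "save and restore instead of wipe" idea: the barrier the
# OS already issues at a context switch doubles as a save/restore point.

class SaveRestorePredictor:
    def __init__(self):
        self.live = {}       # current context's branch training
        self.saved = {}      # context tag -> stashed training
        self.context = None  # tag of the currently running context

    def train(self, addr, target):
        self.live[addr] = target

    def predict(self, addr):
        return self.live.get(addr)

    def barrier(self, next_context):
        """What the post calls "IBSR": on the barrier, stash the old
        context's training under its tag and restore any stash that
        matches the context being switched in."""
        if self.context is not None:
            self.saved[self.context] = self.live
        self.live = dict(self.saved.get(next_context, {}))
        self.context = next_context

p = SaveRestorePredictor()
p.barrier("app_a")
p.train(0x100, 0x200)

p.barrier("app_b")   # app_a's training is hidden from app_b, not lost...
assert p.predict(0x100) is None

p.barrier("app_a")   # ...and comes back warm when app_a runs again.
assert p.predict(0x100) == 0x200
```

The isolation property is the same as a full wipe (each context only ever sees its own entries), but a context returns to a warm predictor instead of retraining from scratch.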

              I suppose you could potentially run into vulnerabilities with cache collisions, or with the OS moving one app into the space of another app, but the probability is probably(?) small. I suppose they could just add an extra knob in the microcode that allows the OS to say "hey, I'm moving this app, re-tag your training" or "hey, I'm terminating this app, forget your training."

              I suppose the only way to make this better would be to allow the microcode to write to your hard drive. :P Then it could save the predictor state there and persist it across reboots. It could even be provided by the compiler or from profiling tools so your CPU would never have to learn it itself. Probably not worth the absolutely minuscule increase in performance though. In fact just reading the filesystem might take longer than the initial training.
              Last edited by Developer12; 04 October 2022, 10:11 PM.



              • #27
                Originally posted by Luke View Post

                It takes lots of good old-fashioned spy work to first get Putin to order a machine rather than have someone buy one off a shelf randomly with cash, then get whatever fake name it is ordered under, intercept the shipment (possibly from a warehouse already in Russia), and get the custom CPU, custom-flashed firmware, or whatever they use installed. Kudos to anyone who manages to pull all of that off. The best way in might not be Putin's box at all, but rather one he talks to in the Russian Embassy in an easier country to access.

                Supposedly the Chinese Embassy in either Washington or a pro-US country elsewhere was careless enough to order computers for delivery; naturally they got some "customizations" during a pit stop on the way. That does not work against an adversary buying off the shelf with cash. While Putin must have a computer SOMEWHERE, word is he has a manual typewriter in his main office.
                Putin, or rather the Kremlin, does not buy stuff under a false name; no big government does. Also, no one is dumb enough to target Putin directly for something like this; you target a department, and those buy machines in the hundreds or thousands. Yes, it requires spy work, but that is what the Yanks have the NSA and CIA for. You don't mandate that Intel and AMD put backdoors into their CPUs in the West to target Putin when he and the Kremlin simply run sensitive stuff on Baikal (or other Russian-developed platforms) instead.

                The Intel/AMD route to targeting him is closed, so it doesn't matter how much spying you have to do; if you want the access, you have to do it.



                • #28
                  Originally posted by Developer12 View Post
                  Ah, ok. This makes perfect sense.

                  For the people commenting here who don't know how spectre v2 works: the branch predictor is trained to predict which way code will go, so the CPU can work ahead down that path. Unfortunately, it can be mis-trained to predict a jump to a desired part of the kernel or another application, running code there which leaks data. The mitigation is to wipe all the accumulated training from the predictor every time you switch contexts. You can't really fix this in hardware 100%: the CPU needs to be *told* when you're switching from one context to another.

                  It's clear what the AMD engineers have done: they're using these "erasures" to their advantage. Every time one is issued it's also a signal to the CPU that you've switched between applications, and so any predictions made with current training are probably wrong. Taking this into account, they've streamlined the process of dumping and retraining. If I were them, I'd even have implemented a cache, so that when I'm told to "wipe" the predictions, I actually *store* them along with probably a tag indicating where they occurred in the address space. Then when I'm told to wipe the predictions again I can look through the cache and try to find a set of trainings that match the current context I've just switched into.

                  Suddenly you don't have to re-train the CPU's predictor every time you jump from one process to another. The OS is conveniently telling the CPU that it needs to save and restore the predictor state, so you come back to each app with the saved predictor state ready for action. It's merely a convenient side-effect that this also mitigates the vulnerability. This kinda turns the IBPB (Indirect Branch Prediction Barrier) instruction into a sort of IBSR (Indirect Branch Save and Restore, not to be confused with Indirect Branch Restricted Speculation).

                  I suppose you could potentially run into vulnerabilities with cache collisions, or with the OS moving one app into the space of another app, but the probability is probably(?) small. I suppose they could just add an extra knob in the microcode that allows the OS to say "hey, I'm moving this app, re-tag your training" or "hey, I'm terminating this app, forget your training."

                  I suppose the only way to make this better would be to allow the microcode to write to your hard drive. :P Then it could save the predictor state there and persist it across reboots. It could even be provided by the compiler or from profiling tools so your CPU would never have to learn it itself. Probably not worth the absolutely minuscule increase in performance though. In fact just reading the filesystem might take longer than the initial training.
                  The problem with this theory is that retpolines and IBRS are not used when switching between applications; they are used when you make indirect calls. Thousands if not millions of those can and will be made within the same application context/thread.
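                  A toy illustration of that asymmetry. The numbers are made up; the point is only that per-indirect-branch mitigations (retpoline/IBRS) pay their cost on every indirect call, while an IBPB-style barrier is paid once per context switch.

```python
# Count indirect calls within a single scheduler quantum of work and
# compare against the single barrier issued at the end of the quantum.
# The dispatch table, operations, and counts are invented for this sketch.

import random

handlers = {                    # dispatch table: each lookup-and-call
    "read":  lambda n: n + 1,   # models one indirect branch
    "write": lambda n: n - 1,
    "seek":  lambda n: n * 2,
}

indirect_calls = 0
value = 0
for _ in range(100_000):            # one quantum of application work
    op = random.choice(list(handlers))
    value = handlers[op](value)     # indirect call through the table
    indirect_calls += 1

context_switches = 1                # the quantum ends in a single IBPB

# Per-indirect-branch work runs five orders of magnitude more often
# than the per-context-switch barrier in this toy workload.
assert indirect_calls == 100_000
assert indirect_calls // context_switches == 100_000
```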



                  • #29
                    Originally posted by skeevy420 View Post

                    Dammit. Schrödinger's Comment. While I agree with the sentiment, not every PC is internet-facing or needs that kind of security. Something like a massive video encoding or graphics rendering farm behind layers of firewalls and security doesn't need or want mitigations enabled; or, rather, didn't traditionally need or want them enabled.
                    The overwhelming majority of people making those comments are making them on internet-connected PCs, about internet-connected PCs.

                    A graphics rendering farm behind "layers of firewalls" is still connected to networks, runs code that will be poorly vetted, is vulnerable to insider threats, and makes a rather valuable target, as you could learn by asking the major studios that have been hacked.

                    APTs these days leverage a vulnerability on some foothold and then pivot internally. Maybe you use DNS or ARP attacks from the DMZ to get at that render farm; however you do it, the fact that it's two degrees of separation from the internet is just an attack implementation detail, not some panacea.

                    Security is layered because breach is inevitable. Even air-gapped organizations have been compromised in the past, and that gets a lot easier with the overconfident sense of invulnerability that seems to have spread across the community.



                    • #30
                      Originally posted by ll1025 View Post

                      The overwhelming majority of people making those comments are making them on internet-connected PCs, about internet-connected PCs.

                      A graphics rendering farm behind "layers of firewalls" is still connected to networks, runs code that will be poorly vetted, is vulnerable to insider threats, and makes a rather valuable target, as you could learn by asking the major studios that have been hacked.

                      APTs these days leverage a vulnerability on some foothold and then pivot internally. Maybe you use DNS or ARP attacks from the DMZ to get at that render farm; however you do it, the fact that it's two degrees of separation from the internet is just an attack implementation detail, not some panacea.

                      Security is layered because breach is inevitable. Even air-gapped organizations have been compromised in the past, and that gets a lot easier with the overconfident sense of invulnerability that seems to have spread across the community.
                      Can't say that I disagree with that. Unfortunately, networks are controlled by humans, and those humans listen to capitalistic humans, which means they'll have everyone do the bare minimum to get things running and treat spending money on security as a waste of profits; or they're just 1337, I mean, dumb, and think they know better than everyone else. Forcing security by default is a way to protect the greedy and the stupid from themselves.

                      Still, though, I can also recognize the need to run at 100% performance with no mitigations slowing you down, regardless of the risk; especially for machines with no internet access, ones running scientific studies, etc. There's an ironic joke in all of this about a supercomputer or PC cluster running environmental studies using 20% more energy due to mitigations.

