Retbleed Impact, Overall CPU Security Mitigation Cost For Intel Xeon E3 v5 Skylake


  • #31
    Originally posted by skeevy420 View Post
    like how the PS4 uses x86 for games and ARM for the OS.
    That's not what this is. The ARM core is an independent CPU with its own RAM for background tasks like downloading updates while in "standby".

    By the way, ARM is also affected by most of those vulnerabilities, so there's no need to trash talk a specific architecture.

    The solution to the problem would be to design a processor that doesn't use any form of concurrency: just a single core that reads an instruction, executes it, and waits for it to finish. Pipelining would be the only optimization technique left. That's an 80486 CPU; we already had that, but everyone wanted faster CPUs, so https://en.wikipedia.org/wiki/Instru...el_parallelism was added.

    One could maybe make a CPU that has all these optimizations and implements them in a secure way (restricting access to a process's memory in hardware), but performance would suffer, especially for multithreaded processes that need to communicate with each other.
    But timing attacks are probably unavoidable if you do more than one thing in parallel.



    • #32
      Originally posted by Anux View Post
      That's not what this is. The ARM core is an independent CPU with its own RAM for background tasks like downloading updates while in "standby".
      I'm just talking in hypotheticals and that's close enough to what I mean.

      By the way, ARM is also affected by most of those vulnerabilities, so there's no need to trash talk a specific architecture.
      I know. That's why I first called them SimpleSecureArch and InsecureArch.

      The solution to the problem would be to design a processor that doesn't use any form of concurrency: just a single core that reads an instruction, executes it, and waits for it to finish. Pipelining would be the only optimization technique left. That's an 80486 CPU; we already had that, but everyone wanted faster CPUs, so https://en.wikipedia.org/wiki/Instru...el_parallelism was added.

      One could maybe make a CPU that has all these optimizations and implements them in a secure way (restricting access to a process's memory in hardware), but performance would suffer, especially for multithreaded processes that need to communicate with each other.
      But timing attacks are probably unavoidable if you do more than one thing in parallel.
      That's why my idea sort of shifted to putting it on a separate card. That and x86 isn't the only thing it applies to. Long-term, the hybrid idea is kind of stupid due to the sheer number of combinations from architectures, generations, and revisions (all past, present, and future). Why not convert hyper-optimized but vulnerable processing units into separate cards that run isolated from the simplified host OS?

      What's your idea on how we get to have our cake and eat it too?



      • #33
        Originally posted by skeevy420 View Post
        Why can't we plug in a branch prediction unit much like we plug in a graphics processing unit? A PCIe x16, two-slot design with an 8c16t Ryzen and 4 slots of RAM, so your x86 BPU runs isolated from the rest of the OS/system and you don't have to run slow, mitigated code.
        Two main reasons:
        1. The latency would be much too great. PCIe latency is at best around 300 ns -- much worse than DRAM latency -- when CPUs are frequently running at 4 GHz. Having to do a PCIe transaction for each branch would be impossibly slow, not to mention that it would saturate the PCIe bus.
        2. The problem isn't branch prediction per se. The CPU takes actions based on the branch prediction (or any other form of speculative execution) that leave fingerprints. Even though the architectural result of a mispredicted speculative path is discarded, it can still have measurable side effects, such as subtle timing differences in the caches.
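        To make that concrete, here is a minimal sketch of the classic Spectre-v1 bounds-check-bypass pattern (an illustration, not code from the article; names like probe_array are made up). The speculatively executed load leaves a cache fingerprint that survives the rollback:

        // Hypothetical Spectre-v1-style gadget, for illustration only.
        #include <stddef.h>
        #include <stdint.h>

        uint8_t array1[16];
        uint8_t probe_array[256 * 64];   // one cache line per possible byte value

        void victim(size_t x, size_t array1_size) {
            if (x < array1_size) {                               // branch can be mispredicted
                uint8_t secret = array1[x];                      // speculative out-of-bounds read
                volatile uint8_t tmp = probe_array[secret * 64]; // pulls a line into the cache
                (void)tmp;  // the result is discarded, but the cache state change remains
            }
        }
        // An attacker who later times accesses to probe_array can tell which
        // line is hot and thus infer 'secret' -- the "fingerprint" above.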



        • #34
          Bottom line is, it should be all off by default, unless paranoid=on is specified.
          Too much loss for risks close to zero.
          Do we limit all car traffic everywhere to 20 km/h for a risk of accidents? No.
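          For reference (my own illustration, not something from this post): the real-world knob today is the kernel's mitigations= boot parameter (mitigations=off turns them all off), and Linux reports what is currently applied under /sys/devices/system/cpu/vulnerabilities/. A minimal C sketch that dumps that status:

          // Print the kernel's reported status for each CPU vulnerability by
          // reading Linux sysfs; entries vary by CPU and kernel version.
          #include <stdio.h>
          #include <dirent.h>

          int main(void) {
              const char *dir = "/sys/devices/system/cpu/vulnerabilities";
              DIR *d = opendir(dir);
              if (!d) { perror("opendir"); return 1; }
              struct dirent *e;
              while ((e = readdir(d)) != NULL) {
                  if (e->d_name[0] == '.') continue;       // skip . and ..
                  char path[512], line[256];
                  snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
                  FILE *f = fopen(path, "r");
                  if (!f) continue;
                  if (fgets(line, sizeof line, f))
                      printf("%-20s %s", e->d_name, line); // one line per vulnerability
                  fclose(f);
              }
              closedir(d);
              return 0;
          }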



          • #35
            Originally posted by Alex/AT View Post
            Bottom line is, it should be all off by default, unless paranoid=on is specified.
            Too much loss for risks close to zero.
            Do we limit all car traffic everywhere to 20 km/h for a risk of accidents? No.
            No, but you have to wear safety straps or you'll incur a penalty, you have enough safety equipment that it affects fuel economy and vehicle efficiency, and traffic speeds are limited for safety reasons. That's the reason we have to drive fifty-five: some study showed that driving at 55 was a lot safer than driving at 65, so the speed limit was lowered.

            Not to mention that traffic speeds are lowered in school zones, residential areas, and other high-foot-traffic areas for safety reasons. A 15 MPH school zone is like getting hit with Spectre mitigations in Java. Video Game Highway is running at an intentionally limited 60 FPS... I mean 65 MPH... because of traffic laws and me not having a FreeSync display. It'd be nice to go to Video Game Racetrack where I can go as fast as possible.

            I agree with you, just saying that cars aren't the best analogy for bullshit restrictions limiting performance...unless you have a race car on the track or own your own private roads that don't adhere to normal laws and regulations (mitigations=off).



            • #36
              It's a good analogy, actually.
              It's wise to leave the safety straps on where needed (in critical software like OpenSSL), limit everything where needed (on banking systems), and limit speed partially on public roads (shared hosting), but turn it totally **** off for everything else.



              • #37
                Originally posted by Alex/AT View Post
                Bottom line is, it should be all off by default, unless paranoid=on is specified.
                Too much loss for risks close to zero.
                Do we limit all car traffic everywhere to 20 km/h for a risk of accidents? No.
                Car accidents aren't caused by malicious actors who will seek out the roads with 130 km/h limits and build drones to fly across pulling inflatable dummies made to look like pedestrians. mitigations=off is only safe so long as almost nobody uses it.



                • #38
                  Mitigations are a must-have on shared servers.



                  • #39
                    Originally posted by cj.wijtmans View Post
                    Mitigations are a must-have on shared servers.
                    Depends, actually. Since shared servers run a multitude of machines and have mostly random load patterns, the timing conditions required for Spectre exploits to work are not reachable. Remember that all current Spectre-class attacks don't even work well in single-user environments, and they require special per-machine setup that is definitely unattainable under unpredictable conditions.

                    The only exceptions are Meltdown and L1TF, which do not require such strict timing because they both exploit data-dependent cross-privilege cache pollution and can then use very rough timings to detect whether it happened. The direct data dependency puts them somewhat outside the Spectre class of attacks (in which the data does not directly affect the execution result but is instead inferred via side-channel fluctuations that are hard to measure).
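                    As a rough illustration of the "very rough timings" involved (my own sketch, not from this post): a Flush+Reload style probe on x86-64 just flushes a shared cache line, lets the victim run, and then checks whether reloading it is fast (cached) or slow (evicted):

                    // Hypothetical Flush+Reload probe (x86-64, GCC/Clang intrinsics).
                    #include <stdint.h>
                    #include <x86intrin.h>

                    static inline void flush_line(volatile uint8_t *addr) {
                        _mm_clflush((const void *)addr);    // evict the line from all cache levels
                    }

                    // Returns the reload latency in TSC cycles; a "fast" reload means
                    // the victim touched the line in the meantime.
                    static inline uint64_t reload_latency(volatile uint8_t *addr) {
                        unsigned int aux;
                        uint64_t start = __rdtscp(&aux);
                        (void)*addr;                        // load the probed line
                        uint64_t end = __rdtscp(&aux);
                        return end - start;
                    }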
                    Last edited by Alex/AT; 31 July 2022, 03:42 AM.



                    • #40
                      Originally posted by pWe00Iri3e7Z9lHOX2Qx View Post
                      In this particular case, "buy older stuff" may be a valid option too. Anandtech testing showed that strictly in terms of IPC, Skylake was only 2.7% faster on average than Broadwell (the first 14nm part from Intel). Retbleed only impacts gen 6 through gen 8. So you gain a whopping 2.7% with Skylake then promptly lose 11% due to Retbleed. And those old LGA 2011-3 workstations can be had for cheap.
                      It goes even further back than that, I expect. Haswell was only something like 5-6% better IPC than Ivy Bridge despite the large node shrink, so Skylake would still be a net loss even compared to that. oof.
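                      Quick back-of-the-envelope check, assuming the quoted +2.7% IPC gain and the roughly 11% Retbleed mitigation hit simply compose multiplicatively (an assumption, not a measurement):

                      #include <stdio.h>

                      int main(void) {
                          double skylake_vs_broadwell = 1.027; // +2.7% IPC over Broadwell (quoted)
                          double retbleed_hit         = 0.89;  // -11% from Retbleed mitigations
                          double net = skylake_vs_broadwell * retbleed_hit - 1.0;
                          printf("net vs. Broadwell: %+.1f%%\n", net * 100.0); // about -8.6%
                          return 0;
                      }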

