x86 Straight-Line Speculation Mitigation On Track For Linux 5.17



    Phoronix: x86 Straight-Line Speculation Mitigation On Track For Linux 5.17

    The recent activity around x86 (x86_64 included) straight-line speculation mitigation handling is set to culminate in this security feature landing in mainline with the upcoming Linux 5.17 cycle...

    https://www.phoronix.com/scan.php?pa...ine-Linux-5.17

  • #2
    So what is going on here? I thought only ARM was affected by SLS, yet over the past month or so there has been a lot of work on implementing the mitigation for x86 as well. Have I missed some news? If there were some not-yet-disclosed x86 vulnerability behind this, I would have expected the work to be done in secret rather than in the open as it is now...

    • #3
      Originally posted by Vorpal View Post
      So what is going on here? I thought only ARM was affected by SLS, yet over the past month or so there has been a lot of work on implementing the mitigation for x86 as well. Have I missed some news? If there were some not-yet-disclosed x86 vulnerability behind this, I would have expected the work to be done in secret rather than in the open as it is now...
      It could be that slowing hardware down with 'mitigations' was found to be profitable, thanks to the hyped 'speculative execution insecurity' trend.
      It's actually quite amusing how the old urban legends about 'new versions of software being deliberately bloated and slowed down so users will buy more hardware' are starting to look less and less absurd.
      Last edited by Alex/AT; 11 December 2021, 01:26 PM.

      • #4
        Actually, an idea for Phoronix: once all of this reaches mainline, a performance comparison between 'vanilla' distro kernels (e.g. CentOS and Ubuntu) and the same kernels not only booted with mitigations=off, but also recompiled with all the compile-time mitigation options turned off.

        • #5
          The bent pins in the picture give me goosebumps and a chilling feeling of tortured electronics.

          • #6
            Originally posted by Alex/AT View Post
            Actually, an idea for Phoronix: once all of this reaches mainline, a performance comparison between 'vanilla' distro kernels (e.g. CentOS and Ubuntu) and the same kernels not only booted with mitigations=off, but also recompiled with all the compile-time mitigation options turned off.
            I support this request.

            • #7
              Originally posted by Alex/AT View Post
              Actually, an idea for Phoronix: once all of this reaches mainline, a performance comparison between 'vanilla' distro kernels (e.g. CentOS and Ubuntu) and the same kernels not only booted with mitigations=off, but also recompiled with all the compile-time mitigation options turned off.
              That is a good idea! However, I suspect the SLS mitigation has minimal impact, since it just adds an extra instruction after an unconditional branch, and that instruction is never actually executed. The only plausible cost is a slight reduction in cache locality, because less useful code fits into the instruction cache.
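
              To make that concrete, here is a minimal sketch (my own illustration, not from the article) of the compiler-side hardening, assuming a GCC 12+ or recent Clang toolchain where the x86 -mharden-sls= option is available:

              /* sls_demo.c - compile twice and diff the generated assembly:
               *
               *   gcc -O2 -S sls_demo.c -o plain.s
               *   gcc -O2 -S -mharden-sls=all sls_demo.c -o hardened.s
               *
               * With -mharden-sls=all the compiler emits an INT3 (0xCC) after
               * every RET and indirect JMP. The INT3 is never architecturally
               * executed; it only ensures that a CPU speculating straight past
               * the branch hits a trapping instruction instead of whatever
               * bytes happen to follow.
               */
              int add(int a, int b)
              {
                  return a + b;    /* ends in RET; the hardened build appends INT3 */
              }

              typedef int (*op_fn)(int, int);

              int dispatch(op_fn fn, int a, int b)
              {
                  return fn(a, b); /* may become an indirect JMP (tail call), also padded */
              }

              Since the padding sits on paths the CPU never architecturally takes, the only real cost is the extra bytes occupying instruction cache, exactly the locality effect described above.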

              • #8
                Originally posted by Alex/AT View Post

                It could be that slowing hardware down with 'mitigations' was found to be profitable, thanks to the hyped 'speculative execution insecurity' trend.
                It's actually quite amusing how the old urban legends about 'new versions of software being deliberately bloated and slowed down so users will buy more hardware' are starting to look less and less absurd.
                There is a classic quote about not attributing to malice what can be explained by incompetence.

                That said, software certainly gets more bloated over time. I remember booting an old version of Mac OS 7 in an emulator. Man, that UI is super quick and responsive on modern hardware (even when emulating PPC on x86). Personally I don't care much for fancy aesthetics, though such old OSes are missing a lot of features I take for granted these days, such as a searchable start menu/Spotlight-like feature. I do feel it should be possible to strike a good balance. There is probably some Linux WM or DE that does, but I'm okay with Cinnamon currently. Cinnamon runs great on my modern computers but is unusable on my old Core 2 Duo ThinkPad, where I use MATE instead, which is itself an example of software getting more bloated over time.

                So, back to possible explanations of software bloat. I used to be a software developer (I'm currently doing a PhD instead), so here are some of my thoughts:
                • The most important reason is likely that when you don't have a limitation, you naturally won't optimize for it. What do I mean by this? If I only have, say, 64K of RAM, I have to be super careful not to use too much of it. If I have double that, I can worry less and probably write sloppier code that takes less time to write. The same goes for CPU speed and so on. Optimization takes time and money that could be spent on other things if the optimization isn't needed.
                • For commercial software aimed at casual users, at least: it seems eye candy sells, or at least those designing the systems believe it does. This has lessened in recent years, settling on cleaner, more minimalist aesthetics instead (thank $DEITY), but it was absolutely in full swing up until Windows 7 (remember all the transparency introduced in Vista?). Basically: we have the resources, so we can do this extra flashy thing we couldn't before.
                • Some new features genuinely take more resources. I doubt a start menu with search-as-you-type could feel smooth on a typical 486 desktop; it would take too much disk space and RAM to cache. Basically: we have the resources, so we can do this extra useful thing we couldn't before.
                • To some extent it is difficult to optimize a single piece of software to scale both up and down. You might need completely different algorithms for the memory-constrained case and the hyperscale cloud case. That the Linux kernel actually scales to both extremes is very impressive. Sometimes it does this by offering alternative implementations, such as the different memory allocators that can be selected at compile time (SLUB, SLAB, and SLOB, if I remember correctly).
                Last edited by Vorpal; 11 December 2021, 03:53 PM. Reason: Fix some typos and grammar errors. Add another reason I thought of.

                • #9
                  FWIW, spender mentions that non-server Windows 10 has (some) extra "int 3" instructions, and probably has had them for some time, so Linux would be late to the party here: https://mobile.twitter.com/spendergr...35951905292291
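
                  For context on why "int 3" keeps coming up: INT3 is the one-byte x86 breakpoint instruction, and reaching it traps rather than executing whatever follows, which is what makes it useful as padding. A tiny hypothetical demo of that behaviour (mine, not from the tweet):

                  /* int3_demo.c - if INT3 is ever reached, the process receives
                   * SIGTRAP instead of running the bytes that follow. That
                   * trapping behaviour is exactly what the SLS padding relies on.
                   */
                  #include <stdio.h>

                  int main(void)
                  {
                      puts("before the breakpoint");
                      __asm__ volatile("int3");  /* delivers SIGTRAP on x86 Linux */
                      puts("never printed");     /* unreachable unless SIGTRAP is handled */
                      return 0;
                  }

                  Built with plain gcc and run on x86 Linux, it prints the first line and then dies with a trace/breakpoint trap.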

                  • #10
                    Will new CPUs get fixed? It would be silly to carry the mitigation bloat around forever. The mitigations are, after all, inserted very systematically, so new CPUs could have anticipated the pattern, or rather, stopped speculating beyond it. The pessimist in me wonders if future CPUs will be so optimized for looking ahead of the mitigation code that they would mispredict if you removed the mitigations.

                    The last invulnerable CPU was supposedly the A55 (I have seen no word on the A510 yet).
                    Last edited by andreano; 12 December 2021, 06:51 AM.
