Linux Will Keep Core Scheduling Disabled By Default


  • Linux Will Keep Core Scheduling Disabled By Default

    Phoronix: Linux Will Keep Core Scheduling Disabled By Default

    Among the many new features sent in so far this week for the Linux 5.14 merge window was the long-in-development work on "core scheduling" to reduce the Hyper Threading information-leakage risks from side channels and to help ensure deterministic performance on such HT/SMT systems by controlling which tasks can run on a sibling thread. As a follow-up to that article from a few days ago, core scheduling will now be disabled by default...
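    For the curious, what landed is a prctl(2) interface. Below is a minimal sketch, assuming a 5.14+ kernel built with CONFIG_SCHED_CORE; the fallback constant values match the upstream uapi headers, in case your libc headers predate them:

```c
/* Minimal sketch of the core-scheduling prctl(2) interface (Linux 5.14+). */
#include <stdio.h>
#include <sys/prctl.h>
#include <unistd.h>

#ifndef PR_SCHED_CORE
#define PR_SCHED_CORE            62
#define PR_SCHED_CORE_GET        0
#define PR_SCHED_CORE_CREATE     1  /* create a unique cookie for a task */
#define PR_SCHED_CORE_SHARE_TO   2  /* push our cookie to another pid    */
#define PR_SCHED_CORE_SHARE_FROM 3  /* pull a cookie from another pid    */
#endif

int main(void)
{
    /* Tag the calling task with its own cookie: from here on, only tasks
     * carrying the same cookie may run on this task's SMT sibling. */
    if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE,
              0 /* pid 0 = self */, 0 /* scope: this thread only */, 0) != 0) {
        perror("PR_SCHED_CORE_CREATE (needs 5.14+ with CONFIG_SCHED_CORE)");
        return 1;
    }
    printf("pid %d now has a private core-scheduling cookie\n", getpid());
    return 0;
}
```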


  • #2
    > Again, though, the primary driver for core scheduling is the big cloud providers, who want to ensure SMT/HT is safe given the vulnerabilities of recent years, and who don't want to disable it because of the sharp drop in the virtual CPU "vCPU" count they could then offer customers on a per-server basis.
    Can't say that I blame them. No one wants to lose 50% of what they offer.

    I don't see why the average desktop would need this.

    Comment


    • #3
      It's not even so much the lack of trusted/untrusted workloads. You may not want JavaScript from two different browser tabs sharing a core: one tab could be rendering your bank page while the other is running an ad.

      The bigger problem is that for core scheduling to work, every task that potentially can't share a core with another task has to be tagged, and that's difficult or impossible to do automatically on a regular PC. Hyperscalers, however, know exactly which workloads are which, and are already used to breaking them into groups with containers.
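      For what it's worth, a sketch of what that per-task tagging looks like through the prctl interface: tag ourselves, then push the cookie onto one other already-running pid (hypothetical, taken from argv; this requires ptrace permission over the target). Same fallback constants and 5.14+ kernel assumptions as the sketch in the news post above:

```c
/* Sketch: put two tasks into the same core-scheduling trust group. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/prctl.h>

#ifndef PR_SCHED_CORE
#define PR_SCHED_CORE          62
#define PR_SCHED_CORE_CREATE   1
#define PR_SCHED_CORE_SHARE_TO 2
#endif

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid-to-trust>\n", argv[0]);
        return 1;
    }
    long pid = strtol(argv[1], NULL, 10);

    /* Tag ourselves, then copy our cookie onto the target task. Afterwards
     * the two tasks may share a physical core; untagged or differently
     * tagged tasks may not run on our siblings. */
    if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0, 0, 0) != 0 ||
        prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_TO, pid, 0, 0) != 0) {
        perror("PR_SCHED_CORE");
        return 1;
    }
    printf("pid %ld now shares our core-scheduling cookie\n", pid);
    return 0;
}
```

      Doing that correctly for every task on a desktop is exactly the part that doesn't automate well.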

      Comment


      • #4
        > For those not running a mix of trusted/untrusted workloads on your systems, core scheduling won't be of much use.

        I'm not sure about that. Aside from the JS case Developer12 mentions (though NoScript etc. is what you should be using there in the first place, and you should have a different profile for banking-type stuff anyway), it may be that the implicitly stronger core *affinities* it brings are actually a desirable benefit. I'd have to try it out to see for sure, and the effect will be tiny at best, but it's still potentially a non-zero positive.
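        If I do try it out, reading a task's cookie back is the easy way to confirm the tagging actually took hold before benchmarking. A small sketch, with the same constants and kernel assumptions as the snippets above (a cookie of 0 means "untagged"):

```c
/* Sketch: read back the calling task's core-scheduling cookie. */
#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SCHED_CORE
#define PR_SCHED_CORE     62
#define PR_SCHED_CORE_GET 0
#endif

int main(void)
{
    unsigned long long cookie = 0;

    /* pid 0 = current task; the last argument is the user-space address
     * the kernel writes the cookie value to. */
    if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_GET, 0, 0, &cookie) != 0) {
        perror("PR_SCHED_CORE_GET");
        return 1;
    }
    printf("cookie: %#llx (%s)\n", cookie, cookie ? "tagged" : "untagged");
    return 0;
}
```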

        Comment


        • #5
          A quick side note:

          Originally posted by arQon View Post
          > you should have a different profile for banking-type stuff anyway
          This doesn't actually matter. Data can be stolen from any userspace process by any userspace process. Hypothetically, memory containing (e.g.) textual content could be read out of a private tab in Chrome by a page in Firefox. In theory it could even be lifted from your accounting software.

          As for NoScript, it's a cute solution, but it breaks huge swaths of the web. Modern sites rely heavily on running client-side code in the form of JavaScript, and disabling it isn't an acceptable solution for the majority of internet users.

          Comment


          • #6
            > Again, though, the primary driver for core scheduling is the big cloud providers, who want to ensure SMT/HT is safe given the vulnerabilities of recent years, and who don't want to disable it because of the sharp drop in the virtual CPU "vCPU" count they could then offer customers on a per-server basis.
            Originally posted by skeevy420 View Post
            Can't say that I blame them. No one wants to lose 50% of what they offer.
            TL;DR: Cloud providers would lose much less than 50% by disabling HT/SMT, but yes, it's still a major loss for them.

            I mostly use t3a.large/t4g.xlarge nodes (some C5a/R5a/m5a/T3 in the mix). While I don't spend nearly enough time on performance testing to perfectly optimize the node configurations, nor do I consider myself a cloud expert, there's something that many customers do not know or understand when it comes to cloud vCPU performance: https://docs.aws.amazon.com/AWSEC2/l...-concepts.html . This credit system has undocumented behavior too, which requires you to monitor your credits and destroy nodes that sit idle without building them up, and its rules have changed a lot over the years. If you factor in these overcomplicated performance restrictions, you will see how little performance you actually get out of a "vCPU". In many cases three vCPUs are worse than one real CPU core, even if HT/SMT is disabled.

            Still, the convenience in terms of maintenance and ease of deployment that cloud providers offer makes the insane overhead cost of vCPUs worth it for most of my team's needs. CI is usually the area where the most pain is experienced: for example, spending big bucks to keep builds/tests from running slower in the cloud than on a developer's laptop, or in some extreme cases resorting to hacks to cache/speed up CI. HBD !!!
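            To make the vCPU arithmetic concrete, here's a back-of-envelope sketch. Every number in it is an illustrative assumption rather than an AWS specification; the actual baseline for a given burstable type is in the credit docs linked above:

```c
/* Back-of-envelope: sustained throughput of a burstable 2-vCPU node.
 * All three inputs are assumptions for illustration only. */
#include <stdio.h>

int main(void)
{
    double vcpus          = 2.0;  /* e.g. a t3a.large-class node            */
    double baseline       = 0.30; /* assumed sustained fraction per vCPU    */
    double thread_vs_core = 0.6;  /* assumed SMT thread ~60% of a full core */

    /* Once the credit bucket is empty, this is all you get: */
    double sustained = vcpus * baseline * thread_vs_core;
    printf("~%.2f physical cores of sustained throughput\n", sustained);
    return 0;
}
```

            With assumptions anywhere near these, even three such vCPUs land well under one real core of sustained throughput, which is the point above.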

            What I would like to know is why all the major cloud providers are letting their customers take the hit for a problem with their hardware suppliers. Cloud provider customers legally can't do anything about the increased costs and lowered performance - at least that's how I understand the law in the USA (a country that I do not live in). They simply have to raise the prices of their own services, which *their* customers have no choice but to pay, and can't do anything about either. You can see where I'm going with this...

            Originally posted by skeevy420 View Post
            I don't see why the average desktop would need this.
            +1 I am happy with Linus' choice too.

            Comment


            • #7
              Finally!

              Comment


              • #8
                Originally posted by Developer12 View Post
                This doesn't actually matter. Data can be stolen from any userspace process by any userspace process. Hypothetically, memory containing (e.g.) textual content could be read out of a private tab in Chrome by a page in Firefox. In theory it could even be lifted from your accounting software.
                Yes, I'm aware. The point is, anyone who is naive enough to use insecure tools and make it as easy as possible for bad behavior to bite them (i.e. the example you gave) is infinitely more likely to get screwed by the various side-channel attacks than they would be if core-pinning was in place. Although, as you say, anyone stupid enough to use Chrome is screwed anyway, unless they're working from an "empty" browser session.

                > As for NoScript, it's a cute solution, but it breaks huge swaths of the web. Modern sites rely heavily on running client-side code in the form of JavaScript, and disabling it isn't an acceptable solution for the majority of internet users.

                Then f**k those sites. Especially if you're *doing something sensitive like online banking in the first place*. It's really not that hard...

                While I understand your position, it basically boils down to "You should try to make things as easy as possible for leakage to happen, so that when it inevitably DOES happen you can falsely claim that there was nothing you could have done about it". And that's simply not true. There are a huge number of ways that those risks can be reduced or eliminated entirely.

                I agree it's outside the scope of knowledge that the average grunt possesses, but that doesn't necessarily mean it's beyond their ABILITY to manage: people like you and me, who HAVE that knowledge already, should be helping them do exactly that (assuming we care about them), by teaching them, setting up their machines competently for them, and so on. You're not arguing that the problem is actually unfixable, you're just saying "Meh, I can't be bothered to fix it". The end result may be the same, but those are conceptually two very different ways of reaching it - and the latter only leaves the pretense that there isn't actually a solution, when in fact there is.

                I guess, to put it another way: do you, personally, actually do all your banking etc in Chrome, with 50 other tabs open?

                Comment


                • #9
                  Originally posted by arQon View Post
                  Yes, I'm aware. The point is, anyone who is naive enough to use insecure tools and make it as easy as possible for bad behavior to bite them (i.e. the example you gave) is infinitely more likely to get screwed by the various side-channel attacks than they would be if core-pinning was in place. Although, as you say, anyone stupid enough to use Chrome is screwed anyway, unless they're working from an "empty" browser session.
                  There's nothing special about Chrome. Any user-space process can steal data from any other using various Spectre techniques, just as if every user-space process shared the same address space. Firefox and other web browsers are simply an easy way to get trivial ("sandboxed") RCE through JavaScript, which is included and run by 99.999% of webpages, whether or not they serve ads (which 95% of them do).

                  Originally posted by arQon View Post
                  Then f**k those sites. Especially if you're *doing something sensitive like online banking in the first place*. It's really not that hard...
                  Sure it is, when your bank uses JavaScript to make its website function. The bank I use makes it mandatory (as does every other), and there's no way around that no matter how technically savvy you are. Bank websites are but one example; it's equally true of every website people use, and it becomes more true every day. The reality is that every computer user relies on the web, and thus on JavaScript, and would be unable to do anything meaningful without it.

                  These vulnerabilities (and the ready availability of effective exploits) are now a fact of life. Nobody should use hyperthreading on a modern Intel computer just as nobody should still be running Windows XP. The only reasonable excuse is if your computer is air-gapped. To do otherwise in either case is to rely only on the fact that you're one target among equally-unprotected many, but that bites really hard if one day you have bad luck.
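                  And actually turning SMT off doesn't even need a trip into the firmware on recent kernels; since 4.19 there's a runtime sysfs knob. A sketch (run as root; writing "on" re-enables, and "forceoff" makes it sticky until reboot):

```c
/* Sketch: offline all SMT siblings at runtime via sysfs (Linux 4.19+). */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/sys/devices/system/cpu/smt/control", "w");
    if (!f) {
        perror("smt/control (needs root and a 4.19+ kernel)");
        return 1;
    }
    fputs("off", f);  /* accepted values include "on", "off", "forceoff" */
    fclose(f);
    puts("SMT siblings offlined");
    return 0;
}
```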

                  Comment


                  • #10
                    Originally posted by Developer12 View Post
                    There's nothing special about Chrome.
                    Oh, I agree - except that what IS special about Chrome is how aggressively hostile it is to anything that might interfere with Google's ad revenue, and the fact that it is *by design* simply "unsafe". uBlock etc. can help with that, but the point is that Chrome doesn't just DEFAULT to being inherently untrustworthy (and, as you say, other browsers are no better), it does so with such determination that trying to fix it is basically hammering a square peg into a round hole. You can sort-of do a bad job of it, maybe, but only with a great deal of effort, and the end result is still going to be inferior anyway.

                    > Sure it is, when your bank uses JavaScript to make its website function.

                    Sorry - I obviously didn't explain things well, and I think that's left you confused wrt what I was talking about.

                    The problem isn't "the JS on the bank site". It's the JS on *every other site*. Setting up FF and NoScript in a default-deny configuration leads to what is unquestionably a "safer" environment for the user. You can't do anything about a "competently-compromised" site, i.e. Magecart etc, but there are plenty of times that even sites that are supposedly well-run (looking at you, newegg...) will be pulling in scripts from 10 or 20 other sites even in their checkout pages. Having all that garbage discarded is, rather implicitly, better than NOT doing so.

                    > Bank websites are but one example; it's equally true of every website people use, and it becomes more true every day. The reality is that every computer user relies on the web, and thus on JavaScript, and would be unable to do anything meaningful without it.

                    Except most of that simply isn't true, despite your position.
                    The part that IS true is that as more and more sites are built by more and more incompetent clowns, use of unnecessary JS climbs significantly, I agree. And as time goes by, the scenario you suggest exists today will actually come into being. But it isn't there yet - at least, not for the vast majority of the sites that *I* use on a daily basis. YMMV, and apparently does. My whitelist for NoScript has certainly grown over time from exactly that sort of foundational decay, but it's still under two pages and very manageable.

                    > These vulnerabilities (and the ready availability of effective exploits) are now a fact of life. Nobody should use hyperthreading on a modern Intel computer just as nobody should still be running Windows XP. The only reasonable excuse is if your computer is air-gapped. To do otherwise in either case is to rely only on the fact that you're one target among equally-unprotected many, but that bites really hard if one day you have bad luck.

                    And yet, you're arguing that everyone should run unknown code from absolutely everywhere without making any effort at all to prevent that? I don't see how you square those two arguments. You're literally *supporting* my position, not opposing it. (Hence the "apparent confusion" comment earlier).

                    I do think that, at a minimum, browsers should NEVER allow JS to execute in an unfocused tab without explicit permission - which would be good for battery life etc. too, as a benefit. But again, simply not allowing JS to execute in ANY tab without explicit permission is implicitly better in the first place. (And the difference in perceived performance on a *slow* conn like my parents' (~3Mb/s ADSL) is night and day.)

                    As is nearly always the case, security and convenience are opposed, and especially so in the browser case. But it's at least somewhat a continuum still, not an either/or scenario. If someone knowingly opts for "convenience" though, they don't get to then complain that they HAVE worse security as a result, because that's on them. Likewise, they don't get to claim that things CAN'T be made more secure just because they personally either can't be bothered to make them so, or have actively chosen to make things less secure.

                    Saying that "Nobody should use hyperthreading on Intel", while proclaiming that you will run literally any piece of random code from anywhere in the world, is like worrying about being killed by a meteorite while standing in a warzone. Outside of cloud providers, people are infinitely more likely to get screwed by JS than they are by any side-channel attack.

                    Comment
