DigitalOcean & Others Still Working On Core Scheduling To Make Hyper Threading Safer


  • Phoronix: DigitalOcean & Others Still Working On Core Scheduling To Make Hyper Threading Safer

    With vulnerabilities like L1TF and Microarchitectural Data Sampling (MDS) prominently showing the insecurities of Intel Hyper Threading, DigitalOcean and other organizations continue spearheading a core scheduling implementation for Linux that could allow HT to remain enabled while reducing the security risk...

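    For context, core scheduling groups tasks under a shared "cookie" so that only mutually trusting tasks ever get co-scheduled on the SMT siblings of one core; everything else gets the core to itself. Below is a minimal, hedged C sketch using the prctl(PR_SCHED_CORE, ...) interface that core scheduling eventually exposed upstream (around Linux 5.14, with CONFIG_SCHED_CORE); the patch series covered here was still iterating on the user-facing knob, so treat this as illustrative rather than as the API under discussion.

    Code:
    /* Sketch: request a private core-scheduling cookie for this whole process,
     * so its threads only share SMT siblings with each other and never with
     * untagged (potentially untrusted) tasks. Requires CONFIG_SCHED_CORE. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/prctl.h>

    #ifndef PR_SCHED_CORE
    #define PR_SCHED_CORE         62
    #define PR_SCHED_CORE_CREATE   1   /* create a new unique cookie */
    #endif

    int main(void)
    {
        /* pid 0 = the calling task; pid_type 1 = PIDTYPE_TGID (whole thread group) */
        if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0, 1, 0) != 0) {
            perror("prctl(PR_SCHED_CORE_CREATE)");
            return 1;
        }
        printf("core scheduling cookie set for this process\n");
        pause();   /* stay alive so the tagging can be observed from outside */
        return 0;
    }

    With something along these lines, a hypervisor or container runtime can tag each tenant so that two different tenants never land on sibling threads of the same physical core, which is exactly the cloud-provider use case the article describes.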

  • #2
    Holy effing shite. Can we all just agree that trying to save Hyper-Threading is as lost a cause as "Make America Great Again"?

    The more you apply software trickery and alchemy, the more you lose the value and performance that Hyper-Threading has promised and failed to deliver all these years. We see now that all HT ever was, all these years, was a hardware technique to artificially destroy AMD in Intel's trumped-up benchmarks. Now...it's biting them in the ass...HARD. Now the computing world is going heterogeneous and modular. Gone are the days when we need the CPU to be a Vector Processor AND a DSP AND an FPGA AND a DPU AND an NPU AND etc...etc...etc

    Let THOSE hardware modules take care of highly parallel and multi-threaded instructions. Start making the CPU simpler and less vulnerable.



    • #3
      Originally posted by Jumbotron View Post
      Let THOSE hardware modules take care of highly parallel and multi-threaded instructions. Start making the CPU simpler and less vulnerable.
      We evidently need to go back to the days when CPUs didn't have additional instruction sets. Or better... we need to go back to when CPUs didn't even have FPUs, and they were a secondary module which plugged into a completely separate socket on the board. Or why not go all the way back to pen, paper and abacus? Then the only danger is fire or theft (or losing your pen).

      Moving complexity to somewhere else in the system doesn't remove potential vulnerabilities - it simply moves them. They might not be quite so obvious, and they might be harder (or possibly easier) to exploit. But they will still be there.

      ...

      As the article states, this is a big deal for companies which sell "virtual" CPUs. For everyone else, particularly home users? Not so much. The truly worried will already have HT disabled anyway, and running Linux does (for the most part) let those individuals who a) care that much and b) have the time audit the millions of lines of code they will be compiling and running. Or let the community do some of it - which they already do - because I can't see the Linux/BSD community remaining silent if malicious code were identified in essential open source code.
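      For reference, on kernels since roughly 4.19 the "truly worried" don't even need a trip to the BIOS: SMT can be inspected and switched off at runtime through the standard /sys/devices/system/cpu/smt/ interface. A rough C sketch (DISABLE_SMT is just an illustrative compile-time switch of mine, not anything from the article):

      Code:
      /* Sketch: report whether SMT is active; optionally disable it at runtime.
       * Uses the standard sysfs SMT control files available since ~Linux 4.19. */
      #include <stdio.h>
      #include <string.h>

      int main(void)
      {
          char state[32] = "unknown";
          FILE *f = fopen("/sys/devices/system/cpu/smt/active", "r");
          if (f) {
              if (fgets(state, sizeof(state), f))
                  state[strcspn(state, "\n")] = '\0';
              fclose(f);
          }
          printf("SMT active: %s\n", state);   /* "1" = sibling threads online */

      #ifdef DISABLE_SMT
          /* Equivalent to `echo off > .../smt/control`; needs root, lasts until reboot. */
          f = fopen("/sys/devices/system/cpu/smt/control", "w");
          if (!f || fputs("off", f) == EOF) {
              perror("writing smt/control");
              return 1;
          }
          fclose(f);
          printf("SMT disabled\n");
      #endif
          return 0;
      }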



      • #4
        Originally posted by Jumbotron View Post
        Let THOSE hardware modules take care of highly parallel and multi-threaded instructions. Start making the CPU simpler and less vulnerable.
        Yes, let's make CPUs great again!

        Drain the Intel swamp!

        Originally posted by Jumbotron View Post
        Now the computing world is going heterogeneous and modular. Gone are the days when we need the CPU to be a Vector Processor AND a DSP AND an FPGA AND a DPU AND an NPU AND etc...etc...etc
        Why do you need to post this self-contradictory BS?
        "Heterogeneous and modular" literally means that the CPU is now a conglomerate of different processors doing different functions, and that's what "CPUs" are nowadays: on-die you have A LOT of coprocessors for math, video, crypto and even WiFi on newer Intel parts.



        • #5
          Originally posted by Paradigm Shifter View Post
          We evidently need to go back to the days when CPUs didn't have additional instruction sets. Or better... we need to go back to when CPUs didn't even have FPUs, and they were a secondary module which plugged into a completely separate socket on the board. Or why not go all the way back to pen, paper and abacus? Then the only danger is fire or theft (or losing your pen).

          Moving complexity to somewhere else in the system doesn't remove potential vulnerabilities - it simply moves them. They might not be quite so obvious, and they might be harder (or possibly easier) to exploit. But they will still be there.

          ...
          Your logic is correct. But there is a major difference between placing your paper scrolls in an arctic vault and placing them next to unmaintained ammonium nitrate storage.

          It's important to listen to the experts, not to the market or to whatever line of CPUs the top companies choose to sell. I don't have much contact with my local university, and typical journalists and social media are extremely bad with these topics. From what I found, Jim Keller (an expert, IMO) said to start from scratch every ~5 years. I would love to live in a simple world where everything is perfect and we don't need techniques like SMT or branch prediction to make things fast(er), but we don't live in that world. Generally speaking there's a balance between performance, security, and cost. If people demand performance, a company will hire engineers and pay them to design hardware that prioritizes performance. Jumbotron, AFAIK there's no inherent problem with SMT as such, but rather with the way HT was implemented.

          My speculation: Intel has so many vulnerabilities not because it completely disregarded security, but because it did not want to (pay for) redesigning from scratch. Design mistakes happen even when security is relatively important, but the effects are much worse in Intel's case due to its refusal to redesign over the years and to give engineers the room to improve things. I would like to hear your take on this.
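          For what it's worth, the kernel itself reports, per vulnerability, which mitigation is active and whether SMT leaves residual exposure (on affected CPUs the MDS entry even says "SMT vulnerable" while Hyper-Threading is on). A small C sketch that just dumps the standard sysfs files, nothing specific to the core-scheduling patches:

          Code:
          /* Sketch: print the kernel's per-vulnerability mitigation status from
           * /sys/devices/system/cpu/vulnerabilities/ (present since the
           * Meltdown/Spectre-era kernels). */
          #include <dirent.h>
          #include <stdio.h>
          #include <string.h>

          int main(void)
          {
              const char *dir = "/sys/devices/system/cpu/vulnerabilities";
              DIR *d = opendir(dir);
              if (!d) {
                  perror(dir);
                  return 1;
              }
              struct dirent *e;
              while ((e = readdir(d)) != NULL) {
                  if (e->d_name[0] == '.')
                      continue;                    /* skip "." and ".." */
                  char path[512], status[256] = "";
                  snprintf(path, sizeof(path), "%s/%s", dir, e->d_name);
                  FILE *f = fopen(path, "r");
                  if (f) {
                      if (fgets(status, sizeof(status), f))
                          status[strcspn(status, "\n")] = '\0';
                      fclose(f);
                  }
                  printf("%-24s %s\n", e->d_name, status);
              }
              closedir(d);
              return 0;
          }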

          Originally posted by Paradigm Shifter View Post
          As the article states, this is a big deal for companies which sell "virtual" CPUs. For everyone else, particularly home users? Not so much. The truly worried will already have HT disabled anyway, and running Linux does (for the most part) let those individuals who a) care that much and b) have the time audit the millions of lines of code they will be compiling and running. Or let the community do some of it - which they already do - because I can't see the Linux/BSD community remaining silent if malicious code were identified in essential open source code.
          The paranoid/conspiracy theorist in me disagrees with you, especially about this not being a problem for home users. Companies that sell virtual CPUs are potentially liable and hence more motivated to solve these issues. Modern hacking is all about staying undetected, and these vulnerabilities allow exploits to go undetected. It's not like an application behind a reverse proxy, or some antivirus/SELinux/AppArmor service checking everything that connects to your home computer. The majority of home users' traffic is encrypted, and they are running proprietary software that does not allow network-level inspection. If the mitigations have flaws, it would be an unethical individual's or organization's ultimate dream.

          Regarding code auditing and compiling - I agree, people or companies won't remain silent if malicious code were identified. This topic relates to badly designed hardware; arguably there's nothing wrong with the original Linux code, and it would not have been a concern if it had been audited prior to the discovery of the hardware vulnerabilities (by Google).

          It makes sense to use dedicated hardware if you want to manage key generation or run a secure vault service for financial institutions. It's a waste to buy application-specific hardware if you want to play games, browse the web and code securely on the same machine: the initial cost is higher, it takes up more physical space, wastes electricity, and the maintenance is wasted time, etc. One expects to be able to run untrusted software in a VM or sandbox environment; unfortunately, things are not so simple when you have to work around hardware flaws in software.

          Back on topic: I'm really glad for the work that DigitalOcean is doing and the effort they have put in to talk about, solve, and upstream issues that they did not cause. The documentation is also extremely useful! Looking forward to going through it.

