Kernel Address Space Isolation Still Baking To Limit Data Leaks From Foreshadow & Co


    Phoronix: Kernel Address Space Isolation Still Baking To Limit Data Leaks From Foreshadow & Co

    In addition to the work being led by DigitalOcean on core scheduling to make Hyper Threading safer in light of security vulnerabilities, IBM and Oracle engineers continue working on Kernel Address Space Isolation to help prevent data leaks during attacks...


  • #2
    A short-term partial solution for these cloud virtual server providers would be to always sell customers an entire core, i.e. you buy cores in multiples of 2. This way, you never have two customers sharing a core and potentially being able to cross privilege boundaries via these HT vulnerabilities.
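
    Whether two logical CPUs actually share a physical core can be read straight from sysfs; a minimal sketch (Linux-only, using the topology files current kernels expose):

    ```shell
    # For each logical CPU, print the set of hardware threads ("siblings")
    # that share its physical core. On an HT system each core shows up as
    # a pair such as "0,4" or "0-1", depending on how the BIOS enumerates.
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        printf '%s -> ' "${cpu##*/}"
        cat "$cpu/topology/thread_siblings_list"
    done
    ```

    A provider selling "whole cores" would hand a tenant exactly the logical CPUs listed together on one of those lines, never a logical CPU whose sibling belongs to someone else.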



    • #3
      Originally posted by cybertraveler View Post
      A short-term partial solution for these cloud virtual server providers would be to always sell customers an entire core, i.e. you buy cores in multiples of 2. This way, you never have two customers sharing a core and potentially being able to cross privilege boundaries via these HT vulnerabilities.
      For that matter, they could still sell one core as long as the two threads they sell stay isolated to just that core.

      From a layman's perspective, it seems like running a VM or whatever with taskset or numactl, limiting the process to specific threads/cores, would be good enough for this.
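
      As a concrete sketch of that (the PID and CPU numbers below are purely illustrative; check your machine's topology first to find which logical CPUs are siblings):

      ```shell
      # Confine an already-running VM process (PID 1234 is illustrative) to
      # logical CPUs 2 and 3 -- assumed here to be the two hardware threads
      # of one physical core -- so no other tenant can share that core:
      taskset -cp 2,3 1234

      # Or start it confined from the outset; numactl can additionally keep
      # its memory allocations on the local NUMA node:
      numactl --physcpubind=2,3 --localalloc qemu-system-x86_64 ...
      ```

      This pins the process but not the rest of the host, so the host scheduler can still place other work on those CPUs unless they are also isolated (e.g. via cpusets or isolcpus).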



      • #4
        Originally posted by skeevy420 View Post

        For that matter, they could still sell one core as long as the two threads they sell stay isolated to just that core.
        Yeah, that's what I mean. It's hard to write about this because I've noticed the term "core" is used by the hardware guys to refer to a physical core (which can run 2 simultaneous threads), but it's often used by the server hosting guys to refer to a "thread".

        I guess I could say "sell threads in multiples of 2". "Thread" sounds wrong to me though, because that word is often used to refer to process threads, of which you could potentially have thousands running on only a quad-core system.

        Is there even an unambiguous word for a core that may be a physical core or may be a hyperthread? "Fake core"?
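
        For what it's worth, util-linux's `lscpu` dodges the naming problem by reporting each level of the hierarchy separately, reserving "CPU" for a logical CPU (i.e. a hardware thread):

        ```shell
        # lscpu counts sockets, cores, and threads on separate lines, so
        # "CPU(s)" unambiguously means logical CPUs (hardware threads):
        lscpu | grep -E '^(CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket|Socket\(s\))'
        ```

        On an HT machine "Thread(s) per core" reads 2, and CPU(s) = Socket(s) × Core(s) per socket × Thread(s) per core.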



        • #5
          How about a "pair of execution units"?



          • #6
            Originally posted by dweigert View Post
            How about a "pair of execution units"
            That just rolls off the tongue /s

            It works though! "execution units".

            Though... make sure you're on the right website when you're ordering "execution units" for yourself or your Earthly experience might come to a sudden, untimely and unexpected end.



            • #7
              This is how hyperthreading got started: they bolted a second thread's architectural state onto the front of the very deep P4 pipeline. I think they forgot about context switches, because you have to flush the pipeline, and that is a very expensive operation.



              • #8
                Typo:

                Originally posted by phoronix View Post
                about at this week's Linux Plumbers Conference in Lisbon, Portgual.
                It's spelled "Portugal".



                • #9
                  I already wrote it somewhere: faulty hardware should be 1) fixed; 2) recalled & replaced.
                  No other solution works, and the whole effort to build workarounds for faulty hardware that will become obsolete in 5 years anyway is fruitless and futile.
                  I clearly understand that this is also an attempt to limit the impact of future vulnerabilities like this, but:
                  1) there is no guarantee that new vulnerabilities will not bypass this mechanism;
                  2) the cost of making such amendments is extremely high: performance-, effort-, manageability-, complexity- and supportability-wise.
