AMD With Upstream Linux Nears "The Ultimate Goal Of Confidential Computing"


  • AMD With Upstream Linux Nears "The Ultimate Goal Of Confidential Computing"

    Phoronix: AMD With Upstream Linux Nears "The Ultimate Goal Of Confidential Computing"

    More AMD SEV-SNP bits have been upstreamed for the in-development Linux 6.9 kernel, putting the EPYC processor support on a mainline kernel trajectory for "the ultimate goal of the AMD confidential computing side" to hopefully be in great shape come Linux 6.10 later in the year...
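
    For anyone who wants to probe what their hardware actually exposes before chasing the kernel bits, a minimal user-space sketch (assuming a GCC/Clang toolchain on an x86-64 box; this is not from the article itself) can query AMD's CPUID leaf 0x8000001F, which reports the SME/SEV/SEV-ES/SEV-SNP capability bits:

        #include <cpuid.h>
        #include <stdio.h>

        /* CPUID Fn8000_001F (AMD encrypted-memory capabilities):
         * EAX bit 0 = SME, bit 1 = SEV, bit 3 = SEV-ES, bit 4 = SEV-SNP. */
        int main(void)
        {
            unsigned int eax, ebx, ecx, edx;

            if (!__get_cpuid(0x8000001f, &eax, &ebx, &ecx, &edx)) {
                puts("CPUID leaf 0x8000001F not available");
                return 1;
            }
            printf("SME:     %s\n", (eax & (1u << 0)) ? "yes" : "no");
            printf("SEV:     %s\n", (eax & (1u << 1)) ? "yes" : "no");
            printf("SEV-ES:  %s\n", (eax & (1u << 3)) ? "yes" : "no");
            printf("SEV-SNP: %s\n", (eax & (1u << 4)) ? "yes" : "no");
            return 0;
        }

    Whether guests can actually use SNP additionally depends on the firmware and on what the kvm_amd module enables, so treat this only as a capability probe.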


  • #2
    There can be no confidential computing when there's proprietary microcode and proprietary firmware (e.g. for the PSP, GPU, etc.). Release the source, AMD!

    • #3
      Originally posted by mxan View Post
      There can be no confidential computing when there's proprietary microcode and proprietary firmware (e.g. for the PSP, GPU, etc.). Release the source, AMD!
      It won't happen even with the source code viewable. The 'ultimate' would be a perfectly black box whose nature cannot be determined from the outside, now or in the future. Inevitably someone, probably Ben-Gurion University of the Negev if no one else, will poke at it until they find yet another brilliant side channel that lets people discover the contents of the black box well enough that its confidentiality is compromised. Techies often neglect physical security when they're concentrating so hard on their code. Between mistakes in code that can lie undiscovered for decades even with huge numbers of eyes on it, imperfect hardware, and fallible operators, somewhere you're going to find a way in, and it's just as likely to be an analog route (signals intelligence or social engineering) as a broken digital asset (bugs or unanticipated side effects), even with the source code available to audit. "Ultimate" is just marketing drivel.

      • #4
        While SEV-SNP is nice, most of the criticism from this blog post still applies. The device models are all still untrusted. And drivers are usually not written with this in mind.
        This post is a continuation of my previous post about Intel TDX. It’s worth a read before reading this post. As before, I’m not going to introduce TDX itself. If you need a refresher, Intel has good overview material available.
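
        To make the driver point concrete with a toy example (not code from the blog post or any real driver): under SEV-SNP the device model stays host-controlled, so anything it writes into shared memory, such as a descriptor's length or offset, has to be treated as hostile input by the guest driver:

            #include <stdint.h>
            #include <string.h>

            #define RX_BUF_SIZE 2048u   /* guest-private receive buffer size */

            /* Hypothetical descriptor filled in by the untrusted device model
             * in shared (unencrypted) memory; every field is attacker-controlled. */
            struct rx_desc {
                uint32_t len;       /* claimed frame length */
                uint32_t offset;    /* claimed offset into the shared buffer */
            };

            /* Copy a frame from shared memory into private guest memory,
             * rejecting any length/offset the host may have forged. */
            int copy_rx_frame(const struct rx_desc *desc,
                              const uint8_t *shared_buf, size_t shared_size,
                              uint8_t *priv_buf)
            {
                uint32_t len = desc->len;       /* read once; the host can rewrite it */
                uint32_t off = desc->offset;

                if (len == 0 || len > RX_BUF_SIZE)
                    return -1;                  /* bogus length from the device model */
                if (off > shared_size || len > shared_size - off)
                    return -1;                  /* window falls outside shared memory */

                memcpy(priv_buf, shared_buf + off, len);
                return (int)len;
            }

        Most in-tree drivers were written assuming the device side is at least as trustworthy as the kernel, which is exactly the gap being pointed at.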

        • #5
          Originally posted by stormcrow View Post

          It won't happen even with the source code viewable. The 'ultimate' would be a perfectly black box whose nature cannot be determined from the outside, now or in the future. Inevitably someone, probably Ben-Gurion University of the Negev if no one else, will poke at it until they find yet another brilliant side channel that lets people discover the contents of the black box well enough that its confidentiality is compromised. Techies often neglect physical security when they're concentrating so hard on their code. Between mistakes in code that can lie undiscovered for decades even with huge numbers of eyes on it, imperfect hardware, and fallible operators, somewhere you're going to find a way in, and it's just as likely to be an analog route (signals intelligence or social engineering) as a broken digital asset (bugs or unanticipated side effects), even with the source code available to audit. "Ultimate" is just marketing drivel.
          So we have a case of Schrödinger's Security, where our 'security state' is both secure and insecure at the same time ...

          ... until such time as we can open the box to actually find out which state is true?

          • #6
            Great Prosecco "Val d'Oca". (Near Montebelluna)

            • #7
              Originally posted by boelthorn View Post
              While SEV-SNP is nice, most of the criticism from this blog post still applies. The device models are all still untrusted. And drivers are usually not written with this in mind.
              It's definitely just marketing to say these solutions provide what is needed.

              But if you desire this kind of security, I do think these solutions make it more likely to deliver it.

              A very important part of what is going on here is making the group of people who control what runs on the system smaller, and having more guarantees around that. No, not guarantees, since guarantees don't exist; assurances? Let's say improvements.

              Some things to mention to help illustrate how people do these things:

              In the past I worked at a company that did some credit card processing. The software that handled the payments could only be started if keys were provided, and those keys were kept in a physical safe (this was years ago and things have improved, but it illustrates the point). That is a combination of physical and virtual security.

              A hypervisor does not need a lot of software; it should be a minimal set. HP had an OpenStack cloud service, and supposedly they set it up so that no one could log in to the hypervisor machines at all: no SSH access, no console login, etc. The systems would only send out logs and metrics, and a single image was installed on all the machines to keep them identical.

              And we have things like Secure Boot, signed kernels, signed software, and so on.
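
              As a small sanity check of that part of the chain (assuming a UEFI-booted Linux machine with efivarfs mounted, nothing specific to the setups described here), reading the SecureBoot EFI variable looks roughly like this:

                  #include <stdio.h>

                  /* efivarfs exposes each variable as 4 bytes of attributes followed
                   * by its data; for SecureBoot the data is one byte, 1 = enabled. */
                  int main(void)
                  {
                      const char *path = "/sys/firmware/efi/efivars/"
                                         "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c";
                      unsigned char buf[5];
                      FILE *f = fopen(path, "rb");

                      if (!f) {
                          puts("No SecureBoot variable (legacy BIOS boot, or efivarfs not mounted)");
                          return 1;
                      }
                      if (fread(buf, 1, sizeof(buf), f) != sizeof(buf)) {
                          fclose(f);
                          puts("Short read from SecureBoot variable");
                          return 1;
                      }
                      fclose(f);
                      printf("Secure Boot: %s\n", buf[4] ? "enabled" : "disabled");
                      return 0;
                  }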

              If you combine all of this with modern GitOps processes and things like workload identity, TPMs, and a bunch of attestations all around, then you have a completely auditable system, with pull requests and code reviews (ignoring for now the firmware binaries provided by the vendors, which should all be signed and have similar processes in place, probably only allowing certain hashes of each firmware on the machines in the datacenter anyway). You know what goes into the system.
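
              To make the attestation part a bit more concrete, the verifying side usually boils down to comparing the measurement carried in a (signature-verified) report against an allow-list produced by the build pipeline. A minimal sketch follows; the measurement strings and the allow-list here are made up purely for illustration:

                  #include <stdio.h>
                  #include <string.h>

                  /* Hypothetical allow-list the GitOps pipeline produced and reviewed;
                   * real SEV-SNP launch measurements are 384-bit digests. */
                  static const char *allowed_measurements[] = {
                      "placeholder-digest-release-1.2.3",
                      "placeholder-digest-release-1.2.4",
                  };

                  /* Accept a workload only if its attested measurement matches
                   * one of the expected builds. */
                  static int measurement_allowed(const char *measurement)
                  {
                      size_t n = sizeof(allowed_measurements) / sizeof(allowed_measurements[0]);

                      for (size_t i = 0; i < n; i++) {
                          if (strcmp(measurement, allowed_measurements[i]) == 0)
                              return 1;
                      }
                      return 0;
                  }

                  int main(void)
                  {
                      /* In reality this would come from a report whose signature chain
                       * (e.g. back to AMD's VCEK/ASK/ARK keys) was already verified. */
                      const char *reported = "placeholder-digest-release-1.2.3";

                      printf("workload %s\n",
                             measurement_allowed(reported) ? "accepted" : "rejected");
                      return 0;
                  }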

              Only the build servers can sign the binaries they produce, and no interactive access to those machines is possible either.

              This means the group of people creating the system is small. There is no perfect system, but the lowest-paid people on the ground racking servers are now even less likely to get access to it, and the same goes for any contractors who get into the building. It comes down to just those who can sign off on a git commit.

              Then maybe the goal has been reached: it really does come down to finding exploits, side-channel attacks, and the like, but nothing else anymore, which greatly raises the bar. Will we need at least a decade as an industry to improve the drivers and so on? Maybe we'll use Rust in the kernel for it, in the hope that it will prevent a bunch of issues? I don't know. Time will tell.
