Well Known Linux Kernel Developer Recommends Against Buying Skylake Systems
Originally posted by jacob View Post
Which side would you change to? AMD? It's no better blob-wise.
Originally posted by bridgman View Post
Hey, it's not cheap old tech, it's spiffy new tech designed for market trends that didn't happen as quickly as expected (less focus on single-thread performance, more on multi-thread performance).
Now that DX12 and Vulkan are hitting the market, I expect we'll see Bulldozer-family CPUs start to look better.
Originally posted by SystemCrasher View Post
I've discovered they've gone quite evil, actually. Just grep "cert" in AMD firmwares and you'll see. Yeah, they're DRM/TiVo-style wrenches these days. So at the end of the day, not only do they insist on using blobs, they're also making sure you can't replace them. It seems the DRM/signature crap happened somewhere around GCN 1.1/R9 285 and some APUs. I wish AMD luck with this approach, but it seems an open driver is not of much value if you're forced to use tivoized blobs with backdoored anti-user hardware.
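The grep check SystemCrasher describes can be reproduced with a few lines of Python that scan firmware blobs for a literal byte string. The firmware path in the example is an assumption for illustration, not a claim about any particular distro's layout:

```python
# Sketch of the check described above: scan firmware blobs for the
# raw bytes "cert". The glob pattern below is a hypothetical example;
# adjust for wherever your distro puts linux-firmware images.
import glob

def find_marker(pattern: str, marker: bytes) -> list:
    """Return paths of firmware files whose raw bytes contain `marker`."""
    hits = []
    for path in sorted(glob.glob(pattern)):
        with open(path, "rb") as f:
            if marker in f.read():
                hits.append(path)
    return hits

if __name__ == "__main__":
    # e.g. the UVD/VCE images mentioned in the post (path is an assumption)
    for p in find_marker("/lib/firmware/radeon/*.bin", b"cert"):
        print(p)
```

Finding the string this way only shows a certificate blob is embedded in the image; it says nothing by itself about what the certificate is used for.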
Originally posted by SystemCrasher View Post
[...]
Can you be a bit more specific about "backdoored anti-user hardware"? I'm not even sure what that means.
EDIT - if you mean "DRM" then yeah, it's been around for a long time and we've talked about it here maybe 1100 times.
*bridgman goes off to grep "cert" in our microcode images, not remembering seeing anything like that in the source code.
EDIT - OK, those are just SAMU certificates, part of the DRM infrastructure on Windows. I hate to break it to you, but DRM has been a big part of chip designs since r300, although the implementation details change every couple of years. We're not actually implementing protected content support on the open drivers anyway, right?
Last edited by bridgman; 17 April 2016, 05:09 AM.
Originally posted by bridgman View Post
Please explain to me again how this is any different from burning the microcode into the chip, which is what everyone seems to wish we were doing?
Speaking for myself, in an ideal world I would prefer some eFuse or something like it, which I could blow to get "dev-level access" at the cost of utterly losing the ability to deal with DRMed stuff. E.g. it could totally erase or make unavailable all the "security" keys and suchlike, allowing me to opt out of such "features" and letting me break my hardware the way I want (at the cost of a void warranty, of course). Speaking for myself, I do not need the DRMed crap, but I do care whether I can trust my hardware. Looking at AMD heading in the "security" coprocessor direction, it seems I have major issues with trusting hardware like this, for a reason. Some ARMs actually provide similar mechanisms, and some Androids partially implement this kind of idea.
Last edited by SystemCrasher; 17 April 2016, 10:31 PM.
Can you please either stop being so melodramatic or try to be more specific? I can't really talk to you if every response is going to be like ^^^^
Originally posted by SystemCrasher View Post
Good spot. Yet there is one major difference, which is quite important to me. When code is burned into ROM, it can't be replaced for purely technical reasons. If someone signs code and locks others out, it becomes pretty clear it isn't a technical limitation but some malicious intent instead.
Originally posted by SystemCrasher View Post
These days a digital signature/certificate generally means hardware is likely to possess some evil or malicious intent or potentially unwanted behavior, and can't be trusted.
Originally posted by SystemCrasher View Post
I can't remember anyone from AMD saying GCN 1.1 and suchlike actually come with signed firmwares, etc. Or do you mean this cert isn't used to sign the firmware image but is used internally for other purposes? A quick observation was that older ASICs do not have this suspicious "feature" in their UVD/VCE firmwares.
Originally posted by SystemCrasher View PostWell, tell us honestly: are dGPU firmwares signed at this point? UVD/VCE firmwares included.
That said, I keep asking "so why do you trust hardware that has closed source microcode burned into ROM" and nobody ever has an answer.
We keep spiraling around the "oh you can change it so it's software" semantic bafflegab and completely missing the point. If anything I would expect any security-conscious person to *want* pervasive signed closed-source microcode because at the very least that would give them the same level of confidence as they have with microcode-in-ROM HW.
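The signed-microcode scheme bridgman alludes to can be sketched roughly as follows. This is only an illustration: real firmware signing uses asymmetric signatures checked against a key fused into the hardware, but an HMAC from the Python stdlib stands in here so the sketch stays self-contained, and the key and image bytes are made up:

```python
# Illustration of the signed-microcode idea: a loader accepts an image
# only if its tag verifies. Real hardware would use asymmetric
# signatures with a fused public key; a stdlib HMAC is a stand-in here.
import hmac
import hashlib

VENDOR_KEY = b"hypothetical-vendor-key"  # made up for the sketch

def sign_image(image: bytes, key: bytes = VENDOR_KEY) -> bytes:
    """Produce the tag the vendor would ship alongside the microcode."""
    return hmac.new(key, image, hashlib.sha256).digest()

def load_microcode(image: bytes, tag: bytes, key: bytes = VENDOR_KEY) -> bool:
    """Model of the hardware-side check: reject any image whose tag fails."""
    return hmac.compare_digest(sign_image(image, key), tag)

if __name__ == "__main__":
    fw = b"\x00\x01fake-microcode"
    tag = sign_image(fw)
    print(load_microcode(fw, tag))          # True: untampered image
    print(load_microcode(fw + b"!", tag))   # False: modified image rejected
```

The point of the sketch is bridgman's argument: once the check is in place, a RAM-loaded image gives the same tamper guarantee as microcode burned into ROM.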
Originally posted by SystemCrasher View Post
[...]
I haven't been able to find much in the way of good solutions to the "some people want openness and can live without DRM, but nobody wants to pay the extra cost for hardware whose development costs are not subsidized by the DRM-intensive Windows/Mac markets" dilemma, but if you see good ideas please send them along.
Bottom line -- focusing on GPU microcode is barking up the wrong tree; it amounts to expecting us to open up big chunks of our hardware designs simply because we choose to store the microcode in RAM instead of ROM. You should be demanding that we sign the microcode instead, and checking the hash on microcode images if you don't trust the linux-firmware git commit history.
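The hash check bridgman suggests might look something like this minimal sketch. The expected digest would be obtained out of band, e.g. from the linux-firmware git history; no real paths or digests are assumed here:

```python
# Minimal sketch of verifying a local microcode image against a SHA-256
# digest obtained out of band (e.g. from linux-firmware git).
import hashlib
import hmac

def sha256_file(path: str) -> str:
    """Hash a file in chunks so large images don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, expected_hex: str) -> bool:
    """True if the file's digest matches the trusted reference digest."""
    return hmac.compare_digest(sha256_file(path), expected_hex.lower())
```

Usage would be `verify("/path/to/firmware.bin", trusted_digest)`, where both arguments are placeholders you supply yourself.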
IMO the "give me a way to be confident that someone isn't using system management to peek into my running system" concern is a lot more important than "we need open GPU microcode just because you store it in RAM rather than ROM".
Last edited by bridgman; 18 April 2016, 02:48 AM.
Originally posted by bridgman View Post
Can you please either stop being so melodramatic or try to be more specific? I can't really talk to you if every response is going to be like ^^^^
Or simply a desire to provide the same security you would have if the code was burned into the chip, while maintaining the ability to make late-in-development changes or deal with new market conditions (like a desire for open source driver support).
Or making sure that someone doesn't mess with touchy parts of the hardware on millions of systems and let the smoke out.
I would agree modern HW may need some protection at the SW level. But that does not mean one has to seize all control over systems and establish a super-authority which could and would exclusively dictate to others how they should use their systems, etc. It sounds like Orwell's 1984.
What kind of malice did you have in mind?
You're kidding, right? The primary purpose of certificates is and always has been to allow one entity to have confidence that the other entity is who/what it claims to be.
- I haven't the slightest idea what this CA is or what it is up to.
- I haven't explicitly chosen to trust this authority.
- I have not even the slightest idea what they could or would sign.
- I have no idea who could be in possession of the private keys.
- I have no idea about the legal or key-management policies in effect, etc.
- There are usually no means to make my own key(s) and put them in place of the unknown keys, so that I could actually trust it.
So for me it means exactly nothing. Except the fact that I've been locked out and no longer own my hardware, because some other entity dares to order me how I should use it and actively denies me some rights. So it is no longer ownership; it turns into a "lease" (a very expensive one, btw) or a "managed service" (while I haven't asked for this kind of thing).
Again, what is this "evil or malicious intent" you keep hinting at? Please be specific - are you talking about content protection / system robustness or something more?
Certificates have nothing to do with signing microcode images.
Tell us honestly, why does it matter?
Yep, I think there is a real need for something like that, in both CPUs and GPUs. The question is whether it's possible to provide that capability to one class of users without putting the security that other users expect at risk.
It is an ongoing topic of discussion, but what we keep coming back to is that disabling security mechanisms (and making it clear to upper level software that lower level bits can not be trusted) is not enough to allow opening up HW microcode - you actually need different HW controlled by the microcode as well.
Part of the change you're seeing in our HW and SW is moving to a more modular approach, where modular drivers can easily adapt to mix & match hardware blocks without having to write semi-new drivers for each chip, but AFAICS there is still a disconnect between the cost of developing different blocks that can have open microcode and the size of the market willing to buy those parts and give up the associated security.
Quick answer for now...
1. You are correct that the PC ecosystem is being locked down, but the fact that we share HW R&D between Linux/Windows/Mac doesn't mean all those locks are going into Linux. Most of the things that are triggering red flags for you are common-or-garden content protection. Content protection, DRM, whatever you want to call it, does mean that decisions related to certain materials (typically video, but it could be other stuff) are controlled by the copyright holder, not by the sysadmin. That has been the case for a decade or so; the tech just keeps getting better (but we don't use it on consumer Linux).
2. AFAIK requiring physical access to bypass security is not sufficient; it still leaves the door open for "social engineering" attacks. Customers who want security generally do *not* want there to be an easy way of turning it off.
3. Not sure about the latest round of security processing, but our previous versions have been explicitly designed to let third-party code run on the processors. Will find out the current status, although I may not be able to say anything about it.
4. I suspect that some of your concerns have their roots in "coincidence vs causality"... that's the point I was trying to make in previous response. There are a few different trends going on in parallel, but there are probably only a couple of hundred people in the world who can keep all the threads straight and know which parts to worry about and which parts are just passing annoyances.
There is a systemic problem in both the consumer electronics industry and government in general where "the easiest solution is to just take away all freedoms". It doesn't have to be that way but (a) the alternatives which provide security while maintaining freedom tend to be more expensive (higher R&D, less cost sharing between markets), and (b) nobody seems willing to cover the cost (preferring to buy the cheaper but zero-freedom solution 99.9% of the time), so vendors get forced into a one-size-fits-all security approach because the alternative is losing customers.
I would feel more positive about the chances of consumer electronics customers being willing to cover the cost of maintaining freedoms if we didn't keep electing "give us all your money and trust us, we'll tell you lies to make you happy" governments, like we just did up here at pretty much all levels.
Last edited by bridgman; 18 April 2016, 10:29 PM.