Well Known Linux Kernel Developer Recommends Against Buying Skylake Systems


  • #61
    Originally posted by Kano View Post
    Your server workloads are not relevant for desktop users. Also, for current Flash or HTML5 with DRM you need an x86-64 CPU. Try watching Netflix with POWER8, have fun...
    I'm betting most of the torrent clients work just fine.



    • #62
      Originally posted by jacob View Post
      Which side would you change to? AMD? It's no better blob-wise.
      I've found they've gone quite evil, actually. Just grep "cert" in AMD firmwares, you'll see. Yeah, they've gone DRM/TiVo-style these days. So at the end of the day, not only do they insist on using blobs, they also make sure you can't replace them. The DRM/signature crap seems to have appeared somewhere around GCN 1.1/R9 285 and some APUs. I wish AMD luck with this approach, but an open driver is not of much value if you're forced to use tivoized blobs on backdoored anti-user hardware.
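      For the curious, the check I mean is roughly this (an untested sketch; the firmware directory and *.bin pattern are assumptions, adjust for your distro's layout):

      Code:
      # Scan radeon firmware blobs for an embedded "cert" string.
      import glob

      for path in glob.glob("/lib/firmware/radeon/*.bin"):
          data = open(path, "rb").read()
          if b"cert" in data:
              print(path, "has 'cert' at offset", data.find(b"cert"))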



      • #63
        Originally posted by bridgman View Post
        Hey, it's not cheap old tech, it's spiffy new tech designed for market trends that didn't happen as quickly as expected (less focus on single-thread performance, more on multi-thread performance).

        Now that DX12 and Vulkan are hitting the market, I expect we'll see Bulldozer-family CPUs start to look better.
        Zen should be a big jump, assuming it's actually delivered on time and the process can handle a frequency of 3.6 GHz or so below a 100W TDP on the eight-core flagship. Bristol Ridge will be the last of the Excavator cores, and if you're looking for a deal, I expect there will be a price drop on them when the Zen APUs hit next year.



        • #64
          Originally posted by SystemCrasher View Post
          I've found they've gone quite evil, actually. Just grep "cert" in AMD firmwares, you'll see. Yeah, they've gone DRM/TiVo-style these days. So at the end of the day, not only do they insist on using blobs, they also make sure you can't replace them. The DRM/signature crap seems to have appeared somewhere around GCN 1.1/R9 285 and some APUs. I wish AMD luck with this approach, but an open driver is not of much value if you're forced to use tivoized blobs on backdoored anti-user hardware.
          I would imagine it's the media companies that are pushing it. One more reason to boycott anything DRM.



          • #65
            Originally posted by SystemCrasher View Post
            I've found they've gone quite evil, actually. Just grep "cert" in AMD firmwares, you'll see. Yeah, they've gone DRM/TiVo-style these days. So at the end of the day, not only do they insist on using blobs, they also make sure you can't replace them. The DRM/signature crap seems to have appeared somewhere around GCN 1.1/R9 285 and some APUs. I wish AMD luck with this approach, but an open driver is not of much value if you're forced to use tivoized blobs on backdoored anti-user hardware.
            Please explain to me again how this is any different from burning the microcode into the chip, which is what everyone seems to wish we were doing?

            Can you be a bit more specific about "backdoored anti-user hardware"? I'm not even sure what that means.

            EDIT - if you mean "DRM" then yeah, it's been around for a long time and we've talked about it here maybe 1100 times.

            *bridgman goes off to grep "cert" in our microcode images, not remembering seeing anything like that in the source code.

            EDIT - OK, those are just SAMU certificates, part of the DRM infrastructure on Windows. I hate to break it to you, but DRM has been a big part of chip designs since r300, although the implementation details change every couple of years. We're not actually implementing protected content support on the open drivers anyway, right?
            Last edited by bridgman; 17 April 2016, 05:09 AM.



            • #66
              Originally posted by bridgman View Post
              Please explain to me again how this is any different from burning the microcode into the chip, which is what everyone seems to wish we were doing?
              Good spot. Yet there is one major difference, which is quite important to me. When code is burned into ROM, it can't be replaced for purely technical reasons. If someone signs code and locks others out, it gets pretty clear it isn't a technical limitation but some malicious intent instead.

              Can you be a bit more specific about "backdoored anti-user hardware"? I'm not even sure what that means.
              These days a digital signature/certificate generally means the hardware is likely to possess some evil or malicious intent or potentially unwanted behavior and can't be trusted.

              EDIT - if you mean "DRM" then yeah, it's been around for a long time and we've talked about it here maybe 1100 times.
              I can't remember anyone from AMD telling us GCN 1.1 and such actually come with signed firmwares, etc. Or do you mean this cert isn't used to sign the firmware image but rather is used internally for other purposes? A quick observation was that older ASICs do not have this suspicious "feature" in their UVD/VCE firmwares.

              *bridgman goes off to grep "cert" in our microcode images, not remembering seeing anything like that in the source code.

              EDIT - OK, those are just SAMU certificates, part of the DRM infrastructure on Windows. I hate to break it to you, but DRM has been a big part of chip designs since r300, although the implementation details change every couple of years. We're not actually implementing protected content support on the open drivers anyway, right?
              Well, tell us honestly: are dGPU firmwares signed at this point? UVD/VCE firmwares included.

              Speaking for myself, in an ideal world I would prefer some eFuse or something like it, so I can blow it and get "dev-level access" at the cost of utterly losing the ability to deal with DRMed stuff; e.g. it could totally erase or make unavailable all the "security" keys and such, letting me opt out of such "features" and break my hardware the way I want (at the cost of a void warranty, ofc). Speaking for myself, I do not need the DRMed crap, but I do care whether I can trust my hardware. Looking at AMD heading in the "security" coprocessor direction, it seems I'm getting major issues with trusting hardware like this, for a reason. Some ARMs actually provide similar mechanisms, and some Android devices partially implement this kind of idea.
              Last edited by SystemCrasher; 17 April 2016, 10:31 PM.



              • #67
                Can you please either stop being so melodramatic or try to be more specific? I can't really talk to you if every response is going to be like ^^^^

                Originally posted by SystemCrasher View Post
                Good spot. Yet there is one major difference, which is quite important to me. When code is burned into ROM, it can't be replaced for purely technical reasons. If someone signs code and locks others out, it gets pretty clear it isn't a technical limitation but some malicious intent instead.
                Like a desire to provide the same security we would have if the code was burned into the chip, while maintaining the ability to make late-in-development changes or deal with new market conditions (like a desire for open source driver support). Or making sure that someone doesn't mess with touchy parts of the hardware on millions of systems and let the smoke out. What kind of malice did you have in mind?

                Originally posted by SystemCrasher View Post
                These days a digital signature/certificate generally means the hardware is likely to possess some evil or malicious intent or potentially unwanted behavior and can't be trusted.
                You're kidding, right? The primary purpose of certificates is and always has been to allow one entity to have confidence that the other entity is who/what it claims to be. Again, what is this "evil or malicious intent" you keep hinting at? Please be specific - are you talking about content protection / system robustness or something more?

                Originally posted by SystemCrasher View Post
                I can't remember anyone from AMD telling us GCN 1.1 and such actually come with signed firmwares, etc. Or do you mean this cert isn't used to sign the firmware image but rather is used internally for other purposes? A quick observation was that older ASICs do not have this suspicious "feature" in their UVD/VCE firmwares.
                Certificates have nothing to do with signing microcode images. Content protection technology is constantly changing though; it's the old guns vs armour battle (guns always win, which is why I don't work in the DRM team).

                Originally posted by SystemCrasher View Post
                Well, tell us honestly: are dGPU firmwares signed at this point? UVD/VCE firmwares included.
                Tell us honestly, why does it matter? We have to implement robust DRM for our larger markets, as do all our competitors, and nobody wants to pay enough extra for Libre-specific hardware to cover the development costs, at least not yet. The good news is that the direction the industry is moving (chain-of-trust from initial boot) is more supportive of open source implementations than what we had before, but there is still a big pile of associated complexity that everyone shies away from, which leads to things like using an MS key for OS images rather than individual HW vendor keys.

                That said, I keep asking "so why do you trust hardware that has closed-source microcode burned into ROM?" and nobody ever has an answer.

                We keep spiraling around the "oh you can change it so it's software" semantic bafflegab and completely missing the point. If anything I would expect any security-conscious person to *want* pervasive signed closed-source microcode because at the very least that would give them the same level of confidence as they have with microcode-in-ROM HW.
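                To make the ROM-equivalence point concrete: a signed image lets a loader do a check like this before the microcode ever runs (a hedged sketch, not our actual loader; the key and file names are made up):

                Code:
                # Verify an RSA signature over a firmware image before loading it.
                from cryptography.hazmat.primitives import hashes, serialization
                from cryptography.hazmat.primitives.asymmetric import padding
                from cryptography.exceptions import InvalidSignature

                with open("vendor_pubkey.pem", "rb") as f:
                    pub = serialization.load_pem_public_key(f.read())

                image = open("gpu_microcode.bin", "rb").read()
                sig = open("gpu_microcode.sig", "rb").read()

                try:
                    pub.verify(sig, image, padding.PKCS1v15(), hashes.SHA256())
                    print("signature OK - same confidence as microcode-in-ROM")
                except InvalidSignature:
                    print("image modified or unsigned - refuse to load")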

                Originally posted by SystemCrasher View Post
                Speaking for myself, in an ideal world I would prefer some eFuse or something like it, so I can blow it and get "dev-level access" at the cost of utterly losing the ability to deal with DRMed stuff; e.g. it could totally erase or make unavailable all the "security" keys and such, letting me opt out of such "features" and break my hardware the way I want (at the cost of a void warranty, ofc). Speaking for myself, I do not need the DRMed crap, but I do care whether I can trust my hardware. Looking at AMD heading in the "security" coprocessor direction, it seems I'm getting major issues with trusting hardware like this, for a reason. Some ARMs actually provide similar mechanisms, and some Android devices partially implement this kind of idea.
                Yep, I think there is a real need for something like that, in both CPUs and GPUs. The question is whether it's possible to provide that capability to one class of users without putting the security that other users (or major players in the supply chain) expect at risk. It is an ongoing topic of discussion, but what we keep coming back to is that disabling security mechanisms (and making it clear to upper level software that lower level bits can not be trusted) is not enough to allow opening up HW microcode - you actually need different HW controlled by the microcode as well. An e-fuse isn't enough because the toolchains required to support open microcode in that environment would also help you to attack the microcode used in the secure environment.

                I haven't been able to find much in the way of good solutions to the "some people want open-ness and can live without DRM, but nobody wants to pay the extra cost for hardware whose development costs are not subsidized by the DRM-intensive Windows/Mac markets" dilemma, but if you see good ideas please send them along.

                Bottom line -- focusing on GPU microcode is barking up the wrong tree; it amounts to expecting us to open up big chunks of our hardware designs simply because we choose to store the microcode in RAM instead of ROM. You should be demanding that we sign the microcode instead, and checking the hash on microcode images if you don't trust the linux-firmware git commit history.
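                Checking the hash yourself is trivial, something along these lines (the expected digest is a placeholder - take real values from a linux-firmware tree you trust):

                Code:
                # Compare installed microcode images against trusted sha256 digests.
                import hashlib

                KNOWN_GOOD = {
                    "/lib/firmware/radeon/TONGA_uvd.bin": "<sha256 from trusted linux-firmware>",
                }

                for path, expected in KNOWN_GOOD.items():
                    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
                    print("OK " if digest == expected else "BAD", path, digest)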

                IMO the "give me a way to be confident that someone isn't using system management to peek into my running system" concern is a lot more important than "we need open GPU microcode just because you store it in RAM rather than ROM".
                Last edited by bridgman; 18 April 2016, 02:48 AM.



                • #68
                  Originally posted by bridgman View Post
                  Can you please either stop being so melodramatic or try to be more specific? I can't really talk to you if every response is going to be like ^^^^
                  If we're going to be more specific: I'm worried about what AMD is up to, especially after all this "security" processor saga and so on, which isn't going to bring any security to users, just pwnage and/or HW misbehavior instead. That said, you may or may not share my views, and it is really up to you whether you want to respond. The only thing I really care about is the technical correctness of the facts. I.e. I do not want to do a full reverse engineering at this point, and quickly spotting a cert could mean various things. But when I spotted a cert in an Intel BIOS around 2006, I had a forewarning of what they were up to, so I expected stuff like "secure" boot and boot "guard" for a while. Sure enough, Intel has been pretty much in line with my worst fears, building a locked-down ecosystem, something really different from the PCs with an "open" architecture that I knew and got used to.

                  Or simply a desire to provide the same security you would have if the code was burned into the chip, while maintaining the ability to make late-in-development changes or deal with new market conditions (like a desire for open source driver support).
                  There are some differences though. When something is hardwired in ROM, Turing comes into play. Theory says one program can't reliably analyze another arbitrary program, so since I update the OS here and there, ROMed code stands very little chance of doing something nasty (like attempting to pwn the system on some request) without explicit support on the OS side, since it is likely to fail on OSes released after the ROM burning date, uncloaking the misbehavior. Updateable code, on the other hand, could be updated to do nasty things even on recent systems.

                  Or making sure that someone doesn't mess with touchy parts of the hardware on millions of systems and let the smoke out.
                  I believe B. Franklin was right when he said those who would give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety. Looking at how digital sigs and certs perform in HW in practice and how blobs are (ab)used, I get a very clear idea of what he referred to. I guess quite a few Phoronix visitors would agree that the ability to get the code, fiddle with your system the way you want, and so on are "essential liberty". Those who do not care about it are running Windows, and I can see where this road leads: locked-down hardware running a heavily backdoored OS, coming with a really nasty EULA. That's where one ends up after giving up their liberties.

                  I would agree modern HW may need some protection at the SW level. But that does not mean one has to seize all control over systems and establish a super-authority which could and would exclusively dictate to others how they should use their systems, etc. It sounds like Orwell's 1984.

                  What kind of malice did you have in mind?
                  Could be absolutely anything. I've seen BIOSes rejecting "wrong" devices, and ME firmwares coming with backdoors. In a world like this I have to expect absolutely anything in blobs. OTOH I've not seen smoke coming out of hardware except in one single case: nvidia's driver halting the fan, letting the IC reach bizarre temperatures and fail. Ironically, signatures would not have helped in that case; it was the vendor's own code that had the grave bugs.

                  You're kidding, right? The primary purpose of certificates is and always has been to allow one entity to have confidence that the other entity is who/what it claims to be.
                  There are some things which aren't in line with this idea:
                  - I haven't the slightest idea what this CA is and what it is up to.
                  - I haven't explicitly chosen to trust this authority.
                  - I haven't the slightest idea what they could or would sign.
                  - I have no idea who could be in possession of the private keys.
                  - I have no idea about the legal or key management policies in effect, etc.
                  - There are usually no means to create my own key(s) and put them in place of the unknown keys, so that I could actually trust it.

                  So for me it means exactly nothing, except the fact that I've been locked out and no longer own my hardware, because some other entity dares to order me how I should use it and actively denies me some rights. So it's no longer ownership; it turns into a "lease" (very expensive, btw) or a "managed service" (which I haven't asked for).

                  Again, what is this "evil or malicious intent" you keep hinting at? Please be specific - are you talking about content protection / system robustness or something more?
                  About overall system security. When someone installs digital locks and then aggressively demands I "trust" 'em to ensure things are "secure", I quickly get the idea that a new jail welcomes me.

                  Certificates have nothing to do with signing microcode images.
                  OK, then I guess it was a false alarm. But it made me shit bricks, since if AMD were to seriously lock down their hardware, "for better security" or whatever, I guess I would not use it. It seems this has already happened to at least some APUs, which are IIRC unusable without the "security" coprocessor code (so it's not going to improve MY security).

                  Tell us honestly, why does it matter?
                  Because the usefulness of a lock strongly depends on who possesses the key. So far, when some HW vendor talks about "security" it ends up in lockdowns, and it turns out system owners can't possess the keys. So they aren't owners anymore, just guests. Ironically, consumers sometimes even fail to grasp that they aren't the real owners of their systems anymore.

                  Yep, I think there is a real need for something like that, in both CPUs and GPUs. The question is whether it's possible to provide that capability to one class of users without putting the security that other users expect at risk.
                  I guess it must require some hardware-level action, like setting a jumper, pushing a button or shorting test points, to ensure SW can't do the nasty thing on its own. One obvious idea: the eFuse block could simply lack its Vpp programming supply by default, being read-only, so software can't hurt it until someone explicitly decides to take the system over, whatever it takes; the HW action enabling the Vpp supply is taken, and the software is told to bring it on at the same time. A similar idea could work for a WP# write-protect signal. Microcontrollers often allow one to take an IC completely back into the "unsecured" state, regaining full control, BUT the IC implements a HW state machine which wipes all internal content, be it code/data or keys, prior to resetting the protection fuses to the vanilla (erased) state. So one either can't get in, or gets a "vanilla" IC without any secrets. Then they could do whatever they want, but it wouldn't help them get the private keys, "protected" code and so on. One could even "relock" the microcontroller again, protecting one's own code/data/keys. That's what I call a fair feature implementation.
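                  In pseudo-code, the state machine I have in mind looks roughly like this (a toy model only, no real hardware API implied):

                  Code:
                  # Toy model of an "erase before unlock" security state machine.
                  class SecureMCU:
                      def __init__(self):
                          self.locked = True
                          self.secrets = {"drm_key": b"\x00" * 16}  # stand-in for vendor keys
                          self.user_code = b""

                      def unlock(self, vpp_jumper_set: bool) -> None:
                          # HW guarantees the wipe happens before protection is lifted.
                          if not vpp_jumper_set:
                              raise PermissionError("needs physical action; SW alone can't do it")
                          self.secrets.clear()   # all vendor keys and protected code gone...
                          self.user_code = b""
                          self.locked = False    # ...before the owner gets dev-level access

                      def relock(self, code: bytes) -> None:
                          # The owner can protect their own code/data/keys again.
                          self.user_code = code
                          self.locked = True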

                  It is an ongoing topic of discussion, but what we keep coming back to is that disabling security mechanisms (and making it clear to upper level software that lower level bits can not be trusted) is not enough to allow opening up HW microcode - you actually need different HW controlled by the microcode as well.
                  Still, Google publishes Chromebook EC firmware sources these days, and that is a quite sensitive piece of software, right? So it seems opening firmware code does not necessarily jeopardize system security. But sure, it takes some HW design consideration.

                  I haven't been able to find much in the way of good solutions to the "some people want open-ness and can live without DRM, but nobody wants to pay the extra cost for hardware whose development costs are not subsidized by the DRM-intensive Windows/Mac markets" dilemma, but if you see good ideas please send them along.
                  I'm afraid it would soon turn the PC, known for being an open ecosystem, into yet another XBOX/iPhone kind of thing. Though that is going to be really strange in the long run, since consumers would generally be better off with light, low-power mobile devices (which aren't PCs), and those using computers to create stuff do not really need or want digital locks, so I do not get who the target audience of this kind of thing is going to be in the long run. Speaking for myself, I'm not really sure my next computer-like thing is going to be x86 at all, thanks to Wintel's efforts to make the PC a locked-down/backdoored/troublesome platform. Not something I've valued in PCs so far. Ironically, these days e.g. ARMs are getting quite powerful and I can boot at least some of them completely blob-free, with zero resident [del]evil[/del] proprietary code footprint. Some ARM systems even allow one to put their own code into TrustZone, actually putting this feature into service, e.g. to make it harder to steal encryption keys. But it really varies across vendors, and as far as I understand, when it comes to ARMs, AMD has decided to go for total lockdown instead, not letting the system owner into TrustZone at all costs. I do not need or want devices protecting themselves from ... me.

                  Part of the change you're seeing in our HW and SW is moving to a more modular approach, where modular drivers can easily adapt to mix & match hardware blocks without having to write semi-new drivers for each chip, but AFAICS there is still a disconnect between the cost of developing different blocks that can have open microcode and the size of the market willing to buy those parts and give up the associated security.
                  Speaking for myself, maybe I'm overly pessimistic, but I'm seeing the PC ecosystem being locked down. Bringing treachery, a lack of proper information about HW properties, lockouts and overengineering instead of actual security and predictability. Some kernel devs have already got an idea of what's going on. It seems the PC is trying to become something in between an Xbox and an iPhone. Somehow I've always valued the PC for being an open-minded platform; I can't say the same about the Xbox or iPhone. And whatever, just pushing digital locks one level further down, from user mode or the kernel to firmware, is not an answer. At the end of the day, those who care about openness are hardly seeking formal badges. They're seeking a world of open possibilities. If there are locks but no keys, it is a jail, regardless of the exact level where the locks are implemented.



                  • #69
                    Quick answer for now...

                    1. You are correct that the PC ecosystem is being locked down, but the fact that we share HW R&D between Linux/Windows/Mac doesn't mean all those locks are going into Linux. Most of the things that are triggering red flags for you are common-or-garden content protection. Content protection, DRM, whatever you want to call it, does mean that decisions related to certain materials (typically video, but could be other stuff) are controlled by the copyright holder, not by the sysadmin. That has been the case for a decade or so; the tech just keeps getting better (but we don't use it on consumer Linux).

                    2. AFAIK requiring physical access to bypass security is not sufficient; it still leaves the door open for "social engineering" attacks. Customers who want security generally do *not* want there to be an easy way of turning it off.

                    3. Not sure about the latest round of security processors, but our previous versions were explicitly designed to let third-party code run on the processors. Will find out the current status, although I may not be able to say anything about it.

                    4. I suspect that some of your concerns have their roots in "coincidence vs causality"... that's the point I was trying to make in my previous response. There are a few different trends going on in parallel, but there are probably only a couple of hundred people in the world who can keep all the threads straight and know which parts to worry about and which parts are just passing annoyances.

                    There is a systemic problem in both the consumer electronics industry and government in general where "the easiest solution is to just take away all freedoms". It doesn't have to be that way but (a) the alternatives which provide security while maintaining freedom tend to be more expensive (higher R&D, less cost sharing between markets), and (b) nobody seems willing to cover the cost (preferring to buy the cheaper but zero-freedom solution 99.9% of the time), so vendors get forced into a one-size-fits-all security approach because the alternative is losing customers.

                    I would feel more positive about the chances of consumer electronics customers being willing to cover the cost of maintaining freedoms if we didn't keep electing "give us all your money and trust us, we'll tell you lies to make you happy" governments, like we just did up here at pretty much all levels.
                    Last edited by bridgman; 18 April 2016, 10:29 PM.



                    • #70
                      Originally posted by Kano View Post
                      At least Package C8 is reached. Is it using SATA or NVMe?
                      It seems to be using SATA for my HDD (no SSD installed). There's no mention of NVMe in dmesg or lspci. It also has an NV 960M installed (but no proprietary driver).

