Radeon Linux 4.6 + Mesa 11.3 vs. NVIDIA Linux Performance & Perf-Per-Watt


  • #31
    Originally posted by atomsymbol
    As far as I know, an x86 CPU doesn't have a unique private key and public key built into the processor. The CPU has no encryption unit utilizing such a private key.
    Since MS is a "devices and services" company, that is going to be "fixed" soon. Most parts of these nasty schemes are already in place. Good luck getting rid of the (signed) ME firmware, etc... and welcome to the TiVoized x86 world, where the PC is no longer an open platform. Enjoy "secure" boot, boot "guards", and so on.

    There is no hardware encryption of memory.
    It does not add much protection. Dumping or altering data on a modern high-speed memory bus is quite a challenge on its own. Anyone up for that challenge stands a good chance of being a high-profile expert comfortable with other advanced techniques as well, so I wouldn't count on bus encryption to keep them out. There have been numerous cases where attackers bypassed RAM encryption. Actually, modern hardware often does "scrambling" already; it is hardware-specific and exists mostly for electrical reasons. That is why the authors of rowhammer-style tools had to go as far as building their own FPGA-based setup to improve their yield and showcase a better PoC: getting a predictable pattern into the DRAM ICs of a PC proved to be quite a challenge (so most versions of "rowhammer" aren't as efficient as they theoretically could be).

    If the encryption-related bits used by Denuvo aren't partially hidden in the hardware (CPU, memory, PCIe card), there exists a way to crack it.
    Once again, numerous systems hiding keys in hardware or using bus encryption have been cracked. Hiding a key in hardware isn't a panacea; it may (or may not) defer hacking, but if there is sufficient demand and people are willing to pay, it will be cracked eventually. Attackers have gone as far as scanning the IC die with an electron microscope, reading the state of fuses and memories with a beam of electrons, when it gave the desired yield. And the whole point of a public key is that you do not have to hide it: you can watch how the software performs decryption and verification. However, since you lack the private key, unless you can replace both the public key and the software doing the checks, good luck encrypting/signing your own, modified software. That's how secure boot works. Ironically, it is mostly used just to lock users out, ensuring they cannot run any "unapproved" software at all.

    But at the end of the day, if an attacker can physically access a system and fiddle with it, a determined attacker will eventually prevail. An efficient way to defer that is a heavily customized scheme unique to the particular product, so that no attacker is familiar with it and they can only start spending their time on it AFTER the product has been released. If it is a popular DRM scheme that has been used before, there is a high chance attackers will reuse previous knowledge and crack it quickly. To make it a real PITA for attackers, the protection had better be an inherent part of the original product, so that removing it completely is nearly impossible. That makes cracking extremely unrewarding: the crack ends up plagued by bugs. Such schemes can plant nasty surprises, e.g. running fine for a few hours so the cracker thinks it's OK, but then randomly crashing or displaying alerts anyway. The happy cracker releases a patch, users get plenty of woes and can't contact support for obvious reasons, the cracker earns a poor reputation, and users "enjoy" a half-working program. Bummer.

    If each "copy" of the program is truly unique, so that it is no longer a copy but a customer-specific build, one can also plant plenty of subtle, unique watermarks. That would not stop piracy, but the authors would get a very clear idea of who the pirate is. Even a mere build date can uniquely identify a customer; who on earth suspects that a build date is enough to figure out who pirated the program? (A sketch of this trick follows below.) IDA Pro, for example, used these techniques a lot, and numerous pirates were caught and had their keys revoked, despite attempts to "neuter" the watermarks. Even an innocent-looking file creation date can be (ab)used as a unique customer ID. The downside is that a custom protection scheme makes development a more costly and lengthy process.
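
    For illustration, a toy sketch of the build-date watermark, with an entirely hypothetical encoding (a real scheme would be subtler): the "build date" stamped into a customer-specific build is really an encoded customer ID, so a leaked copy names its buyer.

        # Hypothetical sketch: hide a customer ID in a plausible-looking build date.
        # Assumptions (not from any real product): IDs fit in 16 bits, and dates
        # shortly after a fixed epoch look like normal build timestamps.
        from datetime import datetime, timedelta

        EPOCH = datetime(2016, 1, 1)  # nominal start of the release window

        def stamp(customer_id: int) -> str:
            """Encode a 16-bit customer ID as minutes past EPOCH."""
            assert 0 <= customer_id < 2**16
            return (EPOCH + timedelta(minutes=customer_id)).strftime("%Y-%m-%d %H:%M")

        def unstamp(build_date: str) -> int:
            """Recover the customer ID from a leaked binary's build date."""
            delta = datetime.strptime(build_date, "%Y-%m-%d %H:%M") - EPOCH
            return int(delta.total_seconds() // 60)

        leaked = stamp(1337)    # '2016-01-01 22:17' - looks innocent
        print(unstamp(leaked))  # 1337 - the pirate's customer ID

    Strip the watermark? There are a dozen other places to hide the same 16 bits, and the cracker has to find every one of them.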

    Porting is much easier than cracking.
    Actually, open source has already made cracking pointless in many areas; one can just take the open-source program instead :P.



    • #32
      Originally posted by atomsymbol
      I believe that many parts of what you described would be invalidated by each manufactured CPU having a unique private key.
      And how exactly is one supposed to use this private key? I could understand unique PUBLIC keys (or their hashes), where the private part is kept out of reach so that you have to contact the vendor to get your code running at all. That's how mobile CPUs often do secure boot, though the public key does not have to be unique.

      Roughly it looks like this: a hash of the public key is written to one-time-programmable eFuses. Whoever possesses the matching private key is the real owner of the system. On boot, the CPU loads the public key, checks that its hash matches the fused value, and verifies that the first stage of the boot loader is properly signed with that key. The boot loader can then verify further parts of the sequence using the same (or different) keys; each stage checks the next one. If it is desired not to make the system totally locked, one particular stage can permit adding extra keys under certain rules, e.g. demanding that new keys be signed by the owner of one of the pre-existing keys. Theoretically, this warrants utter lockdown. Practically, such schemes have been bypassed craploads of times, one way or another. Something that sounds new on PCs is 10+ year old crap on mobile devices. Can you imagine it? (A rough sketch of such a chain of trust follows.)
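
      A minimal sketch of that chain of trust in Python, assuming Ed25519 signatures and a made-up stage layout (real boot ROMs do this in mask ROM, with vendor-specific key and image formats):

          # Chain-of-trust sketch: fused hash -> public key -> signed stage.
          # pip install cryptography
          import hashlib
          from cryptography.exceptions import InvalidSignature
          from cryptography.hazmat.primitives import serialization
          from cryptography.hazmat.primitives.asymmetric import ed25519

          # The system owner's keypair; only a HASH of the public half is fused.
          owner_priv = ed25519.Ed25519PrivateKey.generate()
          owner_pub_raw = owner_priv.public_key().public_bytes(
              serialization.Encoding.Raw, serialization.PublicFormat.Raw)
          EFUSES = hashlib.sha256(owner_pub_raw).digest()  # one-time programmable

          def boot_rom(pub_raw: bytes, stage: bytes, sig: bytes) -> None:
              # 1. The supplied public key must hash to the fused value.
              if hashlib.sha256(pub_raw).digest() != EFUSES:
                  raise SystemExit("halt: public key does not match eFuses")
              # 2. The boot stage must carry a valid signature under that key.
              pub = ed25519.Ed25519PublicKey.from_public_bytes(pub_raw)
              pub.verify(sig, stage)  # raises InvalidSignature on tampering
              print("stage verified, jumping to it")

          stage1 = b"first-stage bootloader image"
          boot_rom(owner_pub_raw, stage1, owner_priv.sign(stage1))  # boots
          try:
              boot_rom(owner_pub_raw, stage1 + b"!", owner_priv.sign(stage1))
          except InvalidSignature:
              print("halt: tampered stage rejected")

      In a real chain, stage 1 would repeat the same check on stage 2, and so on down to the OS; without owner_priv you cannot sign anything the ROM will accept.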

      Interestingly, the eFused key is usually the same across a large batch of devices, and that does not weaken the secure boot enforcement: either way you need the private half of the key, and somehow it turns out you don't possess it. So you're locked out and have to rely on the good will of the vendor to let you in at some point in the chain of trust, e.g. by signing your key with their own. The downside is that this scheme only allows programming in the real owner once. You can't make such a system behave the way you want; whoever holds the fused key is the real owner, and you're merely a guest who may or may not be allowed in at the owner's discretion. There is no way to unlock such a system, though I think it would be a fair idea to allow unlocking at the cost of losing the ability to play DRMed content and run "protected" apps.

      Consequently, each CPU would have a unique public key. I already assumed this in my previous post where I wrote "unique private key and public key built into the processor".
      IIRC, Sony's PS3 did it in a somewhat similar way, using a hidden CPU key to encrypt their apps, so nobody could even take a look at the code. The funny thing is that they weren't exactly good at math and randomness, which allowed people to re-compute the PS3 private key in a really funny way, using pure math, LOL. (The kind of blunder that sank it is sketched below.) Still, if each and every CPU key is going to be unique, it makes manufacturing and subsequent product management/support really complicated: someone has to keep a large database of keys and somehow allow fine-grained, controlled access to some uses of it. Not to mention that for public-key crypto you probably want the PUBLIC key in the CPU and the SECRET key hidden on the vendor's side, so that it is hard for anyone else to encrypt code in a way the CPU would decrypt and agree to launch.
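
      For the curious, a toy sketch of that kind of blunder, using only the ECDSA signing equation with made-up numbers (no real curve here, so this shows the algebra, not a reproduction of the actual PS3 break): signing two messages with the same nonce k leaks the private key.

          # Toy ECDSA nonce-reuse recovery; the signing equation is
          #   s = k^-1 * (h + r*d) mod n
          n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551
          d = 0x1BADC0DE            # the "hidden" signing key
          k = 0xCAFEBABE            # the nonce reused for every signature
          r = pow(k, 5, n)          # stand-in for the curve point's x-coordinate

          def sign(h: int) -> int:  # ECDSA signing with the fixed nonce
              return (pow(k, -1, n) * (h + r * d)) % n

          h1, h2 = 0x1111, 0x2222   # hashes of two different messages
          s1, s2 = sign(h1), sign(h2)

          # The attacker sees (r, s1, h1) and (r, s2, h2); identical r values
          # betray the reused nonce, and two equations yield two unknowns:
          k_rec = ((h1 - h2) * pow(s1 - s2, -1, n)) % n
          d_rec = ((s1 * k_rec - h1) * pow(r, -1, n)) % n
          assert (k_rec, d_rec) == (k, d)
          print(hex(d_rec))         # the private key, recovered by pure math

      (n here is the P-256 group order, but any prime works for the algebra; needs Python 3.8+ for pow(x, -1, n).)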

      If I am right, x86 CPUs will get unique private keys within a decade or so.
      In case you're blind: nasty secure-boot-like lockout schemes have been around for more than 10 years, dammit.

      (I don't know yet how AMD's Secure Memory Encryption in Zen is related to this, but right now it seems to me that Zen doesn't have a key pair for general purpose use.)
      As you might guess, nobody is going to let you use these keys for general purposes, unless you manufacture the CPU yourself :P. Just take a look around at how eFuses are used.

      I think microcontrollers do it in a fairer way: you can totally unlock the IC and write your own, newer code. But before that becomes possible, the IC completely erases its contents, including all secret keys and "protected" code. So it is both quite unpleasant to crack AND allows a fair takeover. The new code owner can then re-lock the IC for themselves, protecting their own code, if desired.

      Any program that is paid for individually (per unit), by its current nature, contradicts open-source.
      When it comes to programs, paying per unit actually contradicts sanity: the cost of creating a new copy of a program is virtually zero, so charging a fixed price per copy basically looks like some kind of scam. TBH, crowdfunding-like models seem fairer: expenses are honestly declared, and a project only gets the green light if enough humans are willing to cover those expenses so it can achieve the desired goal. Furthermore, if the goal is wildly exceeded, it is good practice to offer bonus options to everyone involved, since at that point the devs can afford even more time to work on their project. Schemes where everyone has to be locked out are definitely MAFIAA-style: "pay us or be unable to run code at all".



      • #33
        Originally posted by atomsymbol
        The CPU will use the private key to decode bytes encoded by the public key. The decoded bytes aren't accessible by software outside of the decoded block(s). The CPU can read the decoded bytes as code and execute the code.

        Games will converge to use this scheme to encode their binary code. When the buyer downloads the game, the download client (e.g: Steam client) will read the public key and send it to the server (e.g: Steam server). The server will use the public key to create a unique binary code block that matches the private key stored in the CPU on which the game will be running.
        Sounds cool. I think this is what MS is going to do with the TPMs that are becoming more and more mandatory. Do we need that stuff in the CPU? Isn't the TPM enough?

        I'm more worried about this becoming a respin of HDCP than anything else.
        Y'know, that system that gets busted every 2 years or so and is easy to avoid anyway, whose main goal is to cause hardware incompatibility.
        Meanwhile, pirates watch DRM-free DVD rips, or sometimes Blu-ray rips, torrented through PopcornTime or similar Netflix-like applications.

        After all, hardware manufacturers have already played this game, and their goal was never to stop piracy.
        Last edited by starshipeleven; 23 May 2016, 10:44 AM.



        • #34
          Originally posted by atomsymbol
          The CPU will use the private key to decode bytes encoded by the public key. The decoded bytes aren't accessible by software outside of the decoded block(s). The CPU can read the decoded bytes as code and execute the code.
          This brings a lot of management woes and business risks. For one, it implies that each and every program vendor must ask the CPU manufacturer's permission to sell each and every copy. This goes wildly beyond "1984" and enables perfect censorship, plus huge business risk if the CPU vendor refuses to sign new copies, not to mention that each and every copy would have to be encrypted on the CPU vendor's side.

          If we assume hardcore DRM is the only goal everyone pursues, it sounds technically possible. Yet DRM does not bring profit on its own, so it is only used as a means to deter pirates. And as you might guess, this scheme is crackable as well, more easily than you could imagine.

          E.g. games are large, complicated things, and they never assume an evil, malicious environment. This means attackers can try to induce failures and subvert the execution flow in useful ways; they do not actually have to crack the encryption to get inside. Microcontrollers, for comparison, run in much better conditions: all memories are on the same die, and the chip can even generate all of its clocks internally (thwarting clock-glitch attacks), though on-chip oscillators have very low accuracy, so only the simplest use cases can live with an inaccurate but secure on-chip oscillator. Microcontrollers also use advanced circuitry to check that power is good, so power glitching is complicated as well; they do it to prevent erroneous operation when batteries run low, etc., but it partially counters some hardware-level attacks as a side effect. Still, numerous MCUs have been cracked and their protected content extracted, whenever it was worth the effort.

          Larger CPUs are inherently more vulnerable, since they depend on plenty of external circuits, so an attacker with physical access can do plenty of nasty things and there is no way to avoid it. It only sounds groundbreaking to those boring PC devs; it's hardly news in embedded, mobile, and many other fields. Then there are countless ways to leak the key, beyond your wildest imagination. If you have an idea of how CMOS works, merely measuring the pulses on the power rail gives an attacker a pretty good picture of what the CPU is doing here and now, which means the encryption key can be recovered by indirect means (see the sketch below). For these reasons, high-security CPUs, e.g. for smart cards, are carefully engineered to resist memory scanning, data leaks via power pulses, and so on; only the simplest, slowest CPU cores can be built like that. The security of larger CPUs against local attacks is inherently low to medium at the very best. And the more complicated a system is, the more unexpected failure modes it exposes, which attackers can and will exploit.
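
          A toy sketch of that power-rail leak, assuming an idealized Hamming-weight power model (real attacks need many noisy traces and usually target an S-box output, but the principle is the same):

              # Toy "power analysis": recover a key byte from simulated power draw.
              # Idealized model: power ~ Hamming weight of (data XOR key) + noise.
              import random

              def hw(x: int) -> int:
                  return bin(x).count("1")  # Hamming weight = number of set bits

              SECRET = 0xA7
              inputs = [random.randrange(256) for _ in range(200)]
              traces = [hw(p ^ SECRET) + random.gauss(0, 0.5) for p in inputs]

              def corr(xs, ys):  # plain Pearson correlation
                  n = len(xs)
                  mx, my = sum(xs) / n, sum(ys) / n
                  cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                  vx = sum((x - mx) ** 2 for x in xs) ** 0.5
                  vy = sum((y - my) ** 2 for y in ys) ** 0.5
                  return cov / (vx * vy)

              # For each key guess, predict the power draw and correlate it with
              # what the "oscilloscope" measured; the best fit is the key.
              best = max(range(256),
                         key=lambda g: corr([hw(p ^ g) for p in inputs], traces))
              print(hex(best))  # 0xa7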

          Games will converge to use this scheme to encode their binary code. When the buyer downloads the game, the download client (e.g: Steam client) will read the public key and send it to the server (e.g: Steam server). The server will use the public key to create a unique binary code block that matches the private key stored in the CPU on which the game will be running.
          The immediate attack I can imagine:
          - The attacker creates a keypair and gives the public key to Steam, pretending it is a CPU key. But it is the attacker's key instead.
          - Steam encrypts the binary and the attacker receives it.
          - The attacker decrypts the binary, gets an unscrewed version of the game, and is free to do whatever. (Sketch below.)
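
          A minimal sketch of that substitution, assuming RSA-OAEP stands in for whatever the real scheme would use (in practice the CPU key would wrap a per-copy symmetric key, since RSA can't encrypt a whole binary directly):

              # Key-substitution sketch: the server encrypts to whatever public
              # key it is handed, and cannot tell a fused CPU key from a fake.
              # pip install cryptography
              from cryptography.hazmat.primitives import hashes
              from cryptography.hazmat.primitives.asymmetric import padding, rsa

              OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                                  algorithm=hashes.SHA256(), label=None)

              # Attacker-generated keypair, presented to the server as a "CPU key".
              attacker = rsa.generate_private_key(public_exponent=65537,
                                                  key_size=2048)

              def steam_server(cpu_pubkey, per_copy_secret: bytes) -> bytes:
                  return cpu_pubkey.encrypt(per_copy_secret, OAEP)

              blob = steam_server(attacker.public_key(), b"per-copy game key")
              print(attacker.decrypt(blob, OAEP))  # "CPU-only" secret, in the clear

          Nothing on the server side distinguishes a fused CPU key from a freshly generated one; only a registry of legitimate keys could, which leads straight to the database problem below.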

          It is possible to thwart this attack by keeping a database of the keys of all CPUs around and rejecting non-existent keys. But that would hand the CPU vendor truly undesirable powers, like censoring everything and even knocking out inconvenient companies. Say, if MS paid Intel a bit, Intel could simply refuse to sign Valve's programs, so everyone would have to bow down to Microsoft instead and Valve could go bankrupt. Funny, isn't it?



          • #35
            Originally posted by atomsymbol
            Putting a distinct key (RSA) in each CPU would yield a decentralized system. Cracking a single CPU does not automatically imply cracking other CPUs.
            Except for the fact that an attacker can generate some arbitrary keypair and pretend it is a CPU key. Thwarting this kind of attack would probably take a centralised database, bringing enormous levels of censorship & centralized control. What if I launch Steam in a VM and its vCPU returns my own key?



            • #36
              Originally posted by atomsymbol
              HDCP is a centralized system.
              HDCP is not centralized. HDCP is a system where there are UNIQUE keys in hardware, checked by hardware. Yes, there are baroque systems in place to revoke the hardware keys of allegedly compromised devices, and the way to obtain keys legally is a pain in the ass even for manufacturers (hence most Asian manufacturers simply don't give a fuck).

              HDCP's main weakness was that it was coded like total crap: seriously, at least 2 different unencrypted key exchanges were caught, the master key was fucking reverse-engineered, and meanwhile there are large supplies of $30 boxes that present themselves as a valid HDCP sink but then hide any device behind them (I know because I use them to fix the retarded incompatibility between $1500+ pieces of equipment caused by nothing but HDCP version mismatches).

              Behind HDCP there is Intel. I doubt that all the morons in the company got moved to the HDCP department, so I suspect it is a deliberate choice: a low-cost, "best-effort" attempt at an antipiracy system that will probably fail horribly, but that allows them to render entire lines of hardware obsolete.

              For example, every 2 years or so there is now a new HDCP version that is not backwards compatible. Fun for all law-abiding citizens.

              Putting a distinct key (RSA) in each CPU would yield a decentralized system. Cracking a single CPU does not automatically imply cracking other CPUs.
              You don't need to crack other CPUs; you only need one key or some compromised hardware to decrypt the program once, then it goes up on The Pirate Bay as usual. Maybe in a wrapper or whatever.

              As a general rule, placing secret keys in hardware is a good way to get them sniffed.



              • #37
                Originally posted by atomsymbol
                Those two terms form a contradiction.
                Nope. The master key is the thing used to generate the private keys of the devices, since they didn't use the gargantuan public database of public keys you are talking about.

                The master key allows easy generation of new, compatible device keys; it's not an integral part of the system once devices are out in the wild. (See the sketch below for why.)
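
                A sketch of why that works, loosely following the published reconstruction of HDCP v1 (toy sizes here; real HDCP uses a 40x40 symmetric matrix of 56-bit secrets, and KSVs have exactly 20 bits set): device keys are linear in the device's public KSV, so any two devices compute the same shared key, and whoever holds the matrix can mint valid keys for any KSV.

                    # HDCP-v1-style master-key sketch (toy parameters).
                    import random

                    N, MOD = 8, 2**16
                    # The master key: a secret symmetric N x N matrix.
                    M = [[0] * N for _ in range(N)]
                    for i in range(N):
                        for j in range(i, N):
                            M[i][j] = M[j][i] = random.randrange(MOD)

                    def make_device():
                        """Licensing: public KSV bit vector + private keys = M @ ksv."""
                        ksv = [random.randrange(2) for _ in range(N)]
                        priv = [sum(M[i][j] * ksv[j] for j in range(N)) % MOD
                                for i in range(N)]
                        return ksv, priv

                    def shared_key(my_priv, their_ksv):
                        """Sum your private keys selected by the peer's public KSV."""
                        return sum(k for k, bit in zip(my_priv, their_ksv) if bit) % MOD

                    (ksv_a, priv_a), (ksv_b, priv_b) = make_device(), make_device()
                    # b.M.a == a.M.b because M is symmetric, so both ends agree:
                    assert shared_key(priv_a, ksv_b) == shared_key(priv_b, ksv_a)

                Because device keys are just linear combinations of matrix rows, key sets leaked from a few dozen devices recover the whole matrix by plain linear algebra, which is essentially what happened to HDCP's master key.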

                How are you proposing to read the private key from the middle of a 14 nanometer CPU?
                Probably by exploiting design flaws. All hardware encryption fails due to design flaws.

                Me? I'd focus on stealing batches of keys from the fabs, BEFORE they are written, or on cracking the master key, because they will use a master key: maintaining a large database of public keys and whatnot is annoying and still risky, since from a large pile of public keys attackers can gather something to bust the encryption.
                Last edited by starshipeleven; 24 May 2016, 11:05 AM.



                • #38
                  Originally posted by atomsymbol
                  A couple of terabytes is gargantuan? It isn't year 1995 now.
                  Hm, good point.

                  The evolution of the private key in CPUs will be spread over multiple CPU generations. Successive generations will have smaller number of hardware design flaws.
                  It will not matter unless you render the older, more flawed CPUs obsolete. And if you do so, at any relevant rate (let alone HDCP's), the most likely outcome is that game/program/whatever developers choose NOT to use the scheme, to avoid heavy flak from both users AND companies, which don't usually like changing a zillion CPUs just because someone said so.

                  Seriously, MS got this more right. They want to do something like what you said, but with the TPM as their vehicle.
                  A TPM can usually be changed (tablets excluded, but who cares anyway), so if non-backwards-compatible changes are ever needed, the module can simply be replaced.

                  AMD/Intel/etc would figure out that the private key needs to be generated by the CPU itself. Only the public key can be read from the CPU.
                  Dunno, that needs a lot of entropy, and low entropy means crappy keys. The entropy must come from outside the chip, which means my fictional team would not be stealing keys but hampering the entropy of a batch of CPUs while they generate their keys, possibly even seeding it. (See below for what starved entropy does to RSA keys.)
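
                  A sketch of why starved entropy is fatal for RSA, assuming a toy keygen whose "random" pool is nearly empty: two CPUs that end up sharing one prime are both factored by a single gcd of their public moduli, which is exactly what the 2012 "Mining your Ps and Qs" survey found on real embedded devices.

                      # Low-entropy RSA keygen: a repeated prime falls to one gcd.
                      # pip install sympy (randprime is just a handy prime source)
                      from math import gcd
                      from sympy import randprime

                      # Starved entropy pool: the whole batch of "CPUs" can only
                      # ever reach these few primes.
                      pool = [randprime(2**31, 2**32) for _ in range(3)]

                      n1 = pool[0] * pool[1]  # device A's public modulus
                      n2 = pool[1] * pool[2]  # device B's: one prime repeats

                      p = gcd(n1, n2)         # attacker sees only PUBLIC moduli
                      assert p == pool[1]
                      print("both factored:", (p, n1 // p), (p, n2 // p))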

                  No master key.
                  I know my chickens. There is no reason for them to make it too safe; it must be "best effort".

                  One could download a large number of RSA public keys from the internet. Nobody cracked RSA yet.
                  I thought you were saying there were many public keys for each private key, which would theoretically allow reverse-engineering the private one, just like HDCP's master key.

                  Anyway, these are examples of the hacks that could be attempted on the CPUs.

                  A cool hack done at a university uses a hiccup in the CPU's power supply to make it cough up bits of the private key over some time (probably not applicable if this crypto core uses only inaccessible internal registers; this is just an example): http://www.engadget.com/2010/03/09/1...ng-cpu-of-ele/

                  This one is even funnier: http://www.rtl-sdr.com/stealing-encr...tic-emissions/
                  Radio emissions from the device are used to sniff the low-level operations the device is performing, and through them, the keys.
                  Last edited by starshipeleven; 24 May 2016, 03:03 PM.



                  • #39
                    Originally posted by atomsymbol
                    Solution: The list of all public keys used in CPUs and generated by AMD/Intel/etc will be public. Steam will check that the public key you submit is in the list.
                    Still, a DB of this scale is subject to all kinds of abuse, ranging from privacy breaches to attempts to deny competitors access to this information, whether by DoS attacks or via private agreements.

                    The CPU manufacturer does not keep a list of the private keys used in CPUs. The private key is put into the CPU and all other copies of the private key are destroyed right after it is put into the CPU - the private key exists only in the CPU.
                    Realistically, at least some CPU manufacturers would keep the private keys. If we take a look at "engineering logins", they already do exactly that sort of thing.

