Random Is Faster, More Randomness In Linux 3.13


  • #1

    Phoronix: Random Is Faster, More Randomness In Linux 3.13

    The /dev/random changes went in for the Linux 3.13 kernel, and the pull request made for interesting reading for this very promising next kernel release. While not in Linux 3.13, it's mentioned that the Linux kernel might also end up taking a security feature from the FreeBSD playbook...

    http://www.phoronix.com/vr.php?view=MTUxNjk

  • #2
    Am I the only one who read "Random" as "Radeon"? Michael has been heavily covering Radeon driver news these days


    • #3
      The Android flaw was bad for Bitcoin wallets: the bug made the "randomness" on Android devices (Android is based on the Linux kernel!) predictable. An attacker could thus "randomly" generate the exact same private keys that somebody else had already generated on their Android device.
      This randomness and entropy improvement to Linux is of course good, but Google is too stupid to care right now, so don't generate your private keys on Android; generate them on Linux or *BSD, then import them to your Android device. Android is a sinking ship...


      • #4
        Originally posted by powdigsig View Post
        Android is a sinking ship...
        Let's not go nuts. Certainly, Android is a subset of Linux with an incredibly crappy Java-based userland. But sales are better than ever: http://www.gartner.com/newsroom/id/2573415

        Smartphones may be twice as fast as they were in 1973, but your average consumer is as drunk and stupid as ever.


        • #5
          Originally posted by powdigsig View Post
          The Android flaw was bad for Bitcoin wallets: the bug made the "randomness" on Android devices (Android is based on the Linux kernel!) predictable. An attacker could thus "randomly" generate the exact same private keys that somebody else had already generated on their Android device.
          This randomness and entropy improvement to Linux is of course good, but Google is too stupid to care right now, so don't generate your private keys on Android; generate them on Linux or *BSD, then import them to your Android device. Android is a sinking ship...
          Sorry, but you obviously have no idea what you're talking about.

          There are at least three layers of indirection between /dev/random and the Bitcoin application. Just because there is a bug in Dalvik doesn't make /dev/random faulty. As much as it can interfere with a good rant, taking off the fanboy goggles is good for your health.


          • #6
            While I never understood how people can find predictability in /dev/random on any system, I also don't understand how people who care about pure random numbers don't make a USB device that actually generates purely random numbers. I remember hearing that it is possible to get 100% pure random numbers (from a digital perspective) using a very tiny amount of an element like americium and some sensors that read the gamma rays it emits. While nothing in physics is 100% unpredictable, these gamma rays are generated at the atomic level, which is so hard to measure that you might as well call it perfectly random. I figure the only problem with this type of device is that it's likely affected by temperature. You still wouldn't be able to predict the exact number it generates, but you could at least figure out the range it would be in. So for example, if it's 20C in the room you might get a number from 10000 to 50000, but if it's 30C you might get a number from 20000 to 80000. I could be wrong though.


            • #7
              Originally posted by siavashserver View Post
              Am I the only one who read "Random" as "Radeon"? Michael has been heavily covering Radeon driver news these days
              I accidentally read it as "Radeon" as well. xD


              • #8
                Originally posted by schmidtbag View Post
                While nothing in physics is 100% unpredictable,
                Wave function collapse is truly random and can be used to generate 100% random numbers. It is in fact the only source of true randomness in our Universe.


                • #9
                  Originally posted by schmidtbag View Post
                  While I never understood how people can find predictability in /dev/random on any system
                  People can find predictability with /dev/urandom, not with /dev/random. But that's kind of by design... /dev/urandom was created so that reads from it would never block even when the system hasn't generated enough entropy. Most apps don't need cryptographically strong randomness, and those that do should be using /dev/random.
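
                  For anyone who wants to see that behavior first-hand, here is a minimal Python sketch (my illustration, not something from the kernel docs) that reads from both devices and checks the kernel's current entropy estimate:

                  Code:
                  import os

                  # The kernel's current estimate of available entropy, in bits.
                  with open("/proc/sys/kernel/random/entropy_avail") as f:
                      print("entropy_avail:", f.read().strip())

                  # /dev/urandom never blocks, even when the pool is drained.
                  with open("/dev/urandom", "rb") as f:
                      print("urandom:", f.read(16).hex())

                  # /dev/random may block until enough entropy has been gathered.
                  with open("/dev/random", "rb") as f:
                      print("random: ", f.read(16).hex())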


                  • #10
                    Originally posted by schmidtbag View Post
                    While I never understood how people can find predictability in /dev/random on any system,
                    They find patterns in the entropy source that weren't previously identified. It's not that you can exactly predict what comes out; you can just characterize the limited entropy in the pool accurately enough to enumerate the possible outcomes.
                    I also don't understand how people who care about pure random numbers
                    Which should be everyone who uses the internet - the security of things like TLS relies on it. Unless you like just handing out login credentials to everyone between you and the site, or your credit card info.
                    don't make a USB device that actually generates purely random numbers.
                    Easier said than done. "Randomness" is something that is easy to disprove once you know what's wrong with it, but essentially impossible to truly prove. The problem isn't finding a good source of randomness -- the existing I/O devices we have are good enough on that front -- the problem is distinguishing which of the numbers you're getting are and aren't random. How does the Linux kernel know whether 10101010 was a byte of patterned data or random data? By itself, it's impossible to tell, since all random data consists of equally likely bit strings, and any finite numerical sequence, including bit sequences, can be represented by a function (in fact, by infinitely many functions). The only thing the kernel can do is run statistical tests on the sequences it's getting, which can assign probabilities to how likely it is that a given sequence is random (see the sketch at the end of this post).
                    I remember hearing that it is possible to get 100% pure random numbers (from a digital perspective) using a very tiny amount of an element like americium and some sensors that read the gamma rays it emits.
                    Possible, yes, but not that marketable, especially given that, again, current hardware sources generate enough randomness (the milliseconds between keystrokes when you type, for example, are usually good enough when you get enough of them). Again though, there are a million things that could go wrong with your "pure" entropy source -- faulty hardware, electromagnetic interference on the line, chain reactions that skew the weighting, and so on -- such that the raw 1s and 0s the kernel gets aren't random enough for, say, a good GPG key. That sort of device would almost certainly alleviate most of the problems people complain about in current /dev/random implementations today (though not all of them, such as insufficient entropy at boot time, since it takes a while to gather entropy).
                    While nothing in physics is 100% unpredictable, these gamma rays are generated at the atomic level, which is so hard to measure that you might as well call it perfectly random.
                    No, in this case you're using quantum mechanics, which means the process itself is perfectly random. It's other factors you'd have to worry about.
                    I figure the only problem with this type of device is that it's likely affected by temperature. You still wouldn't be able to predict the exact number it generates, but you could at least figure out the range it would be in. So for example, if it's 20C in the room you might get a number from 10000 to 50000, but if it's 30C you might get a number from 20000 to 80000. I could be wrong though.
                    Thermodynamic noise could be an issue I suppose, yes.

                    Originally posted by damg View Post
                    People can find predictability with /dev/urandom, not with /dev/random.
                    No, they find predictability with both.
                    But that's kind of by design... /dev/urandom was created so that reads from it would never block even when the system hasn't generated enough entropy.
                    Sort of correct, but /dev/urandom still runs the existing entropy pool through PRNGs to extend it (/dev/random uses PRNGs too, but only to sanitize entropy, not to extend it). That means it isn't predictable "by design"; really, you should never be able to tell the difference between the output of /dev/random and /dev/urandom, it's just that /dev/urandom will generate pseudorandom numbers from entropy amounts arbitrarily smaller than the desired output. For example, a machine that is somehow really broken and just feeds 0s to the entropy pool will produce /dev/urandom output with the same (zero) entropy at boot as for the rest of the session. But if you simply use up all your entropy on a normal machine after having had some at some point, /dev/urandom would be a far more secure source than the broken machine.
                    Most apps don't need cryptographically strong randomness, and those that do should be using /dev/random.
                    "Should" being the opportune word here. I can't recall where I saw the statistic, but a depressing number of cryptographic android services were found to use /dev/urandom to improve performance.


                    • #11
                      Originally posted by Szzz View Post
                      Wave function collapse is truly random and can be used to generate 100% random numbers. It is in fact the only source of true randomness in our Universe.
                      Originally posted by tga.d View Post
                      No, in this case you're using quantum mechanics, which means the process itself is perfectly random.
                      This very much depends on your interpretation of quantum mechanics. Even interpretations which include collapsing wavefunctions do not necessarily include randomness/nondeterminism. (Also the wave function can be restored if you forget what you measured according to some interpretations.)

                      http://en.wikipedia.org/wiki/Interpr...nterpretations


                      • #12
                        Originally posted by chithanh View Post
                        This very much depends on your interpretation of quantum mechanics. Even interpretations which include collapsing wavefunctions do not necessarily include randomness/nondeterminism. (Also the wave function can be restored if you forget what you measured according to some interpretations.)

                        http://en.wikipedia.org/wiki/Interpr...nterpretations
                        I certainly don't fully understand QM, but I was under the impression that the effects are pretty much undeniably empirically random -- even if there is some "deterministic" explanation as to why, you can't use it to predict outcomes with anything beyond probabilistic certainty.


                        • #13
                          Originally posted by schmidtbag View Post
                          While I never understood how people can find predictability in /dev/random on any system, I also don't understand how people who care about pure random numbers don't make a USB device that actually generates purely random numbers. I remember hearing that it is possible to get 100% pure random numbers (from a digital perspective) using a very tiny amount of an element like americium and some sensors that read the gamma rays it emits. While nothing in physics is 100% unpredictable, these gamma rays are generated at the atomic level, which is so hard to measure that you might as well call it perfectly random. I figure the only problem with this type of device is that it's likely affected by temperature. You still wouldn't be able to predict the exact number it generates, but you could at least figure out the range it would be in. So for example, if it's 20C in the room you might get a number from 10000 to 50000, but if it's 30C you might get a number from 20000 to 80000. I could be wrong though.
                          You can get 100% random numbers by simply looking at the least significant digits of the system time when events occur; there are advantages to having jitter in those cases. Of course, that is a pretty slow way to gather data, which is why systems with tons of things happening on them, like servers, can generate so much randomness.
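
                          A toy Python sketch of that timing-jitter idea (illustrative only; the kernel does far more mixing and whitening than this):

                          Code:
                          import time

                          def jitter_byte():
                              """Build one byte from the low bit of timer jitter."""
                              byte = 0
                              for _ in range(8):
                                  t0 = time.perf_counter_ns()
                                  time.sleep(0.001)  # any variable-latency "event" works
                                  delta = time.perf_counter_ns() - t0
                                  byte = (byte << 1) | (delta & 1)  # keep only the last bit
                              return byte

                          print([jitter_byte() for _ in range(4)])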


                          • #14
                            Originally posted by tga.d View Post
                            I certainly don't fully understand QM, but I was under the impression that the effects are pretty much undeniably empirically random -- even if there is some "deterministic" explanation as to why, you can't use it to predict outcomes with anything beyond probabilistic certainty.
                            I don't know that probabilistic and random are the same thing. IIRC, the NSA had a probabilistic exploit for some piece of software (something about... key generation, maybe?), but a random process would make any single event as likely as any other, while the NSA took advantage of certain specific values occurring more often than they should have, had the generator been random.


                            • #15
                              Originally posted by liam View Post
                              I don't know that probabilistic and random are the same thing. IIRC, the NSA had a probabilistic exploit for some piece of software (something about... key generation, maybe?), but a random process would make any single event as likely as any other, while the NSA took advantage of certain specific values occurring more often than they should have, had the generator been random.
                              Yes, a probabilistic process and a random process are the same thing if we define "randomness" as entropy, since entropy is really all that matters in this setting; it's just that uneven probabilities make the entropy weaker. I have a blog post that sort of goes into the details in the math section, though not exactly:
                              http://tgad.wordpress.com/2013/09/18/what-is-entropy/
                              You don't have to read all that though, the important part is this:
                              H(X) = P(x_1)*-log(P(x_1))+P(x_2)*-log(P(x_2))+...+P(x_n)*-log(P(x_n))
                              where H(X) is the total entropy, and P(x_i) is the probability of a particular outcome. That alone proves that entropy is entirely defined by probability, but you bring up a good point about events that aren't equally likely, so let's compare a couple values.

                              First, a flip of a fair coin, where we'll define the random variable X to be the function mapping "heads" to 0 and "tails" to 1 (I go a little bit into what that really means in the post, though again the specifics aren't really important).
                              H(X) = P(0)*-log(P(0))+P(1)*-log(P(1)) = 1/2*1 + 1/2*1 = 1 bit of entropy (using log base 2). Exactly as expected: a flip of a coin adds one bit of entropy to the output.

                              Now let's say it's a loaded coin, with one side, let's say tails, having probability 3/4 (and the other having 1/4, since the probabilities of all outcomes always add to 1).
                              H(X) = P(0)*-log(P(0))+P(1)*-log(P(1)) = 1/4*2 + 3/4*0.415 = 0.811 bits of entropy. The astute observer will notice that 0.811 < 1. So making the coin a loaded coin reduced the entropy of the coin flip by almost 20%, *but it still has entropy*. If I flip my loaded coin twice, I have 1.62 bits of entropy, which is more than if I flipped the non-loaded coin once. It's fairly well known that you can add entropy values, but we'll go ahead and do the math to be sure on this. Define X to be two coin flips, mapping heads to 0 and tails to 1 (e.g. "heads tails" is 01 and "tails heads" is 10).
                              Two flips of a fair coin:
                              H(X) = P(00)*-log(P(00)) + P(01)*-log(P(01)) + P(10)*-log(P(10)) + P(11)*-log(P(11))
                              = 1/2*1/2*2+1/2*1/2*2+1/2*1/2*2+1/2*1/2*2 = 2 bits of entropy. Again, just as expected.

                              Two flips of the unfair coin:
                              H(X) = P(00)*-log(P(00)) + P(01)*-log(P(01)) + P(10)*-log(P(10)) + P(11)*-log(P(11))
                              = 1/4*1/4*4+1/4*3/4*2.42+3/4*1/4*2.42+3/4*3/4*0.830 = 1.62 bits of entropy, yet again the value that was expected.

                              The point is, you can use events that don't have equal probability as a source of entropy. So then, what happened with the NSA taking advantage of non-equally-likely events? Well, the problem doesn't occur because the events aren't equally likely; it occurs because the people using those events assume they are equally likely, so the entropy they have is actually less than what they think it is. If I need 100 bits of entropy and I'm using coin flips, I'm going to assume that I need 100 coin flips. If it turns out that the coin was the loaded coin, I'd really only have 81 bits of entropy while thinking I had 100, and that's a big problem. But if I know that the coin is the loaded coin, I can just use 124 flips and have 100.6 bits of entropy at my disposal, just as good as if I'd used the fair coin (I just need 24 additional flips). So as long as you know the probability of each event you're using to feed your entropy pool, you can use whatever sort of event you'd like (though ones with more even distributions will generate entropy faster).
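
                              If it helps, the whole calculation above fits in a few lines of Python (my sketch of the same formula):

                              Code:
                              import math

                              def entropy(probs):
                                  """H(X) = sum of P(x) * -log2(P(x)) over all outcomes."""
                                  return sum(p * -math.log2(p) for p in probs if p > 0)

                              print(entropy([0.5, 0.5]))          # fair coin: 1.0 bit
                              print(entropy([0.25, 0.75]))        # loaded coin: ~0.811 bits
                              print(2 * entropy([0.25, 0.75]))    # two loaded flips: ~1.62 bits
                              print(124 * entropy([0.25, 0.75]))  # 124 loaded flips: ~100.6 bits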
