Random Is Faster, More Randomness In Linux 3.13


  • #11
    Originally posted by chithanh View Post
    This very much depends on your interpretation of quantum mechanics. Even interpretations which include collapsing wavefunctions do not necessarily include randomness/nondeterminism. (Also the wave function can be restored if you forget what you measured according to some interpretations.)

    http://en.wikipedia.org/wiki/Interpr...nterpretations
    I certainly don't fully understand QM, but I was under the impression that the effects are pretty much undeniably empirically random -- even if there is some "deterministic" explanation as to why, you can't use it to predict outcomes with anything beyond probabilistic certainty.



    • #12
      Originally posted by schmidtbag View Post
      While I never understood how people can find predictability in /dev/random on any system, I also don't understand how people who care about pure random numbers don't make a USB device that actually generates purely random numbers. I remember hearing that it is possible to get pure, 100% random numbers (from a digital perspective) using a very tiny amount of an element like Americium and some sensors that read the gamma rays it emits. While nothing in physics is 100% unpredictable, these gamma rays are generated at the atomic level, which is so hard to measure that you might as well call it perfectly random. I figure the only problem with this type of device is that it's likely affected by temperature. You still won't be able to predict the exact number it generates, but you can at least figure out the range it would be in. So for example, if it's 20C in the room, you might get a number from 10000 to 50000, but if it's 30C you might get a number from 20000 to 80000. I could be wrong though.
      You can get 100% random numbers by simply looking at the least significant digits of the system time when events occur. There are advantages to having jitter in those cases. Of course, that is a pretty slow way to gather data, which is why systems with tons of things happening on them, like servers, can generate so much randomness.
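      For illustration, here's a toy Python sketch of that timing-jitter idea (jitter_bits is just a made-up helper; a real kernel entropy pool does far more mixing and much more conservative entropy accounting than this):
      Code:
      import time

      def jitter_bits(n_bits, bits_per_sample=2):
          """Collect n_bits of raw jitter from the low bits of a nanosecond clock."""
          value = 0
          collected = 0
          while collected < n_bits:
              time.sleep(0.001)                       # stand-in for an unpredictable event (e.g. an interrupt)
              t = time.perf_counter_ns()              # high-resolution timestamp
              value = (value << bits_per_sample) | (t & ((1 << bits_per_sample) - 1))
              collected += bits_per_sample
          return value

      print(hex(jitter_bits(64)))                     # 64 raw (unwhitened!) jitter bits
      Note that the low timer bits are not uniformly distributed, so a real design would credit far less than one bit of entropy per sampled bit, which is exactly the probability question discussed below.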



      • #13
        Originally posted by tga.d View Post
        I certainly don't fully understand QM, but I was under the impression that the effects are pretty much undeniably empirically random -- even if there is some "deterministic" explanation as to why, you can't use it to predict outcomes with anything beyond probabilistic certainty.
        I don't know that probabilistic and random are the same thing. IIRC, the NSA had a probabilistic exploit for some piece of software (something...about....key generation maybe?) but a random process would make any single event as likely as any other, while the NSA took advantage of certain specific values occurring more often than they should have, had the generator been random.



        • #14
          Originally posted by liam View Post
          I don't know that probabilistic and random are the same thing. IIRC, the NSA had a probabilistic exploit for some piece of software (something...about....key generation maybe?) but a random process would make any single event as likely as any other, while the NSA took advantage of certain specific values occurring more often than they should have, had the generator been random.
          Yes, a probabilistic process and a random process are the same thing if we define "randomness" as entropy, and entropy is really all that matters in this setting; it's just that uneven probabilities make the entropy weaker. I have a blog post that sort of goes into the details in the math section, though not exactly:
          In cryptography, there’s a very important notion known as entropy. I should probably say right off the bat, this is only very indirectly related to the physics/chemistry/thermodynamics kind o…

          You don't have to read all that though, the important part is this:
          H(X) = P(x_1)*-log(P(x_1))+P(x_2)*-log(P(x_2))+...+P(x_n)*-log(P(x_n))
          where H(X) is the total entropy, and P(x_i) is the probability of a particular outcome. That alone proves that entropy is entirely defined by probability, but you bring up a good point about events that aren't equally likely, so let's compare a couple values.
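          If you'd rather let the computer do the arithmetic, that formula is only a few lines of Python (entropy here is just my own throwaway helper); it reproduces the coin-flip numbers worked out below:
          Code:
          from math import log2

          def entropy(probs):
              """H(X) = sum over all outcomes of P(x_i) * -log2(P(x_i)), in bits."""
              assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
              return sum(p * -log2(p) for p in probs if p > 0)

          print(entropy([1/2, 1/2]))   # fair coin   -> 1.0 bit
          print(entropy([1/4, 3/4]))   # loaded coin -> ~0.811 bits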

          First, a flip of a fair coin, where we'll define the random variable X to be the function mapping "heads" to 0 and "tails" to 1 (I go a little bit into what that really means in the post, though again the specifics aren't really important).
          H(X) = P(0)*-log(P(0))+P(1)*-log(P(1)) = 1/2*1 + 1/2*1 = 1 bit of entropy (using log base 2). Exactly as expected: a flip of a coin adds one bit of entropy to the output.

          Now let's say it's a loaded coin, with one side, let's say tails, having probability 3/4 (and the other having 1/4, since the probabilities of all outcomes always add to 1).
          H(X) = P(0)*-log(P(0))+P(1)*-log(P(1)) = 1/4*2 + 3/4*0.415 = 0.811 bits of entropy. The astute observer will notice that 0.811 < 1. So making the coin a loaded coin reduced the entropy of the coin flip by almost 20%, *but it still has entropy*. If I flip my loaded coin twice, I have 1.62 bits of entropy, which is more than if I flipped the non-loaded coin once. It's fairly well known that you can add the entropy values of independent events, but we'll go ahead and do the math to be sure on this. Define X to be two coin flips, mapping heads to 0 and tails to 1 (e.g. "heads tails" is 01 and "tails heads" is 10).
          Two flips of a fair coin:
          H(X) = P(00)*-log(P(00)) + P(01)*-log(P(01)) + P(10)*-log(P(10)) + P(11)*-log(P(11))
          = 1/2*1/2*2+1/2*1/2*2+1/2*1/2*2+1/2*1/2*2 = 2 bits of entropy. Again, just as expected.

          Two flips of the unfair coin:
          H(X) = P(00)*-log(P(00)) + P(01)*-log(P(01)) + P(10)*-log(P(10)) + P(11)*-log(P(11))
          = 1/4*1/4*4+1/4*3/4*2.42+3/4*1/4*2.42+3/4*3/4*0.830 = 1.62 bits of entropy, yet again the value that was expected.
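          The same joint-distribution arithmetic, sketched with the throwaway entropy helper from above (repeated here so the snippet stands alone):
          Code:
          from itertools import product
          from math import log2

          def entropy(probs):
              return sum(p * -log2(p) for p in probs if p > 0)

          loaded = [1/4, 3/4]                                  # P(heads), P(tails)
          joint = [a * b for a, b in product(loaded, loaded)]  # P(00), P(01), P(10), P(11)
          print(entropy(joint))                                # ~1.623 bits
          print(2 * entropy(loaded))                           # same value: independent flips add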

          The point is, you can use events that don't have equal probability as a source of entropy. So then, what happened with the NSA taking advantage of non-equally-likely events? Well, the problem doesn't occur because the events aren't equally likely; it occurs because the people using those events assume that they actually are equally likely, so the entropy they have is actually less than what they think it is. If I need 100 bits of entropy, and I'm using coin flips, I'm going to assume that I need 100 coin flips. If it turns out, though, that the coin was the loaded coin, I'd only really have 81 bits of entropy while thinking I had 100, and that's a big problem. But if I know that the coin is the loaded coin, I can just use 124 flips and have 100.6 bits of entropy at my disposal, just as good as if I'd used the fair coin (I just need 24 additional flips). So as long as you know the probability of each event you're using to generate your entropy pool, you can use whatever sort of event you'd like (though ones with more even distributions will generate entropy faster).
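          And that "how many loaded-coin flips do I need for 100 bits" estimate is easy to sanity-check the same way (a sketch, same caveats as above):
          Code:
          from math import ceil, log2

          def entropy(probs):
              return sum(p * -log2(p) for p in probs if p > 0)

          per_flip = entropy([1/4, 3/4])        # ~0.811 bits per loaded-coin flip
          flips = ceil(100 / per_flip)          # -> 124 flips
          print(flips, flips * per_flip)        # 124 flips give ~100.6 bits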



          • #15
            It is indeed faster and, most importantly, more NSA-friendly.

            PS: I don't really mean this

