AtomicRNG: Feeding The Linux RNG With Java & Alpha Ray Visualizer


  • #21
    Originally posted by TAXI
    //EDIT: Whoops, accidentally submitted this post before the final test results were ready, sorry, give me a few minutes.
    Code:
    Entropy = 7.998123 bits per byte.
    
    Optimum compression would reduce the size
    of this 296978 byte file by 0 percent.
    
    Chi square distribution for 296978 samples is 773.39, and randomly
    would exceed this value less than 0.01 percent of the times.
    
    Arithmetic mean value of data bytes is 127.2512 (127.5 = random).
    Monte Carlo value for Pi is 3.143769193 (error 0.07 percent).
    Serial correlation coefficient is -0.001030 (totally uncorrelated = 0.0).
    //EDIT:
    Originally posted by opensource
    So in your implementation you shouldn't use urandom; it's only pseudo-random: faster, but not more random.
    Don't worry, I use /dev/urandom just as an example RNG for statistical analysis (speed, comparing its results against AtomicRNG's in test tools, ...).
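    For reference, a minimal sketch of how such a baseline sample could be produced (file name and sample size here are arbitrary choices, not AtomicRNG code):
    Code:
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    // Dump a fixed number of bytes from /dev/urandom into a file so that
    // tools like ent can compare it against AtomicRNG output.
    public class UrandomSample {
        public static void main(String[] args) throws IOException {
            final int sampleSize = 500000; // bytes; whatever ent should analyze
            try (FileInputStream urandom = new FileInputStream("/dev/urandom");
                 FileOutputStream out = new FileOutputStream("urandom.sample")) {
                byte[] buf = new byte[4096];
                int remaining = sampleSize;
                while (remaining > 0) {
                    int n = urandom.read(buf, 0, Math.min(buf.length, remaining));
                    if (n < 0) break; // not expected for /dev/urandom
                    out.write(buf, 0, n);
                    remaining -= n;
                }
            }
        }
    }
    Running ent on urandom.sample then prints a report like the ones quoted in this thread.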
    Last edited by V10lator; 20 January 2015, 07:09 AM.



    • #22
      The ARNG-only results (i.e. not mixed with random/urandom) look good, I'd say.

      If we imagine a sensor that is only 2x2 pixels, an improvement would be to use a bigger sensor. Now, you are of course not using a 2x2 sensor, but I wonder whether an even bigger sensor would improve the randomness.



      • #23
        Originally posted by opensource
        If we imagine a sensor that is only 2x2 pixels, an improvement would be to use a bigger sensor. Now, you are of course not using a 2x2 sensor, but I wonder whether an even bigger sensor would improve the randomness.
        It would increase the throughput, not the randomness. No matter what size the CCD is, there's always a maximum coordinate value. So we shouldn't treat the coordinates as a single number but as a sequence of bytes, replacing the non-random ones (the leading zeroes before the actual number) or doing other tricks to extract the randomness from the limited range.
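        To illustrate the idea, a minimal sketch (not AtomicRNG's actual extractor; class and method names are made up):
        Code:
        import java.io.ByteArrayOutputStream;

        // Treat a detected coordinate as a sequence of bytes instead of a
        // number and keep only the low-order bytes that can actually take
        // all 256 values.
        public class CoordinateBytes {
            // maxValue is the largest coordinate the sensor can deliver
            // (e.g. width - 1); high bytes that are always zero carry no
            // entropy and are dropped.
            static void extractBytes(int coord, int maxValue, ByteArrayOutputStream out) {
                while (maxValue >= 0xFF) {
                    out.write(coord & 0xFF); // a byte that can be anything 0..255
                    coord >>>= 8;
                    maxValue >>>= 8;
                }
                // The remaining high bits cover only part of the 0..255 range
                // and are dropped here. Note: the kept bytes are only exactly
                // unbiased if the coordinate range is a multiple of 256;
                // residual bias is what a chi-square test would pick up.
            }
        }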



        • #24
          Originally posted by opensource
          The ARNG-only results (i.e. not mixed with random/urandom) look good, I'd say.
          Except the chi-square value (exceeding it less than 0.01 percent of the time means the bytes were measurably non-uniform). Tried to fix that, new result:
          Code:
          Entropy = 7.999670 bits per byte.
          
          Optimum compression would reduce the size
          of this 503105 byte file by 0 percent.
          
          Chi square distribution for 503105 samples is 230.78, and randomly
          would exceed this value 85.96 percent of the times.
          
          Arithmetic mean value of data bytes is 127.6054 (127.5 = random).
          Monte Carlo value for Pi is 3.137221228 (error 0.14 percent).
          Serial correlation coefficient is 0.001009 (totally uncorrelated = 0.0).
          The bad thing is that this reduces the throughput. This stage of the entropy extractor might be the most fragile code block, as it has a direct impact on the throughput as well as on the randomness. That's one of the points I would love to discuss with an expert in this area, because it's really hard to get as much throughput as possible without violating the randomness.
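          A classic illustration of that trade-off (not the extractor stage AtomicRNG actually uses) is the von Neumann debiaser: it turns independent but biased bits into unbiased ones, at the cost of discarding roughly 75 percent of the input or more:
          Code:
          import java.io.ByteArrayOutputStream;

          // Von Neumann debiasing: read bits in pairs, map 01 -> 0 and
          // 10 -> 1, and throw 00 and 11 away.
          public class VonNeumann {
              static byte[] debias(byte[] input) {
                  ByteArrayOutputStream out = new ByteArrayOutputStream();
                  int acc = 0, bits = 0;
                  for (byte b : input) {
                      for (int i = 6; i >= 0; i -= 2) {
                          int pair = (b >> i) & 0b11;
                          if (pair == 0b01 || pair == 0b10) {
                              acc = (acc << 1) | (pair >> 1); // 10 -> 1, 01 -> 0
                              if (++bits == 8) { // emit a full byte
                                  out.write(acc);
                                  acc = 0;
                                  bits = 0;
                              }
                          }
                          // 00 and 11 pairs are discarded: that is the
                          // throughput cost of the better randomness.
                      }
                  }
                  return out.toByteArray();
              }
          }
          It only works if the input bits are independent, which is itself an assumption about the sensor.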



          • #25
            (Yes, the chi-square wasn't as good, as you mentioned before.) Hm, that's interesting; I thought the random numbers just come as they are and no picking (e.g. selecting only every 100th number) is necessary.
