Linux 5.18 To Bring Many Random Number Generator Improvements

  • Linux 5.18 To Bring Many Random Number Generator Improvements

    Phoronix: Linux 5.18 To Bring Many Random Number Generator Improvements

    WireGuard lead developer Jason Donenfeld has recently been spearheading many improvements to the Linux kernel's random number generator (RNG) code, and, building off the work found in Linux 5.17, the Linux 5.18 kernel will bring a lot more on this front...

  • #2
    See this early RNG pull request of the "random" changes ready for the Linux 5.18 kernel. Jason Donenfeld has also published a PDF outlining the work on RNG improvements for the Linux 5.17 and 5.18 kernels.
    The "a PDF" link goes to https://www.zx2c4.com/projects/linux-rng-5.17-5.18/ which certainly is not a PDF. It's just a basic HTML webpage, and really the main meat of this post.

    • #3
      When reading these news pieces, I always imagine someone comparing streams of random numbers and going: yup, this new one is much better :P

      • #4
        What a great job Jason A. Donenfeld is doing with the CRNG! Not only is he improving the code, but also the comments and the documentation. It's really hard to read and understand the random.c code if you are not very familiar with C or the Linux code in general. The latest changes improve that as well.

        By the way, if you want to test the performance improvements yourself, you don't need anything special, just dd. Here are the numbers I get on my laptop:

        linux 5.16.14 (dd bs=1M if=/dev/random count=1000 of=/dev/null status=progress, 5 runs)
        94.1 ± 0.2 MB/s

        linux 5.17.0-rc8 (dd bs=1M if=/dev/random count=4000 of=/dev/null status=progress, 5 runs)
        519.6 ± 0.8 MB/s
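
        If you prefer to time it without dd, the same measurement is easy to do directly. Here is a minimal sketch using getrandom(), which as far as I know draws from the same CRNG as /dev/random on recent kernels; the chunk size and total are just picked to roughly match the dd runs above:

        /* Rough throughput test for the kernel RNG via getrandom(2).
           Build: gcc -O2 -o rngbench rngbench.c */
        #include <stdio.h>
        #include <time.h>
        #include <unistd.h>
        #include <sys/random.h>

        int main(void)
        {
            enum { CHUNK = 1 << 20 };               /* 1 MiB per request, like bs=1M */
            const long long total = 1000LL * CHUNK; /* ~1000 MiB, like count=1000 */
            static unsigned char buf[CHUNK];
            long long done = 0;
            struct timespec t0, t1;

            clock_gettime(CLOCK_MONOTONIC, &t0);
            while (done < total) {
                ssize_t n = getrandom(buf, CHUNK, 0); /* large requests may return short */
                if (n < 0) { perror("getrandom"); return 1; }
                done += n;
            }
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            printf("%.1f MB/s\n", done / 1e6 / secs);
            return 0;
        }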

        Now the other question is about the quality of these random bytes. I ran some quick tests using the NIST suite and Dieharder, and saw no surprises. Of course, this doesn't mean they are "good", but at least nothing is totally broken by the changes.

        Does anyone know if they run "quality" tests on the CRNG in their pipelines?

        • #5
          iyanmv I don't know if it's scientific enough, but you can compare RNGs by looking at how fast they converge when calculating pi using pin drops (https://quantumbase.com/calculating-...pping-needles/)
          I expect the differences will be minimal at best when comparing high-quality RNGs.
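
          For anyone who wants to try that at home, here is a minimal sketch of the idea in C, using the simpler dart-in-a-square variant rather than actual needle drops, with getrandom() as the byte source and an arbitrary sample count:

          /* Estimate pi from kernel random bytes: the fraction of random points in
             the unit square that land inside the quarter circle tends to pi/4.
             Build: gcc -O2 -o rngpi rngpi.c */
          #include <stdio.h>
          #include <stdint.h>
          #include <sys/random.h>

          /* One uniformly distributed double in [0,1), built from 53 random bits. */
          static double rng_unit(void)
          {
              uint64_t r = 0;
              if (getrandom(&r, sizeof r, 0) < 0)
                  r = 0; /* shouldn't happen once the pool is ready; fine for a sketch */
              return (r >> 11) * (1.0 / 9007199254740992.0); /* divide by 2^53 */
          }

          int main(void)
          {
              const long samples = 1000000; /* not tuned for speed: two syscalls per point */
              long inside = 0;

              for (long i = 0; i < samples; i++) {
                  double x = rng_unit(), y = rng_unit();
                  if (x * x + y * y < 1.0)
                      inside++;
              }
              printf("pi ~= %.6f after %ld samples\n", 4.0 * inside / samples, samples);
              return 0;
          }

          Monte Carlo error shrinks as 1/sqrt(N), so you need roughly 100x more samples for each extra correct digit, which is why comparing convergence speed between RNGs takes a lot of drops.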

          • #6
            Originally posted by zx2c4 View Post

            The "a PDF" link goes to https://www.zx2c4.com/projects/linux-rng-5.17-5.18/ which certainly is not a PDF. It's just a basic HTML webpage, and really the main meat of this post.
            But it *looks* like a PDF, in his defense :-)

            • #7
              Originally posted by bug77 View Post
              When reading these news pieces, I always imagine someone comparing streams of random numbers and going: yup, this new one is much better :P
              I've actually done this, after developing new hashing algorithms once upon a time, and yeah: tongue in cheek or not, your comment is something that really does happen. In my case, the old hash function we were using (which was the generic GNU "x*43+y" simplicity) was pretty terrible, and if you examined the depths of the buckets it was easy to see that it was, with about half the range massively favored over the other half. I replaced it with hashes based on an LFSR of an arbitrary-length prime polynomial (terrible for crypto hashes, but excellent for this case) and leveled the distribution over about 95% of the space instead, with the same performance.

              It's quite a fun thing to evaluate, in fact. If you've got any sort of graphics familiarity, you can provide a visualisation of the bucketing as a heat map on the screen while you feed x days worth of data into it, and compare the various versions as you build them. I literally did almost exactly what you imagine: compared the *impact* of two different streams of (in this case very pseudo-) random numbers, and went: "yup, this new one is much better".
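
              The bucket-depth comparison itself is easy to reproduce in miniature, by the way. Here's a toy sketch: the two hashes are a bare multiply in the spirit of the old one, and a generic 64-bit mixer standing in for something better (not the actual LFSR construction, which I don't have to hand), with a made-up key set and bucket count:

              /* Toy comparison of how evenly two hashes spread keys over buckets.
                 Build: gcc -O2 -o hashcmp hashcmp.c */
              #include <stdio.h>
              #include <stdint.h>

              #define NBUCKETS 256
              #define NKEYS    (64 * 1024)

              /* Roughly the old "x*43+y" style of hash. */
              static uint32_t hash_simple(uint32_t x) { return x * 43u; }

              /* A generic 64-bit mixer (splitmix64-style finalizer), standing in
                 for a better hash; not the LFSR construction described above. */
              static uint32_t hash_mixed(uint32_t x)
              {
                  uint64_t z = x + 0x9e3779b97f4a7c15ULL;
                  z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;
                  z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
                  return (uint32_t)(z ^ (z >> 31));
              }

              static void report(const char *name, uint32_t (*h)(uint32_t))
              {
                  unsigned depth[NBUCKETS] = { 0 };
                  unsigned min = NKEYS, max = 0;

                  /* Real keys are rarely uniform; strided values mimic that. */
                  for (uint32_t i = 0; i < NKEYS; i++)
                      depth[h(i * 8) % NBUCKETS]++;

                  for (int b = 0; b < NBUCKETS; b++) {
                      if (depth[b] < min) min = depth[b];
                      if (depth[b] > max) max = depth[b];
                  }
                  printf("%-8s min depth %u, max %u (ideal %u)\n",
                         name, min, max, NKEYS / NBUCKETS);
              }

              int main(void)
              {
                  report("simple", hash_simple);
                  report("mixed", hash_mixed);
                  return 0;
              }

              The multiply leaves most of the buckets empty and piles everything into a handful of them, while the mixer stays close to the ideal depth, which is exactly the kind of difference a heat map makes obvious at a glance.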

              • #8
                Originally posted by arQon View Post

                I've actually done this, after developing new hashing algorithms once upon a time, and yeah: tongue in cheek or not, your comment is something that really does happen. In my case, the old hash function we were using (which was the generic GNU "x*43+y" simplicity) was pretty terrible, and if you examined the depths of the buckets it was easy to see that it was, with about half the range massively favored over the other half. I replaced it with hashes based on an LFSR of an arbitrary-length prime polynomial (terrible for crypto hashes, but excellent for this case) and leveled the distribution over about 95% of the space instead, with the same performance.

                It's quite a fun thing to evaluate, in fact. If you've got any sort of graphics familiarity, you can provide a visualisation of the bucketing as a heat map on the screen while you feed x days worth of data into it, and compare the various versions as you build them. I literally did almost exactly what you imagine: compared the *impact* of two different streams of (in this case very pseudo-) random numbers, and went: "yup, this new one is much better".
                What can I say, a programmer's life takes you places...
