It Didn't Make It For Linux 4.13, But A New Random Number Generator Still In The Works

  • It Didn't Make It For Linux 4.13, But A New Random Number Generator Still In The Works

    Phoronix: It Didn't Make It For Linux 4.13, But A New Random Number Generator Still In The Works

    Frequent Phoronix readers may recall that for more than one year a new Linux Random Number Generator has been in-development and today marked the 12th version of these patches being released...

  • #2
    This is one of the most important places for peer review and code auditing that exist. With open source, the idea is that you should not have to trust the authors in order to trust the code, because of the possibility of review by mutually opposing parties. I took a quick look at this code, and the most important thing is that they do not seem to map the hardware RNG straight to /dev/random by itself, which is what Intel begged the kernel devs to do. The extent to which Intel wanted that implied a backdoor (probably for the NSA) that the Linux random number generator was rendering useless.

    As I recall, the use or proposed use of the hardware RNG in Linux is/was to XOR its output with the software RNG's. Since knowing one input to an XOR does not let you predict the output, the result always has at least as much entropy as the stronger of the two RNGs, with any predictable sequences destroyed. The worry I heard was that Intel may have been trying to export a CPU serial number via the RNG to make all HTTPS sessions traceable by those with NSA access. That would have only worked on Windows, because of the XOR with the software RNG on Linux.
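
    As an aside, the XOR-mixing property is easy to see in a minimal user-space sketch (not kernel code; the buffer names below are hypothetical):

    /*
     * Sketch: XOR-mix a hardware RNG stream with a software RNG stream.
     * Recovering either input from the output requires knowing the other,
     * so the mix is at least as unpredictable as the stronger of the two.
     */
    #include <stddef.h>
    #include <stdint.h>

    /* hw_rng and sw_rng are assumed filled elsewhere, e.g. from RDRAND
     * output and from the kernel's software entropy pool */
    void xor_mix(const uint8_t *hw_rng, const uint8_t *sw_rng,
                 uint8_t *out, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            out[i] = hw_rng[i] ^ sw_rng[i];
    }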

    • #3
      Upstreaming this goes against the kernel's design patterns. I'll go on record and say this code will never hit mainline, no matter how well it's written.

      • #4
        Originally posted by Luke View Post
        The extent to which Intel wanted that implied a backdoor (probably for the NSA) that the linux random number generator was making useless.

        The backdoorability of these (at the microcode level) has been proven and demonstrated at conferences.

        As I recall, the use or proposed use of the hardware RNG in Linux is/was to XOR its output with the software RNG's. Since knowing one input to an XOR does not let you predict the output, the result always has at least as much entropy as the stronger of the two RNGs, with any predictable sequences destroyed.

        In theory, when the XOR is performed as it should be, that is true.

        In practice, that is exactly what was attacked in the conference's proof-of-concept hostile microcode.

        The second, unknown input (which at that precise moment lives in a predictable register) gets mixed into the known input in such a way that the output of the XOR is predictable (or was it that the unknown content of the register got overwritten? I can't remember exactly now).

        The current best practice is instead to make a hash out of several separate buffers.

        That is more difficult to attack by hacking the hardware RNG.
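
        For illustration, a user-space sketch of "hash several separate buffers", assuming OpenSSL's SHA-256 purely as an example hash (the kernel uses its own primitives, and the buffer names here are made up):

        /*
         * Sketch: combine independent entropy buffers through a hash rather
         * than a plain XOR.  Forcing a chosen output would require breaking
         * the hash, not just knowing or overwriting one of the inputs.
         * Build with: cc mix.c -lcrypto
         */
        #include <stdio.h>
        #include <openssl/sha.h>

        int main(void)
        {
            /* hypothetical entropy buffers from different sources */
            unsigned char hw_buf[32]   = {0};   /* hardware RNG output  */
            unsigned char pool_buf[32] = {0};   /* software pool output */
            unsigned char out[SHA256_DIGEST_LENGTH];

            SHA256_CTX ctx;
            SHA256_Init(&ctx);
            SHA256_Update(&ctx, hw_buf, sizeof hw_buf);
            SHA256_Update(&ctx, pool_buf, sizeof pool_buf);
            SHA256_Final(out, &ctx);

            for (size_t i = 0; i < sizeof out; i++)
                printf("%02x", out[i]);
            printf("\n");
            return 0;
        }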

        • #5
          The main problems with the existing Linux /dev/random and /dev/urandom are that they give up too little and too much entropy, respectively.

          If the /dev/random PRNG seed were intermittently XOR'd with data from /dev/urandom, for example whenever /dev/urandom has hardware random entropy added to it, its quality would be drastically increased.

          The best thing for /dev/urandom without a true HRNG would be for it to accumulate a vast amount of entropy (i.e. megabytes of it) and, unlike now, never actually give it to the caller. Instead it should only give out a derivative of its value, i.e. a cryptographically secure hash of the entropy pool (or a rotating portion of it) together with a serial number counting the calls, and the entropy pool could be written out periodically (see the sketch below). Then the quality and volume of random output would be effectively infinite, even at boot time, and the strength of the output would be superior to an XOR of the existing /dev/random and /dev/urandom, in both quality and speed.

          If the entropy pool is large, then low-quality sources of entropy can be fed to it without fear; e.g. the TSC at every syscall() would provide a huge amount of low-quality entropy, which would add up to a huge amount of excellent entropy over time. Diskless/stateless systems would still have the same problem they currently do, but after a network boot I'm pretty sure sufficient entropy would have been generated on any true physical network. Even with an emulated set of machines on an emulated network, it would be pretty difficult to guarantee that all timings are identical for an emulated host boot. E.g. if the emulated machines were hosted on KVM, the host machine's scheduling and device latencies are going to affect the TSC at each syscall in the guests even if KVM kept separate TSCs for each guest, unless the scheduling of all hosts were performed in lock step, the CPU caches were initialized so that all the same cache misses occurred at all levels, and there was no physical storage involved, not even SSDs...
          Last edited by linuxgeex; 19 July 2017, 05:20 PM.
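
          A rough user-space sketch of that scheme (a large private pool from which callers only ever see a hash of a rotating window plus an output counter); OpenSSL's SHA-256 is assumed purely for illustration, and the pool size, window size and function names are all hypothetical:

          /*
           * Sketch of the scheme described above: entropy goes into a large
           * private pool, and readers only ever receive a hash of (counter ||
           * rotating window of the pool), never the pool contents themselves.
           */
          #include <stddef.h>
          #include <stdint.h>
          #include <openssl/sha.h>

          #define POOL_SIZE (1 << 20)        /* hypothetical 1 MiB pool */
          #define WINDOW    4096             /* hypothetical hash window */

          static unsigned char pool[POOL_SIZE];
          static uint64_t out_counter;
          static size_t pool_pos;

          /* mix in low-quality entropy, e.g. a TSC sample taken at syscall time */
          void pool_add(const void *data, size_t len)
          {
              const unsigned char *p = data;
              for (size_t i = 0; i < len; i++) {
                  pool[pool_pos] ^= p[i];
                  pool_pos = (pool_pos + 1) % POOL_SIZE;
              }
          }

          /* produce 32 output bytes derived from, but never revealing, the pool */
          void pool_read(unsigned char out[SHA256_DIGEST_LENGTH])
          {
              size_t start = (size_t)(out_counter * WINDOW) % POOL_SIZE;
              SHA256_CTX ctx;

              SHA256_Init(&ctx);
              SHA256_Update(&ctx, &out_counter, sizeof out_counter);
              if (start + WINDOW <= POOL_SIZE) {
                  SHA256_Update(&ctx, pool + start, WINDOW);
              } else {                       /* window wraps around the pool */
                  SHA256_Update(&ctx, pool + start, POOL_SIZE - start);
                  SHA256_Update(&ctx, pool, WINDOW - (POOL_SIZE - start));
              }
              SHA256_Final(out, &ctx);
              out_counter++;
          }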
