Linux Kernel Works To Make Better Random Reseeding
While /dev/random was made faster and more random in Linux 3.13, in light of the NSA surveillance controversies and concerns that the hardware encryption engines and random number generators in Intel and VIA processors may not be trustworthy, work is underway to rework how reseeding happens in the Linux kernel's random subsystem.
Greg Price sent out a patch series on Saturday that changes how re-seeding of the non-blocking pool happens; that pool supplies /dev/urandom and the kernel's internal randomness needs.
Greg wrote, "The most important change is to make sure that the input entropy always comes in large chunks, what we've called a 'catastrophic reseed', rather than a few bits at a time with the possibility of producing output after every few bits. If we do the latter, we risk that an attacker could see the output (e.g. by watching us use it, or by constantly reading /dev/urandom), and then brute-force the few bits of entropy before each output in turn...After the whole series, our behavior at boot is to seed with whatever we have when first asked for random bytes, then hold out for seeds of doubling size until we reach the target (by default 512b estimated.) Until we first reach the minimum reseed size (128b by default), all input collected is exclusively for the nonblocking pool and /dev/random readers must wait."
More details on this random reseeding rework can be found on the Linux kernel mailing list, and the changes could land in the Linux 3.14 kernel.