Is the kernel at least able to distinguish the nature of the CPU, and recognize, for example, that the current CPU is non-OoO, so it doesn't rely on this? We already have enough nasty "poor randomness" problems as it is. It can get as bad as "I can guess your router's WPS PIN in 3 attempts", thanks to really low entropy levels and vendors' refusal to do anything about it.
The problem is that, on one side, it's convenient for usermode to just call the kernel to get some randomness. But if that randomness turns out to be bad, it could cause a lot of pwnage down the road.
- Usermode dev wants: "give me a lot of secure random, I don't care how".
- Kernel wants to fulfill that request without blocking for long, but that is not something to be taken for granted, especially in VMs and on embedded hardware.
At this point there is a chance the kernel's assumptions about the security of its randomness will mismatch usermode's assumptions.
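For what it's worth, this usermode/kernel tension is visible in the getrandom(2) API. A minimal Python sketch (the GRND_NONBLOCK handling is my own assumption of how a caller might probe pool readiness, not any officially recommended pattern):

```python
import os

def get_secure_random(n):
    # getrandom(2) (Linux 3.17+) blocks until the kernel entropy pool is
    # initialized; GRND_NONBLOCK makes it fail instead of blocking, so
    # userspace can detect "not ready yet" rather than stall silently.
    if hasattr(os, "getrandom"):
        try:
            return os.getrandom(n, os.GRND_NONBLOCK)
        except BlockingIOError:
            # Pool not yet initialized (early boot, fresh VM, embedded
            # board with few interrupt sources). The caller must decide
            # whether to wait or degrade -- exactly the mismatch above.
            raise
    # Portable fallback: os.urandom does not expose pool readiness.
    return os.urandom(n)

key = get_secure_random(32)
```

On a normally booted system this returns immediately; the interesting case is early boot on entropy-starved hardware, where the nonblocking call fails instead of handing back weak randomness.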
It Turns Out CPU Speculative Execution Can Be Useful For Random Entropy / RNG
-
Originally posted by mulenmar View Post:
Point is: I'll happily take the performance of a first-gen Intel Atom, as long as it's still multi-core and 64-bit. The performance is a software and dataset issue.
Of course, like you said, it really comes down to what level of performance you find acceptable, i.e. what software and data you need to deal with. I would say that time has passed the first-generation Atom by, and it's not coming back. Also, they were rarely 64-bit or multi-core (dual-core at best).
I just find microarchitectures interesting.
Leave a comment:
-
Originally posted by mulenmar View Post:
Forgive my cryptography ignorance in my assumptions, please... but my understanding (from the "can we even trust Intel's hardware RNG?" discussions) is that the kernel combines together multiple sources, yes, and XORs them? Both strong and weaker, software- and hardware-based solutions. Does using speculative execution or jitter entropy weaken the results?
Simple example:
You go to an amusement park and buy a raffle ticket.
The number you get is one in a sequence printed on those tickets.
Having no other knowledge, the ticket number you purchased is for all intents entirely random.
One of the amusement park staff has pre-determined the winning ticket number.
If they can ask people who are buying tickets what the number they bought was, then they can predict when would be a good time to buy tickets, in order to increase their chance of buying the winning ticket.
If a PRNG holds enough bits such that you can sample as much as you need without ever exposing enough knowledge of its contents to be able to understand what future values it will generate, then it is effectively truly random. There's 2 ways to achieve that, and I'll explain them in physical terms.
1) you have an infinite-sized pot of identically-behaving black and white marbles, thoroughly mixed up, and you dispense them as required, as bits, to form the size of requested entropy samples.
2) you have a pot with 1 million black and white marbles. Each time you take marbles out you put an equal number of marbles in, in the same proportion, and then stir the pot.
System 1 is a truly random system. There's no possibility of it coming up with a repeatable pattern. It cannot be predicted.
System 2, if you requested a million marbles at once, would be emptied and then filled and end up with the identical order of contents if it was 'stirred' in the same way.
If, with System 2, you only ever handed out 32 marbles at a time, and once in every 100 draws you flipped a coin and used the result to change the order you put the black and white marbles back into the pot, then over the course of roughly 300,000 samples taken from the pot there would be 3,000 random sources of entropy fed into it, blended throughout, each inter-dependent on the results of the other 2,999 random bits of entropy. Even if the result of the coin flip was observed 50% of the time, there would still be effectively 1,500 unknowable bits of entropy affecting the distribution of the black and white marbles in the pot. That is more than enough that no number of samplings will ever reveal the behaviour, and therefore its output cannot ever be predicted, just the same as the infinitely large pot.
So with enough samples of a poor entropy source, it's still possible to create a very strong entropy source.
You just have to develop a system that doesn't expose enough information to be able to model its behaviour based on its output and what is known about its input.
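The two-pot argument above can be sketched in code. This is a toy illustration under my own assumptions (SHA-256 standing in for the "stirring", os.urandom standing in for the occasional coin flip); it is not the kernel's actual pool design:

```python
import hashlib
import os

class TinyPool:
    """Toy entropy pool in the spirit of 'System 2': a fixed-size state
    that only hands out small samples and occasionally folds in a fresh
    bit of entropy. Illustration only, not a real CSPRNG."""

    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(seed).digest()  # the "pot of marbles"
        self.draws = 0

    def stir(self, fresh: bytes):
        # Fold new entropy into the pot; old and new marbles blend, and
        # every later output depends on every earlier stir.
        self.state = hashlib.sha256(self.state + fresh).digest()

    def sample(self, n: int = 4) -> bytes:
        # Hand out only a few "marbles" per draw, never the whole state,
        # so observers never see enough to model the pot's contents.
        out = hashlib.sha256(self.state + b"out").digest()[:n]
        self.state = hashlib.sha256(self.state + b"next").digest()
        self.draws += 1
        if self.draws % 100 == 0:       # the once-in-100 "coin flip"
            self.stir(os.urandom(1))    # even one weak source helps
        return out

pool = TinyPool(b"initial seed")
samples = [pool.sample() for _ in range(300)]
```

The design choice mirrors the argument: the output per draw is far smaller than the state, and periodic stirring means a poor-but-unobserved entropy source accumulates into an unpredictable whole.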
- Likes 2
Leave a comment:
-
Originally posted by Weasel View Post:
Imagine calling speculative execution a bug when it's the reason your CPU is not a first gen Intel Atom.
The problem is the heavyweight web pages -- why the **** do we need to download so much ECMAScript and huge images, pages weighing in at more than DOOM 1 each? -- and the excessive client-side processing and animations. And the utter b***s*** auto-loading, auto-playing videos -- didn't we learn the lesson from the <MARQUEE> tags and auto-playing MIDI files of the 90s? NOBODY WANTS THAT CRAP ON LIMITED DATA PLANS. Even THEN.
OH, and the pathetic excuses for GPUs they were paired with didn't help either. I still remember the GMA500 my old Atom netbook had... PowerVR-based. Useless for anything heavier than a 2008-era webpage, and even then you had to turn down the YouTube video size to play smoothly. That was when the driver even worked, btw.
Point is: I'll happily take the performance of a first-gen Intel Atom, as long as it's still multi-core and 64-bit. The performance is a software and dataset issue.
Originally posted by oiaohm View Post:
The idea of using speculative execution for random is not a particularly good idea. It has all the same problems as jitter entropy.

Last edited by mulenmar; 30 September 2019, 10:09 AM.
- Likes 2
Leave a comment:
-
Originally posted by cl333r View Post:
Who would have thought that in 2019 random number generation would be such a scarce feature that devs would scavenge for CPU bugs to implement it.
Leave a comment:
-
Originally posted by cl333r View Post:
Who would have thought that in 2019 random number generation would be such a scarce feature that devs would scavenge for CPU bugs to implement it.
The Linus response is correct. If you implement jitter entropy correctly, you are not depending on speculative execution. Speculative execution can add more randomness to jitter entropy, or in other cases less.
The idea of using speculative execution for random is not a particularly good idea. It has all the same problems as jitter entropy.
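To make the shared weakness concrete, here is a toy jitter-entropy harvester (my own sketch, not the algorithm of any real jitter-entropy implementation): it times a small workload and keeps the low bits of the timing deltas. On hardware with coarse timers or little microarchitectural noise -- a simple in-order core, say -- those deltas can collapse to a handful of values:

```python
import hashlib
import time

def jitter_bits(n_samples: int = 2048):
    """Collect low bits of timing deltas around a small hash workload.
    The spread of distinct values is a rough proxy for how much timing
    noise (cache, branch, speculation effects) the CPU actually exposes."""
    deltas = []
    buf = b"x" * 64
    prev = time.perf_counter_ns()
    for _ in range(n_samples):
        buf = hashlib.sha256(buf).digest()   # workload whose timing jitters
        now = time.perf_counter_ns()
        deltas.append((now - prev) & 0xFF)   # keep only the noisy low bits
        prev = now
    return deltas

d = jitter_bits()
distinct = len(set(d))
```

If `distinct` is large, the source looks rich; if it is tiny, an attacker who can model the CPU and clock may be able to enumerate the outputs -- which applies equally whether the jitter comes from caches or from speculation.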
Leave a comment:
-
It seems a bit silly to be so suspicious of RDRAND because Intel *might* be able to predict its value, only to replace it with this where Intel very likely can predict its value.
- Likes 3
Leave a comment:
-
If it turns out that the whole "speculative execution as a performance win (with craptastic security issues)" was really "preliminary research into cheap random number generators on modern CPUs", that's gotta be the most expensive "cheap" ever in the history of software and hardware so far.
- Likes 6
Leave a comment:
-
Originally posted by cl333r View Post:
Who would have thought that in 2019 random number generation would be such a scarce feature that devs would scavenge for CPU bugs to implement it.
- Likes 3
Leave a comment:
-
Who would have thought that in 2019 random number generation would be such a scarce feature that devs would scavenge for CPU bugs to implement it.
- Likes 7
Leave a comment: