Wrong. It is your knowledge and skills that make you an expert in some area. In cryptography and security it also takes a special way of thinking: good cryptography experts and security gurus have to be paranoid enough to avoid the common pitfalls. Unfortunately, the OpenSSL devs have proven they lack this skill, which is absolutely mandatory if you're about to do something security-sensitive. It is not even about Heartbleed, it is about overall project management and how they deal with "potentially unsafe areas".

As a blatant example of incompetence: when asked to use hardware crypto acceleration, OpenSSL is dumbass enough to use the hardware RNG directly. No, they do not mix the hardware RNG with other entropy sources. They just take the RNG output and feed it directly to the apps using OpenSSL. Should there be a backdoor in that RNG, so it is not as random as it seems, all generated keys will be predictable. Just as it already happened once in Debian due to a "small optimization", which made it necessary to urgently rekey thousands of machines.

It is such lame and blatant incompetence that even merely good devs who haven't lost their ability to think can pinpoint it. For example, the Linux kernel devs were scared by the whole idea of using the hardware RNG as the single entropy source. They do not even call themselves experts in security and crypto, yet they are able to understand such basic things, unlike the "experts" from OpenSSL. And it's not a joke. Grab a recent source of Tor and read the changelog: Tor is security-sensitive stuff for sure, and the Tor devs were forced to ship their own workarounds to fix this OpenSSL idiocy. Cool, isn't it? Do you honestly think such a solution from "experts" could be secure at all? Bah, that's very unlikely. Complicated protocol? Tons of legacy cruft? Lame devs? Set Sail for Fail.
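To illustrate what "mixing" means here, a toy sketch of the general idea only; the function name and error handling are mine, and this is not OpenSSL's, Tor's or the kernel's actual code. The point is that hardware RNG output is never handed to the caller on its own: it is folded together with an independent source, so a weak or backdoored RDRAND alone cannot determine the result. Real entropy pools run a hash/extractor on top of this idea; plain XOR of independent streams is enough to show the principle.

[CODE]
/* Toy mixing sketch -- NOT how OpenSSL or the kernel actually does it.
 * x86_64 only, build with -mrdrnd. */
#include <immintrin.h>   /* _rdrand64_step */
#include <stdint.h>
#include <stdio.h>

/* hypothetical helper: returns 1 on success, 0 on failure */
int mixed_random_u64(uint64_t *out)
{
    unsigned long long hw = 0;
    uint64_t os = 0;

    if (!_rdrand64_step(&hw))              /* hardware source */
        return 0;

    FILE *f = fopen("/dev/urandom", "rb"); /* independent OS pool */
    if (!f || fread(&os, sizeof os, 1, f) != 1) {
        if (f) fclose(f);
        return 0;
    }
    fclose(f);

    /* XOR of independent streams: an attacker has to control BOTH
     * sources to predict the output, not just the CPU's RNG */
    *out = (uint64_t)hw ^ os;
    return 1;
}
[/CODE]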
Originally Posted by Karl Napf
You see, it only takes about 16 MiB of input data to cause the wraparound. By modern standards a 16 MiB chunk of data isn't anything terribly large, yet it is enough to wrap a 32-bit register.
That was no bug 20 years ago. Back then it was a "32bit int is enough here as that can not overflow with the allowed block sizes". In fact I have trouble calling this a bug today.
A 64-bit counter is only twice the size bit-wise, but it can hold 2^32 times larger numbers. Needless to say, 4G * 16M of input is a waaaaaaaay larger amount, never used in practice and troublesome even to transmit at all.
Oh, it does. The problem is a 32bit int, a fix is using a 64bit int. That is twice the size:-)
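For the curious, here is the arithmetic both sides are arguing about, in compilable form. This is a toy model only, not the real lzo1x_decompress loop; the 255-per-byte step mirrors how the LZO literal-run encoding extends a count. Roughly 2^32 / 255, about 16.8 million input bytes or ~16 MiB, is enough to wrap the 32-bit counter, while the 64-bit one barely notices:

[CODE]
/* Toy model of the counter being discussed -- not the real decompressor,
 * just the run-length accumulation and the width of the counter. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t len32 = 0;   /* the "32bit int is enough" counter */
    uint64_t len64 = 0;   /* the proposed fix */
    uint64_t i;

    /* ~17 million "extend the run by 255" bytes, as a crafted ~16 MiB
     * stream could supply */
    for (i = 0; i < 17000000; i++) {
        len32 += 255;
        len64 += 255;
    }

    printf("32-bit count: %u   (wrapped past 2^32)\n", len32);
    printf("64-bit count: %llu (nowhere near wrapping)\n",
           (unsigned long long)len64);
    return 0;
}
[/CODE]

Twice the width, but as noted above it would take on the order of 4G * 16M bytes of crafted input, tens of petabytes, to wrap the wide one.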
Once again, the decompression algorithm is one thing and the format used to deliver the data is another. So it is a completely valid idea to ask the decompression engine to decompress that 16 MiB chunk. What is not valid is for things to go boom instead. A decompression engine like this is supposed to be generic enough to cope.
A block size > 8MiB is not supported and nobody mentioned any implementation that does not do so, so where is the input validation missing?
In the decompression engine: the LZ stream parsing never considered what happens when there is more than 16M of specially crafted data.
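Roughly where such a check would sit (a generic sketch, not the actual LZO/LZ4 patch; copy_literals and its arguments are made up for illustration): whatever value the length counter ends up with, the decompressor refuses to copy more than what is left of the output and input buffers, so even a wrapped or absurd count cannot scribble outside them.

[CODE]
/* Generic bounds-check sketch -- not the actual LZO/LZ4 fix; the helper
 * name and signature are invented for illustration. */
#include <stddef.h>
#include <string.h>

/* Copy a run of "len" literal bytes, but only if it fits both buffers.
 * Returns 1 on success, 0 if the stream asked for too much. */
static int copy_literals(unsigned char **op, const unsigned char *out_end,
                         const unsigned char **ip, const unsigned char *in_end,
                         size_t len)
{
    if (len > (size_t)(out_end - *op))   /* would overrun the output */
        return 0;
    if (len > (size_t)(in_end - *ip))    /* would read past the input */
        return 0;

    memcpy(*op, *ip, len);
    *op += len;
    *ip += len;
    return 1;
}
[/CODE]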
It's rather a noteworthy example of how the "better safe than sorry" principle was disregarded, and it forced some people to get nervous (ffmpeg/libav even released patches to plug the hole, if I remember correctly).
I do consider LZO/LZ4 to be a good example of making sure the input is sane, considering that I did not see anybody showing a concrete implementation that is broken.
From what I remember, ffmpeg was potentially affected by the LZO issue. While it is not easy to exploit in practical ways, they still had to release patches.
Exploiting it needs a vulnerable piece of code, which you most likely need to write yourself, since nobody has mentioned broken users of LZO/LZ4 yet.
Sure, but if you just write some code that deals with compressed data, the last thing you want to run into is an unexpected failure in the decompression engine.
If your users run code that random people mail to them, then LZO/LZ4 is the least of the things you need to worry about.