Phoronix: LZO & LZ4 Security Vulnerabilities Disclosed
The latest open-source security issues uncovered affect LZO and LZ4 and the issues run back years...
Calm down, that is grossly overblown.
1) LZO has been around for a while and is used by many programs. A reverse dependency lookup in the Ubuntu repos gives 30+ dependants, and I guess far more are lurking around, so it's hard to tell exactly what can be affected. Fortunately, the bug mostly allows causing a crash; it seems hard to upgrade it to code execution, since it is not possible to supply meaningful code to the process, you can just corrupt some memory. However, while it is hard to use, historically some exploits have managed to (ab)use "just" memory corruption to trigger other undesired activity. So it is better not to underestimate these bugs.
2) In LZ4 it is somewhat worse: it looks like the bug allows an attacker to supply meaningful code as part of the data and then execute it, so it can be upgraded into code execution. Fortunately, the LZ4 author is right that most programs use LZ4 in ways where the attack would not work, due to smaller block sizes: 16 MB chunks is a heck of a lot. However, there is no exhaustive list of all users of a particular library, and technically it is possible to use LZ4 in ways that allow exploiting this bug. Since it allows code execution, that could be unpleasant.
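To make the "huge blocks" point concrete, here is a toy model in Python (not the actual LZ4 C code; the names and numbers are illustrative). It simulates how 32-bit pointer arithmetic wraps when an attacker-supplied literal length is added to a write cursor, which is why only very large inputs on 32-bit builds are interesting to an attacker:

```python
# Toy model of the 2014 LZ4 issue (NOT the real LZ4 code): a decoder
# accumulates an attacker-supplied literal length into its write cursor.
# On a 32-bit build that addition wraps, so a huge crafted length can
# move the cursor *backwards*, corrupting memory before the buffer.

MASK32 = 0xFFFFFFFF  # simulate 32-bit pointer arithmetic

def advance_cursor_32bit(cursor, literal_length):
    """Return the new write cursor as a 32-bit machine would compute it."""
    return (cursor + literal_length) & MASK32

# A crafted block claims a literal run of nearly 2^32 bytes...
crafted_length = 0xFFFFFF00
cursor = 0x1000  # pretend the output buffer starts at this address

new_cursor = advance_cursor_32bit(cursor, crafted_length)
print(hex(new_cursor))  # wrapped to an address *below* the buffer start
assert new_cursor < cursor
```

On a 64-bit address space the same addition does not wrap, which matches the claim below that the vulnerability effectively does not exist there.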
So it is really a wise idea to update these two libs, before blackhats figure out what could be pwned and how to do it. After all, updating these libs has no downside.
And the curious fact is that LZO and similar algos have been around for about 20 years, yet it has only come to attention recently that some properties of compressed data streams can, in some cases, be abused in really uncommon ways to fool the decompression engine.
Last edited by 0xBADCODE; 06-28-2014 at 02:16 PM.
Of course ingenious people might find ways to turn that mischief into a real exploit, but I actually doubt that.
This is more an expression of how things have changed in the last 20 years... away from "save as much RAM as possible" to "as secure as possible". I actually think that shift is a good thing, but I also find the media coverage hugely out of proportion to the issue at hand. Some articles I read make it sound like this is a bug that lets hackers pwn the Curiosity rover on Mars!
Implies that this attack would be nearly impossible, and states that on any 64-bit operating system (like most of mine) the vulnerability does not exist at all. It roughly boils down to this:
It would require the targeted program to feed blocks of at least 16 MB to LZO or LZ4, but most programs using these cut off at 8 MB and error out on oversized blocks. In essence, a form of external bounds checking is being used. OK, so now the attack requires socially engineering someone into opening some kind of self-extracting tarball using custom code that calls LZO directly and does not error out on the oversized blocks. Possibly the files would claim to contain a directory full of celebrity nudes?
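The "external bounds checking" described above can be sketched as a wrapper that rejects oversized blocks before the decompressor ever sees them. This is a minimal illustration with hypothetical names, using zlib as a stand-in for an LZO/LZ4 binding; the 8 MB cap is the cut-off mentioned in this discussion, and real applications choose their own limit:

```python
import zlib

MAX_BLOCK_SIZE = 8 * 1024 * 1024  # 8 MB cap, as in the discussion above

class OversizedBlockError(ValueError):
    """Raised when an input block exceeds the allowed size."""

def safe_decompress(block, decompress_fn):
    # Reject oversized blocks before handing them to the decompressor,
    # so a crafted giant block never reaches vulnerable code paths.
    if len(block) > MAX_BLOCK_SIZE:
        raise OversizedBlockError(
            f"block of {len(block)} bytes exceeds {MAX_BLOCK_SIZE}")
    return decompress_fn(block)

# Normal-sized blocks pass straight through:
data = zlib.compress(b"hello" * 1000)
assert safe_decompress(data, zlib.decompress) == b"hello" * 1000
```

Custom code that skips this kind of check is exactly the scenario the article worries about.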
The article states that the main reason to fix this vulnerability is exactly that: the possibility that custom code in the future might not limit block sizes, which would transform this from a theoretical into a real vulnerability. Example: we don't know what kind of video codecs might be written in the future, possibly for 4K video, possibly as a result trying larger block sizes.
Fortunately, LZ-based algos are different. They are simple. That's what makes it really uncommon and noteworthy to encounter bugs in LZ stream parsing, and even more unusual for them to lurk around for 20 years.
There have been examples of "indirect" attacks, where the attacker mounts a proper playground in indirect ways, with limited tools, by tricking legitimate code into doing something that helps the attacker's plan. So I would consider such bugs unsafe.
Interestingly, it has nothing to do with RAM usage. The bug mostly comes down to insufficient input data validation, and the fix wouldn't increase RAM usage. Btw, LZO (and IIRC "usual" LZ4) can be decompressed with no extra memory at all, in the sense that it only takes the input data and a place to store the output; no other memory is required (except maybe a few bytes to keep some variables if they do not fit in CPU registers). LZO and LZ4 compressors also have compression levels with modest memory requirements (several kilobytes). So these algos are used even in small embedded systems with tight memory constraints, especially the decompressors.
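The "no extra memory" property comes from how LZ77-style formats work: back-references are copied from output that has already been written, so the decoder needs only the input and the output buffer. A minimal sketch (an illustrative token format, not the real LZO/LZ4 bitstreams):

```python
# Minimal LZ77-style decoder sketch. Note the memory model: it uses only
# the input token stream and the growing output buffer. Back-references
# copy from output already produced, so no separate dictionary or window
# allocation is needed -- which is why such decompressors fit tiny
# embedded systems. (Illustrative format, not the real LZO/LZ4 streams.)

def lz_decode(tokens):
    out = bytearray()
    for token in tokens:
        if isinstance(token, bytes):       # literal run: copy verbatim
            out += token
        else:                              # (offset, length) back-reference
            offset, length = token
            for _ in range(length):        # byte-by-byte allows overlap,
                out.append(out[-offset])   # e.g. run-length-style copies
    return bytes(out)

# "abc", then copy 6 bytes starting 3 back (overlapping), then "d":
assert lz_decode([b"abc", (3, 6), b"d"]) == b"abcabcabcd"
```

It is also exactly this copy loop that makes input validation critical: the offset and length fields come straight from untrusted data.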
The problem is different. Earlier, many computers were not directly connected to networks, or had limited connectivity. The data sets they processed were mostly fixed and came from (more or less) trusted sources. So devs could "just write code": code had to implement the desired logic, without bothering about corner cases and strange input. Now everything is networked and humans exchange huge amounts of data. And there is trouble on the way: this data can't be trusted anymore...
So, now it's not your best friend feeding your program data to chew on. Now it's your worst enemies, seeking free resources and valuable data, and they will feed your programs all sorts of weird crap if it helps achieve their goals. So devs should change their way of thinking. The external world has proven to be hostile; you can expect the absolute worst.

It's not a compressed stream: it's a tool for attackers to control your decompressor, and should that succeed, the attacker wins. It's not "just a picture": it exercises a format parser and a compression decoder, and should either let itself be tricked into doing something wrong, the attacker wins. It's not "just a password" the user enters into your web page: it can be an SQL injection, or something else intended to fool the internals of the program processing this input. It's no longer just text typed into a forum message: it can be JS code to redirect users to another server, or an SQL injection, or something else. It's no longer "just a file" the user uploads to your server: it can be a script trying to integrate with your CMS by fooling the CMS into executing this crap; should it succeed, the attacker will run a "remote shell" script and gain some free resources and valuable data. You see, you can't trust incoming data anymore, and those who think otherwise will face hard times.
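The SQL-injection point above can be made concrete with Python's standard sqlite3 module: splicing user input into the query string lets the "password" rewrite the query, while binding it as a parameter treats it as an opaque value:

```python
# Demonstrates why untrusted input must be bound, not concatenated.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

hostile = "' OR '1'='1"  # classic injection payload entered as a "password"

# Unsafe: string concatenation lets the payload rewrite the WHERE clause.
unsafe_sql = "SELECT name FROM users WHERE secret = '" + hostile + "'"
assert conn.execute(unsafe_sql).fetchall() == [("alice",)]  # data leaked!

# Safe: the payload is bound as a value and simply matches nothing.
rows = conn.execute(
    "SELECT name FROM users WHERE secret = ?", (hostile,)).fetchall()
assert rows == []
```

The same principle (validate or neutralize untrusted input at the boundary) applies to the decompressor, parser, and upload cases above.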
These days the world is full of automatic activity seeking free resources and data to steal. Just set up a web server or read your sshd logs and you will see what I mean. These things work without sleep or rest; they know no mercy. You and your computer are just free resources and valuable data to them. So every bug which can be abused will be abused, to get even more free resources and valuable data. This seriously raises the requirements for external data validation and for programs' ability to deal with uncommon corner cases.
If the block sizes need to be bigger than the 8 MB the LZ4 file format specifies as allowed in order for the exploit to work, why is this even considered a bug?
I could just as well say every encrypted archive is compromised, since people may use 1234...
Last edited by 0xBADCODE; 06-29-2014 at 10:10 PM.
Yeah, it's a theoretical risk that it is better to plug, just in case such a situation happens in the future.
However, the big headline on all those websites pushing the same pre-formatted news suggests that it is an immediate risk, with immediate consequences for everybody. Now people will think that they have a dangerous, exploitable bug on their machine. It's great advertisement for the security firm which "disclosed" the issue in such a spectacular way, but it's simply untrue. The problem is totally blown out of proportion. We are witnessing a perfect example of crying wolf.
For more info, read: http://fastcompression.blogspot.fr/2...s-move-on.html
Phoronix should know better.