I just patched all of our production RHEL servers today. The fixed package is still version 1.0.1e on RHEL 6: Red Hat backported the fix without incrementing the OpenSSL version number.
I'm not being paranoid or anything... but I consider this a not-so-honest mistake. The NSA has probably found a better exploit than Heartbleed, and we're all none the wiser.
There is also a workaround for the affected versions: recompile with "-DOPENSSL_NO_HEARTBEATS" as a compile-time option. It's possible that the Ubuntu patched version was simply recompiled with that feature disabled (which is what Red Hat/CentOS have done with version 1.0.1e).
And indeed the Heartbeat extension looks completely stupid and useless to begin with.
- There are other ways to keep alive a running SSL/TLS connection. Heartbeat doesn't bring anything new.
- If someone *REALLY* needs a custom payload in a heartbeat, they should have gone for a fixed size (a 64-bit or 128-bit number, for example). That would have been much more efficient and still customisable enough (it could hold a 128-bit GUID, for example).
- If someone *REALLY* needs a variable-size payload, the extra length parameter is redundant (and is really begging for exactly this kind of mistake). It would have been much simpler to treat the payload as "up to the end of the current packet".
I would consider that *the programmer* probably made an honest mistake. In fact, the standard is practically calling for one to happen.
(At least in any package that relies on plain C pointer manipulation and the standard C library functions, like OpenSSL. Of course it's different if one uses a technique that does bounds checking, such as C++ containers...)
This kind of error is typical, and sadly the automatic checking tools didn't pick up this specific instance. (Luckily, they have since gotten better.)
I find the standard *very suspicious*. As pointed out above, I don't really see what problem heartbeats were meant to solve, and the way the standard was written is stupid and practically begging for this problem.
Given that the NSA is known to influence standards (as per the Snowden leaks), it might have been their plan all along.
A lot of fanboys told us to use OpenSSL when a vulnerability in GnuTLS was found. I hope this shuts them up. Software is never perfect.
And both OpenSSL and GnuTLS are pretty fine, as is Mozilla's NSS: they are all very widely used libraries, which means they get enough attention that bugs are eventually discovered and patched (as was the case with OpenSSL). Think about what monstrosities are lurking inside Microsoft Windows; practically nobody could know. I mean nobody outside the NSA.
But it would be good if:
- automatic testing tools got better at spotting such problems (and some are being worked on)
- security-critical things like crypto could rely on somewhat more secure foundations (a bounds-checking library instead of direct pointer manipulation).
Writing standards is almost like writing your own rules. If they influence the rules enough, it gives them free rein, because the rules were theirs to begin with. Almost like the law superseding the code. The problem is that if the software is full of holes due to poor standards, it's not just the NSA that will take advantage of the loopholes; i.e. the public is then no longer protected from anyone who exploits such holes.
Also, if the programmer honestly didn't intend to leave the hole, yet the policy creates the opportunity for more mistakes, then the programmer gets the blame while the real cause is the policy. A seemingly sly way to hide from such accusations. Maybe a policy update is in order.
If you're writing code that's supposed to be secure and robust, you don't trust external data.
Ever.
The spec looks crazy, but any programmer working on this kind of code should know to sanitize incoming data before using it.
Yeah, but it's always a (calculated) gamble. If outside data were so evil that it could never be trusted at all, then we should all disconnect from the internet at once.