See Coverity's post about the problem (note: they have since fixed their checker and are now able to pick up the problem).
(If I were the author of a static analysis tool, I would very likely be following OpenSSL Valhalla Rampage. It's probably an upcoming treasure trove of new ideas to test code for.)
Neither would unit tests. Existing unit tests will still pass once the feature is implemented, and new unit tests won't necessarily exercise malformed input.
This kind of bug was only discoverable with input fuzzing.
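To make that concrete, here is a minimal sketch of the pattern (not the actual OpenSSL code): a handler that trusts a length field taken from the packet itself. A unit test with a well-formed packet passes against both versions; only a fuzzer feeding mismatched lengths trips the buggy one.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical packet shape, for illustration only. */
struct packet {
    size_t claimed_len;           /* length field inside the packet (attacker-controlled) */
    size_t actual_len;            /* how many payload bytes were really received */
    const unsigned char *payload;
};

/* Buggy version: copies claimed_len bytes, reading past the payload
 * whenever claimed_len > actual_len -- the Heartbleed pattern. */
size_t echo_buggy(const struct packet *p, unsigned char *out)
{
    memcpy(out, p->payload, p->claimed_len);   /* possible out-of-bounds read */
    return p->claimed_len;
}

/* Fixed version: reject packets whose claimed length exceeds what
 * was actually received. */
size_t echo_fixed(const struct packet *p, unsigned char *out)
{
    if (p->claimed_len > p->actual_len)
        return 0;                              /* drop the malformed packet */
    memcpy(out, p->payload, p->claimed_len);
    return p->claimed_len;
}
```

With `claimed_len == actual_len`, `echo_buggy` and `echo_fixed` behave identically, which is exactly why well-formed test cases never caught the bug.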
That would make it a bit more complicated to backport some fixes into upstream OpenSSL, or to start developing and maintaining multi-architecture ports in parallel while LibreSSL itself is under intensive development.
I'm much more ambivalent about the reduction of platform support.
- Targeting only OpenBSD, and not Linux too (or generic POSIX platforms), seems problematic to me, because there is a ton of software running on Linux that uses SSL. If they kept Linux as a potential target, some crazy Gentoo guy somewhere might be trying to rebuild all of Gentoo against LibreSSL instead of OpenSSL (and thus testing that it can still function as a drop-in replacement).
Of course, if their approach to the OpenBSD target is generic enough, testing Linux software may already be doable.
Note that in a security application like this, there may be more platform dependence than usual: an encryption library needs to be able to flag specific blocks of memory so that they are never swapped to disk, never linger in registers or caches, etc., to avoid side channels.
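The "never swapped to disk" part is typically done with `mlock(2)` on POSIX systems. A minimal sketch (error handling trimmed; the function names here are illustrative, not from any real library):

```c
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

/* Allocate a buffer for key material and pin its pages in RAM so the
 * kernel never writes them to swap. May fail if RLIMIT_MEMLOCK is
 * exceeded or the process lacks the needed privilege. */
unsigned char *alloc_secret(size_t len)
{
    unsigned char *buf = malloc(len);
    if (buf == NULL)
        return NULL;
    if (mlock(buf, len) != 0) {
        free(buf);
        return NULL;
    }
    return buf;
}

/* Wipe the secret before unpinning and freeing. Note: a plain memset
 * before free() can be elided by the optimizer; real code uses
 * explicit_bzero() or an equivalent guaranteed wipe. */
void free_secret(unsigned char *buf, size_t len)
{
    memset(buf, 0, len);
    munlock(buf, len);
    free(buf);
}
```

The registers/cache part is harder: there is no portable API for it, which is precisely why this kind of code ends up platform-specific.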
(And in the long term, Windows support might be useful. Although OpenSSL isn't the dominant solution *on Windows* (most software there probably uses the Microsoft-provided crypto facilities), Windows is a very common desktop platform, so getting free software to run on it is still relevant. On the other hand, most of that software is probably compiled with MinGW or Cygwin and thus requires somewhat fewer hacks than Visual C++.)
- Optimisation: once upon a time, there existed weird platforms where malloc wasn't good and basically everybody reimplemented it for better performance.
You can still find discussions of this subject from circa 1997. Of course, nowadays this point tends to be moot.
- Certification: as mentioned elsewhere in this discussion, some certifications may require OpenSSL to inspect its own memory to detect whether tampering has occurred.
Nowadays such things are better handled by the OS.
(If memory accesses were automatically bounds-checked, Heartbleed couldn't have happened in the first place: the copy would simply have been prevented from reading past the buffer containing the packet.)
Switching to a different language with that facility built in *could* be one possibility (but then you hit another problem: some of the built-in facilities may not be written in a way that is immune to side channels, string comparison being the typical example).
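The string-comparison problem: a naive memcmp returns at the first differing byte, so an attacker timing, say, MAC verification can recover a secret byte by byte. The fix is to always touch all the bytes, which is the idea behind OpenSSL's `CRYPTO_memcmp`. A sketch:

```c
#include <stddef.h>

/* Constant-time equality check: accumulate differences with OR instead
 * of returning at the first mismatch, so the running time does not
 * leak how many leading bytes matched. Returns 1 if equal, 0 if not. */
int ct_memeq(const unsigned char *a, const unsigned char *b, size_t n)
{
    unsigned char diff = 0;
    size_t i;

    for (i = 0; i < n; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}
```

A language's built-in `==` on strings or byte arrays is almost always the early-exit kind, so "just use a safe language" doesn't automatically solve this.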
Writing a support library in C is another. And either way, systematic input fuzzing should be included everywhere.