OpenSSL Forked By OpenBSD Into LibreSSL


  • #51
    Originally posted by garegin View Post
    I agree. Although I don't like forks, the OpenBSD devs' hands were tied. Because MS or Apple either create the software themselves or keep internal forks. They are not going to sit tight and accept dangerous code from a third party. No one does that.
    But that's just it... forks are no different than branches, merges, tagging, versioning, etc. It's not something to be 'liked' or 'disliked'. It's an entirely neutral construct. I will concede that forking is a bit different in that there is a change of the governing entity, but that's not always entirely negative either.

    Nobody tied the hands of the OpenBSD developer(s). In fact, their ability to fork demonstrates that they were free to do so. I'm not sure where MS and Apple enter the discussion, as this has nothing to do with them, but you seem to imply that their SSL implementations are somehow free from the acceptance of 'dangerous code'.

    It's as if I'm replying to one of those trollbots that uses keywords from previous thread posts to construct English sentences, but is intellectually absent when it comes to the meaning of the words or subject matter.

    Comment


    • #52
      Originally posted by gamerk2 View Post
      Not that hard to test for either. I see this as a failure to maintain proper testing procedure following a code change.
      Except that automated code analysis did NOT pick up this problem.
      See Coverity's post about it (note: they have since fixed their checker and can now detect the problem).
      (If I were the author of a static analysis tool, I would very likely be following the OpenSSL Valhalla Rampage: probably an upcoming treasure trove of new ideas to test code for.)

      Neither would unit tests have caught it. Existing unit tests still pass once the feature is implemented, and new unit tests won't necessarily exercise malformed input.

      This was only discoverable with input fuzzing.
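      As a rough illustration of why (not from the thread; the parser below is a made-up stand-in, not OpenSSL's actual code): a libFuzzer-style harness feeds mutated byte strings into the parser, and a missing length check like Heartbleed's shows up as an out-of-bounds read almost immediately under AddressSanitizer, while well-formed unit-test inputs sail straight through.

      Code:
      #include <stddef.h>
      #include <stdint.h>
      #include <string.h>

      /* Hypothetical stand-in for the record parser under test: the first
       * byte claims a payload length, the rest is the payload.  A correct
       * parser rejects records whose claimed length exceeds what was
       * actually received -- exactly the check Heartbleed was missing. */
      static int parse_record(const uint8_t *data, size_t size)
      {
          if (size < 1)
              return -1;
          size_t claimed = data[0];
          if (claimed > size - 1)     /* the bounds check Heartbleed lacked */
              return -1;
          uint8_t copy[256];
          memcpy(copy, data + 1, claimed);
          return 0;
      }

      /* libFuzzer entry point (build with clang -fsanitize=fuzzer,address):
       * the fuzzer calls this repeatedly with mutated inputs; remove the
       * bounds check above and ASan flags the over-read within seconds. */
      int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
      {
          parse_record(data, size);
          return 0;
      }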


      Originally posted by archibald View Post
      1) They use CVS because they like it. I don't know why, but I doubt it really matters.
      Well, the problem is that CVS (and SVN, etc.) are centralized source control systems, and unlike the distributed ones (Git, Mercurial, or Canonical's Bazaar) they aren't very good at forking/merging/rebasing multiple versions.
      That makes it a bit more complicated to backport fixes into upstream OpenSSL, or to start developing and maintaining multi-architecture ports in parallel while LibreSSL itself is still under intensive development.


      Originally posted by Veerappan View Post
      Instead, they decide to just prune out a bunch of deprecated features and reduce platform support.
      I totally agree with them removing deprecated features. That makes the code simpler to analyse and debug.
      I'm much more ambivalent about the reduction of platform support.
      - Targeting OpenBSD only, and not Linux too (or generic POSIX platforms), seems problematic to me, because there is a ton of software running on Linux and using SSL. If they keep Linux as a potential target, some crazy Gentoo guy somewhere might try rebuilding the whole of Gentoo against LibreSSL instead of OpenSSL (and thus test that it can still function as a drop-in replacement).
      Of course, if their approach to the OpenBSD target is generic enough, testing Linux software may already be doable.
      Note that in security software like this there may be more platform dependence than usual: encryption libraries need to be able to flag specific blocks of memory so that they are never swapped to disk, never left in registers/the cache, etc., to avoid side channels (a minimal sketch of the memory-locking part follows below).
      (And in the long term, Windows might be useful. Although OpenSSL isn't the dominant solution *on Windows* (most software there probably uses the Microsoft-provided crypto facilities), Windows is a very common desktop platform, so getting free software to run on it might still be relevant. On the other hand, most of that software is probably compiled with MinGW or Cygwin and thus requires somewhat fewer hacks than Visual C++ would.)
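
      Purely as an illustration of the memory-locking point above, assuming a POSIX system (this is a sketch, not anything LibreSSL actually ships): mlock()/munlock() pin pages in RAM so key material is never swapped out, and explicit_bzero() (OpenBSD, glibc >= 2.25) wipes it in a way the compiler can't optimise away.

      Code:
      #include <stdlib.h>
      #include <string.h>
      #include <sys/mman.h>

      #define KEY_LEN 32

      int main(void)
      {
          unsigned char *key = malloc(KEY_LEN);
          if (key == NULL)
              return 1;

          /* Pin the pages holding the key so they can never be swapped to disk. */
          if (mlock(key, KEY_LEN) != 0) {
              free(key);
              return 1;
          }

          /* ... derive and use the key here ... */

          /* Wipe before unlocking/freeing; unlike a plain memset, explicit_bzero
           * is guaranteed not to be optimised away. */
          explicit_bzero(key, KEY_LEN);
          munlock(key, KEY_LEN);
          free(key);
          return 0;
      }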

      Originally posted by Ericg View Post
      The bigger issue at hand is OpenSSL replacing system calls (such as malloc) with their own custom versions for one reason or another.
      Two main reasons:
      - Optimisation: once upon a time there were weird platforms where malloc wasn't good, and basically everybody reimplemented it for better performance.
      You can still find discussions on this subject from circa 1997. Of course, nowadays the point tends to be moot.
      - Certification: as mentioned elsewhere in this discussion, some certifications may require OpenSSL to be able to analyse its own memory to check whether any tampering has occurred.
      Nowadays such things are better handled by the OS.

      Originally posted by liam View Post
      Too bad rust isn't stable yet because that would be an ideal choice for rewrite using best practices.
      This library is too important to be unverified.
      Although not specifically Rust, I would also vote for taking the opportunity to change some practices. For example, replace the bare pointer manipulation and standard C library calls with something that gets automatically bounds-checked (i.e. replace plain buffers with something that holds both a pointer to the data AND the size of the buffer).
      (If the memory manipulation were automatically bounds-checked, Heartbleed couldn't have happened in the first place: the memory copy would simply have been prevented from reading beyond the bounds of the buffer containing the packet.)
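      A minimal sketch of that idea in plain C (the names here are made up for illustration): carry the buffer length alongside the pointer, and refuse any copy that would step outside it.

      Code:
      #include <stddef.h>
      #include <string.h>

      /* Illustrative "fat pointer": the data plus the size of the region it points to. */
      struct bounded_buf {
          unsigned char *data;
          size_t         len;
      };

      /* Copy n bytes out of src starting at offset, or fail instead of over-reading.
       * With a Heartbleed-style packet, the attacker-controlled n exceeds src->len
       * and the copy is simply refused. */
      static int bounded_copy(struct bounded_buf *dst,
                              const struct bounded_buf *src,
                              size_t offset, size_t n)
      {
          if (offset > src->len || n > src->len - offset || n > dst->len)
              return -1;              /* out of bounds: refuse */
          memcpy(dst->data, src->data + offset, n);
          return 0;
      }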

      Switching to a different language that has this facility built in *could* be one possibility (but then you hit another problem: some of those facilities might not be written in a way that is immune to side channels; string comparison is a typical example).
      Writing a support library in C is another.

      And systematic input-fuzzing tests should be included everywhere.

      Comment


      • #53
        Originally posted by DrYak View Post
        - Targeting OpenBSD only and not Linux too (or generic POSIX platform) seems problematic to me, because there are tons of software running on Linux and using SSL.
        Indeed. I often find it puzzling whenever someone develops for a platform that I do not personally benefit from ;-) I honestly believe that they fully intend to support porting efforts 'after' they have established a sane and functional application.

        The post you were responding to seemed to indicate that the purpose of the fork was to "prune out a bunch of deprecated features and reduce platform support". This is demonstrably not the case.

        Comment


        • #54
          Originally posted by russofris View Post
          Indeed. I often find it puzzling whenever someone develops for a platform that I do not personally benefit from ;-)
          It's not as much about personal benefit as about targeting a big enough test-pool.

          I fully understand that "have it compile on every single known platform, even those that predate the invention of the wheel and fire" is a bad idea (proponents of extreme portability claim that it helps diagnose corner cases; OpenSSL is a nice demonstration that it actually brings in more dangerous cruft).

          But I would still vote to maintain compatibility with major test targets.
          Heck, if OpenSSL happened to be *hugely popular* on Windows (which is not currently the case), I would still consider keeping a Windows port for the sake of a very frequently exercised test target.

          In short:
          - compiling it for 4163 different targets, 78% of which are by now considered extinct = bad idea
          - increasing the number of compile targets from 1 to 3 = could potentially be a good idea, on the condition that it brings 30x more stability testing
          Of course, that requires a cost/benefit analysis: the cost being portability cruft, the benefit being more testing and feedback from the community.


          Originally posted by russofris View Post
          I honestly believe that they fully intend to support porting efforts 'after' they have established a sane and functional application.
          And currently, it seems the differences between OpenBSD and other systems are minimal. They basically amount to the debate around secure string-manipulation functions in the standard C library:
          - the old standard mandates strncpy, strncat, etc. (which do not necessarily put a guarding '\0' at the end of the string);
          - OpenBSD introduced strlcpy, strlcat, etc., which behave a bit more securely (they enforce the terminating '\0' no matter what);
          - the glibc people have argued against them (on the grounds that they can let bad code slip through, code that should be caught by static analysis anyway);
          - the newer C11 standard introduces strcpy_s, strcat_s, etc., so glibc will be dragged kicking and screaming into the modern secure world;
          - LibreSSL would simply need a thin wrapper between the two sets of secure functions.

          So it might already be possible for some courageous Linux folk to keep an eye on it, adding a few similar thin wrappers as needed (a rough sketch of such a wrapper follows below).
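
          As a rough sketch of the kind of thin wrapper meant here (not LibreSSL's actual compatibility code; the HAVE_STRLCPY guard is just a placeholder): emulate OpenBSD's strlcpy on platforms that lack it, on top of standard snprintf, which also always NUL-terminates.

          Code:
          #include <stdio.h>
          #include <string.h>

          /* Compatibility shim sketch: if the platform has no strlcpy, emulate its
           * behaviour (always NUL-terminate, return the length of the source)
           * using standard snprintf. */
          #ifndef HAVE_STRLCPY
          static size_t strlcpy(char *dst, const char *src, size_t dstsize)
          {
              int n = snprintf(dst, dstsize, "%s", src);
              return n < 0 ? 0 : (size_t)n;
          }
          #endif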

          Comment


          • #55
            Originally posted by DrYak View Post
            - compiling it for 4163 different targets, 78% of which are by now considered extinct = bad idea
            - increasing the number of compile targets from 1 to 3 = could potentially be a good idea, on the condition that it brings 30x more stability testing
            Of course, that requires a cost/benefit analysis: the cost being portability cruft, the benefit being more testing and feedback from the community.
            They do intend for the end result to be portable, but they want the code that makes it portable to be added to a clean, stable base, rather than added now when it's still undergoing massive changes. They keep OpenBSD ported to multiple hardware platforms to catch bugs and undefined behaviours. From the OpenSSH website, explaining that it is developed by 2 teams:

            One team does strictly OpenBSD-based development, aiming to produce code that is as clean, simple, and secure as possible. We believe that simplicity without the portability "goop" allows for better code quality control and easier review. The other team then takes the clean version and makes it portable (adding the "goop") to make it run on many operating systems
            I'd be gobsmacked if LibreSSL didn't follow the same pattern.

            Re: comments on VCSs: I meant that I don't know why they like CVS - I'm *painfully* aware of what it's like trying to manage branches with a centralised system - do NOT get me started on Perforce... :-)

            Comment


            • #56
              ........... aaaannd it's irrelevant:

              Comment


              • #57
                Originally posted by Jedibeeftrix View Post
                ........... aaaannd it's irrelevant:

                http://arstechnica.com/information-t...-fund-openssl/
                I'll just leave this here. https://gist.github.com/busterb/11265810


                Oh, and you also missed everything else we were talking about here, like how the OpenBSD guys were frustrated with OpenSSL for a long time before Heartbleed.

                Comment


                • #58
                  Originally posted by ua=42 View Post
                  Good question. Backwards compatibility must have been king, because they adopted a lot of bad coding practices in order to maintain compatibility with really antique systems. That is the reason they had their own implementation of malloc, which is why automated testing tools didn't detect the Heartbleed bug. (I am really curious as to why someone re-implemented printf in the OpenSSL code.)

                  Hopefully, once OpenBSD is done modernizing and simplifying the core code, they will start working together on system support with a sane compatibility-layer system. Either way, my hat is off to the OpenBSD guys for taking on this much-needed task.

                  ABSOLUTEly,

                  ...and while they're at it, maybe IBM/Red Hat should hire these guys to clean up the arrogant mess left behind by those "systemd" folk (aka Mr. and Mrs. PulseAudio),
                  as has already been pointed out everywhere.


                  Comment


                  • #59
                    I'm mostly glad stuff is getting fixed by people who know anything, because I've been telling non-techs that it's perfectly all good to use their bank's website at McDonald's or a library because the website handles the security, not their wireless card-- even though everyone (incl. Windows) warns them that it's not safe like at home. So that was annoying, to be so retroactively wrong. Side note: it'd be fun to see somewhere "secured with OpenSSL" replaced with "cauterized with OpenSizzle".

                    Comment


                    • #60
                      Originally posted by rice_nine View Post
                      I'm mostly glad stuff is getting fixed by people who know anything, because I've been telling non-techs that it's perfectly all good to use their bank's website at McDonald's or a library because the website handles the security, not their wireless card-- even though everyone (incl. Windows) warns them that it's not safe like at home. So that was annoying, to be so retroactively wrong. Side note: it'd be fun to see somewhere "secured with OpenSSL" replaced with "cauterized with OpenSizzle".
                      And that's not the only reason you were wrong. Just Google "Moxie Marlinspike defcon ssl", or search for it on YouTube.

                      Comment
