OpenSSH Clients Struck By New Security Vulnerability


  • #11
    Originally posted by caligula View Post

    What complexity and security bugs? Apparently you have no idea what you're talking about.
    Obviously I'm talking about implementation complexity. Now paste the name of your favorite language into Google followed by "security vulnerability".

    Originally posted by caligula View Post

    > most languages lacks of the amount of exploit mitigation techniques added by a C/C++ compiler

    Does not parse.
    Search for exploit mitigation techniques.



    • #12
      Originally posted by Daktyl198 View Post
      Question: do you mean that the LibreSSL people wrote the feature that causes this bug? Because I don't remember LibreSSL being a thing 6 years ago. So what exactly about them are you criticizing in regards to this bug?
      The OpenBSD core development community is fairly small - most of 'the LibreSSL people' are the same people who wrote and maintain OpenSSL. This bug is very, very similar to the Heartbleed one that sparked the LibreSSL fork.

      That said, IMO there's a big difference between a minor oversight (OpenSSH) and the entire codebase being filled with broken support for platforms that never even existed (OpenSSL).

      Very little of the LibreSSL devs' ridicule of OpenSSL was about the buffer overflow itself; it was about all the gibberish they found in the code once they were prompted to look at it:
      - "Big-endian x86_64 doesn't exist yet, but OpenSSL supports it anyway. Proactive coding!"
      - "If the size of socklen_t changes while your program is running, OpenSSL will cope! Also, if /dev/null moves."
      - "Well, even if time() isn't random, your RSA private key is probably pretty random."

      None of that's hypocritical - the OpenSSH codebase clearly isn't perfect, but there's no reason to believe it's fundamentally broken in anything like the same way.
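
      To make the comparison concrete, here's a minimal, hypothetical sketch of the Heartbleed class of bug mentioned above (illustrative code, not taken from OpenSSL or OpenSSH): a length field supplied by the peer is trusted when building a reply, so whatever memory sits next to the real payload leaks back to the attacker.

      // Hypothetical sketch of a Heartbleed-style buffer over-read.
      #include <cstdint>
      #include <cstdlib>
      #include <cstring>

      // payload_len comes straight off the wire and is attacker-controlled;
      // actual_len is how many bytes were really received.
      unsigned char *build_echo_reply(const unsigned char *payload,
                                      size_t actual_len, uint16_t payload_len) {
          // BUG: missing the check `if (payload_len > actual_len) return nullptr;`
          unsigned char *reply = (unsigned char *)malloc(payload_len);
          if (reply == nullptr)
              return nullptr;
          // Copies up to 64 KiB, far past the end of the real payload if
          // payload_len was inflated by the peer.
          memcpy(reply, payload, payload_len);
          return reply;
      }

      The whole class of bug disappears once the peer-supplied length is validated against what was actually received before any copy happens.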



      • #13
        Originally posted by RavFX View Post
        Disabling a feature is not a fix.

        I use this feature a lot thanks to the crappiness of my Internet connection.
        Have you tried Mosh?
        https://mosh.mit.edu



        • #14
          Originally posted by FLHerne View Post
          Very little of the LibreSSL devs' ridicule of OpenSSL was about the buffer overflow itself; it was about all the gibberish they found in the code once they were prompted to look at it:
          - "Big-endian x86_64 doesn't exist yet, but OpenSSL supports it anyway. Proactive coding!"
          - "If the size of socklen_t changes while your program is running, OpenSSL will cope! Also, if /dev/null moves."
          - "Well, even if time() isn't random, your RSA private key is probably pretty random.".
          Sure, but allow me to think that this kind of commenting on other people's code is pretty scummy. Take them in order: sure, there is no big-endian x86_64 platform, but OpenSSL supports big-endian systems in general, hence the macros are there (and you can compile them in if you are bored). Sure, sizeof(socklen_t) is a compile-time constant, but why make fun of someone for using pointer arithmetic in their code? And is the last one meant to be an allusion to OpenSSL's random key generation not being good enough? Because I don't think that's true, even if time() is probably involved in seeding the secure RNG.
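
          For what it's worth, the socklen_t point can be settled at compile time rather than with defensive run-time code; a hypothetical one-liner (my example, not OpenSSL's):

          #include <sys/socket.h>  // defines socklen_t

          // sizeof(socklen_t) is fixed when this translation unit is compiled,
          // so an assumption about its size can simply be asserted here.
          static_assert(sizeof(socklen_t) == sizeof(int),
                        "this code assumes an int-sized socklen_t on this platform");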

          I don't have any technical disagreements with OpenBSD; what they are doing is probably all right (although I doubt they're even as good as Apple or Microsoft security-wise, since they just don't have the market share to be a target). They do seem to have some major attitude problems, IMO...



          • #15
            Originally posted by FLHerne View Post

            The OpenBSD core development community is fairly small - most of 'the LibreSSL people' are the same people who wrote and maintain OpenSSL. This bug is very, very similar to the Heartbleed one that sparked the LibreSSL fork.
            I agree with the size of the team but... if most of the LibreSSL people were maintainers of OpenSSL, wouldn't they be criticizing their own work? Not to mention, wouldn't they have already pushed for LibreSSL-type changes in OpenSSL while they were part of the team? Or do you mean "OpenBSD developers" were part of the OpenSSL team, rather than "LibreSSL developers"? Because there is a difference in sample size and in the individuals involved.

            Do some googling and tell me what percentage of LibreSSL developers (prior to Heartbleed) committed more than 2-3 patches to OpenSSL.



            • #16
              Originally posted by Daktyl198 View Post
              I agree with the size of the team but... if most of the LibreSSL people were maintainers of OpenSSL, wouldn't they be criticizing their own work?
              Sorry, that should have been 'OpenSSH' there. I'd edit the post, but the timeout is still ridiculously short.



              • #17
                This is why, as someone who uses Qt and C++ all the time, it's always a PITA to work in Linux code. So much of it is C, which lacks the language support for type, resource, and memory safety that C++14 now has. The problem of course still exists in any C++ project that requires C++03 support, but very few maintained projects do that anymore. Having to do the mental gymnastics of raw pointers all the time, when these are solved problems, is frustrating given the "C master race" culture going on.
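
                As a minimal sketch of what I mean (hypothetical code, not from any of the projects named here), compare the same resource handling written with raw pointers and manual cleanup against C++14 RAII types, where every return path releases the file and the buffer automatically:

                #include <cstdio>
                #include <cstdlib>
                #include <memory>
                #include <vector>

                // C style: every early return has to remember to release everything.
                bool read_config_c_style(const char *path) {
                    FILE *f = fopen(path, "r");
                    if (f == nullptr)
                        return false;
                    char *buf = (char *)malloc(4096);
                    if (buf == nullptr) {
                        fclose(f);              // easy to forget on one path -> leak
                        return false;
                    }
                    size_t n = fread(buf, 1, 4096, f);
                    free(buf);
                    fclose(f);
                    return n > 0;
                }

                // C++14 style: destructors close the file and free the buffer on every path.
                bool read_config_cpp_style(const char *path) {
                    std::unique_ptr<FILE, int (*)(FILE *)> f(fopen(path, "r"), &fclose);
                    if (!f)
                        return false;
                    std::vector<char> buf(4096);
                    size_t n = fread(buf.data(), 1, buf.size(), f.get());
                    return n > 0;
                }

                Multiply that by every buffer, lock, and descriptor in a million-line codebase and you get the boilerplate problem below.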

                A lot of projects like PulseAudio, Avahi, systemd, Mesa, and the kernel are huge goliaths of C code: hundreds of thousands or millions of lines. Compared to Clang/LLVM, which use C++, the amount of boilerplate in these projects is out of control, because it turns out raw pointers and the C stdlib do not scale well to millions of lines. I cannot even fathom how much more maintainable and less bug-ridden our tech stack would be if these codebases adopted modern C++.

                GCC converted to C++ for pretty much this reason - when you are doing extreme optimization of code in a translation unit, having no generics or unwinding destructors was a huge pain for years.



                • #18
                  Originally posted by zanny View Post
                  This is why, as someone who uses Qt and C++ all the time, it's always a PITA to work in Linux code. So much of it is C, which lacks the language support for type, resource, and memory safety that C++14 now has. The problem of course still exists in any C++ project that requires C++03 support, but very few maintained projects do that anymore. Having to do the mental gymnastics of raw pointers all the time, when these are solved problems, is frustrating given the "C master race" culture going on.

                  A lot of projects like PulseAudio, Avahi, systemd, Mesa, and the kernel are huge goliaths of C code: hundreds of thousands or millions of lines. Compared to Clang/LLVM, which use C++, the amount of boilerplate in these projects is out of control, because it turns out raw pointers and the C stdlib do not scale well to millions of lines. I cannot even fathom how much more maintainable and less bug-ridden our tech stack would be if these codebases adopted modern C++.

                  GCC converted to C++ for pretty much this reason - when you are doing extreme optimization of code in a translation unit, having no generics or unwinding destructors was a huge pain for years.
                  See this post here http://www.phoronix.com/forums/forum...512#post845512
                  The higher level languages can't solve any problems. They're worse cause rmiller says so.



                  • #19
                    Originally posted by caligula View Post

                    See this post here http://www.phoronix.com/forums/forum...512#post845512
                    The higher level languages can't solve any problems. They're worse cause rmiller says so.
                    He didn't say they can't solve 'any' problems. They enable us less capable and less technical developers to write code more efficiently, faster, and with fewer bugs (and fewer security holes). So for business they are quite important. There are simply neither enough highly skilled people who understand low-level techniques, nor enough time to wait for software written in C, assembler, or whatever. Besides, these highly skilled developers would be way too expensive. People want to make money. It is better to create software with security issues (and make money) than none at all (and make no money, because your competitors, who also don't care, will).

                    Yet everything else rmiller stated is true.



                    • #20
                      The point about needing to know what you're doing in C is as true today as it was 20 years ago. You can't just take a trial-and-error approach to writing your code, because you'll end up with plenty of bad-practice code. This is especially true when networking is involved.
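
                      A hypothetical example of the kind of bad-practice networking code that trial and error produces: it appears to work in testing, but the return value of recv() is ignored and the buffer is printed as if it were NUL-terminated.

                      #include <cstdio>
                      #include <sys/types.h>
                      #include <sys/socket.h>

                      // Looks fine in a quick test, but has two classic bugs.
                      void read_greeting_bad(int fd) {
                          char buf[256];
                          recv(fd, buf, sizeof(buf), 0);  // BUG: error / short read ignored
                          printf("%s\n", buf);            // BUG: buf may not be NUL-terminated
                      }

                      // The careful version checks the result and terminates explicitly.
                      void read_greeting_ok(int fd) {
                          char buf[256];
                          ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
                          if (n <= 0)
                              return;                     // error or peer closed the connection
                          buf[n] = '\0';
                          printf("%s\n", buf);
                      }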

