New Linux Kernel Vulnerability Exploited


  • #11
    Originally posted by tomato View Post
    people inside novell and redhat do, they are paid to do it

    not to mention the uncountable independent contractors hired for managing code

    the code is being looked at
    Of course; just as Microsoft's people are paid to look at Windows, or Oracle's at Solaris. Sure, the code is there and everyone can debug it, but if only people being paid are looking at it, how does it differ from the situation in the closed-source model?



    • #12
      It differs in that anyone, including you, has the chance to look at it at any time, should you so choose.



      • #13
        Originally posted by Sergio View Post
        These kinds of things... History has taught us again and again that software is inherently buggy (insecure?); there are simply too many 'variables', so it is virtually impossible to escape this reality. It doesn't matter how much effort is put into design, it doesn't matter whether it is Linux, Windows, Solaris, BSD, MINIX, Plan 9, AIX, MULTICS, it doesn't matter if it is 'direct' or managed code...
        I think that shifting away from this (apparently) natural issue with software in general requires something radical and essentially new. I hope to be able to see such a thing materialize.
        There already is something radical and new like that. It's called Hardened Gentoo. Or the PaX kernel sources, to be specific, which is what guards against buffer overflows like this one. That is, buffer overflows still happen due to poor code, but attackers can't use them, since they can't tell where the code they want executed is located, as it's constantly randomised in memory. It does come at the cost of some overhead, though.
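
        A minimal sketch of what that randomisation looks like in practice (my own illustration, not taken from PaX itself): build the snippet below as a position-independent executable and run it twice; with address-space randomisation active, the printed addresses differ between runs, so an attacker can't hard-code where injected code or useful gadgets live.

        Code:
        /* aslr_demo.c -- build with: gcc -fPIE -pie aslr_demo.c -o aslr_demo
         * Run it twice; differing addresses indicate active randomisation. */
        #include <stdio.h>
        #include <stdlib.h>

        int global_var;                                 /* data segment */

        int main(void)
        {
            int stack_var;                              /* stack */
            int *heap_var = malloc(sizeof *heap_var);   /* heap */

            printf("code  : %p\n", (void *)main);
            printf("data  : %p\n", (void *)&global_var);
            printf("stack : %p\n", (void *)&stack_var);
            printf("heap  : %p\n", (void *)heap_var);

            free(heap_var);
            return 0;
        }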



        • #14
          Originally posted by GreatEmerald View Post
          There already is something radical and new like that. It's called Hardened Gentoo. Or the PaX kernel sources, to be specific, which is what guards against buffer overflows like this one. That is, buffer overflows still happen due to poor code, but attackers can't use them, since they can't tell where the code they want executed is located, as it's constantly randomised in memory. It does come at the cost of some overhead, though.
          I see that it implements the non-executable bit at the page level, emulating the functionality if the hardware doesn't support it. The SEGMEXEC feature looks interesting, and it also offers ASLR, among other things. Overall, very interesting. Yet I found this: "March 4, 2005: VMA Mirroring vulnerability announced, new versions of PaX and grsecurity released, all prior versions utilizing SEGMEXEC and RANDEXEC have a privilege escalation vulnerability".
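
          For anyone curious what a non-executable page means in concrete terms, here is a small sketch (my own, Linux/x86-64-specific, purely illustrative): it maps a page without PROT_EXEC, copies a single "ret" instruction into it, and jumps there; with NX (or PaX's emulation of it) enforced, the call faults instead of returning.

          Code:
          /* nx_demo.c -- illustrative only; Linux/x86-64 assumptions.
           * Build with: gcc nx_demo.c -o nx_demo */
          #include <stdio.h>
          #include <string.h>
          #include <sys/mman.h>

          int main(void)
          {
              unsigned char code[] = { 0xc3 };    /* x86-64 "ret" */

              /* Readable and writable, but deliberately NOT executable. */
              void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
              if (page == MAP_FAILED) {
                  perror("mmap");
                  return 1;
              }
              memcpy(page, code, sizeof code);

              puts("jumping into a non-executable page...");
              ((void (*)(void))page)();           /* SIGSEGV when NX is enforced */
              puts("returned: NX was not enforced here");
              return 0;
          }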

          When I said that something radical was needed, I was thinking more of a complete shift in the way we create computer programs (to give an example, functional instead of imperative systems programming).



          • #15
            Also, let's not forget the fact that closed code can't be audited by outsiders. Open code tends to have its flaws documented far more thoroughly than closed code does. Turnaround time for flaws in open code is also much faster; turnaround for flaws in closed code can often take years.
            Last edited by duby229; 15 May 2013, 06:56 PM.



            • #16
              Originally posted by Sergio View Post
              When I said that something radical was needed, I was thinking more of a complete shift in the way we create computer programs (to give an example, functional instead of imperative systems programming).
              You were right the first time. Software is inherently vulnerable. You can, however, reduce the "attack surface" by radically simplifying. The problems with display servers will decrease when we move to Wayland, for instance. As for memory issues, ASLR is a kludge. Look at Mozilla and Samsung's new Rust language, if you want a true un-managed solution.
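
              To make that contrast concrete, here is a tiny sketch of the bug class in question (my own illustration): a fixed-size stack buffer written with no bounds check. Randomisation only makes the resulting overflow harder to exploit; a language or coding style with enforced bounds stops the bad write in the first place.

              Code:
              /* overflow_demo.c -- illustrative only */
              #include <stdio.h>
              #include <string.h>

              static void copy_name_unsafe(const char *input)
              {
                  char name[16];
                  strcpy(name, input);            /* BUG: a longer input overruns 'name'
                                                     and clobbers adjacent stack memory */
                  printf("unsafe copy: %s\n", name);
              }

              static void copy_name_safe(const char *input)
              {
                  char name[16];
                  strncpy(name, input, sizeof name - 1);
                  name[sizeof name - 1] = '\0';   /* explicit bound + termination */
                  printf("safe copy:   %s\n", name);
              }

              int main(void)
              {
                  copy_name_unsafe("short name");  /* fine only because the input is short */
                  copy_name_safe("a deliberately over-long name that would otherwise overflow");
                  return 0;
              }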

              I would say the software development process itself is a big part of it, as it determines the rate at which fixes come out and the frequency of regressions. There are still a lot of people out there who don't use a DVCS and couldn't write a unit test to save their lives. Just figure out what Java is doing, and then do the opposite.

              A lot of it is also related to network security.

              Other ideas:

              Micro-Kernels and modularity (stability and resilience, at the cost of overhead)
              P2P package repositories (eliminate vulnerable centralized servers, and speed downloads)
              P2P DNS servers (ditto)
              512-bit encryption, or higher (always going to be an arms race)
              Quantum networks (theoretical still)
              Static Analysis (should get better with advanced AI)

              Browsers are also phasing out plugins as web standards improve. This will slow the spread of malware, but stupid people are always going to be around with 30 toolbars installed, acting as hosts for every kind of botnet.
              Last edited by EmbraceUnity; 15 May 2013, 09:47 PM.



              • #17
                Originally posted by Sergio View Post
                Of course; just as Microsoft's people are paid to look at Windows, or Oracle's at Solaris. Sure, the code is there and everyone can debug it, but if only people being paid are looking at it, how does it differ from the situation in the closed-source model?
                Because open source gives you the warm fuzzies.



                • #18
                  Originally posted by Sergio View Post
                  Of course; just as Microsoft's people are paid to look at Windows, or Oracle's at Solaris. Sure, the code is there and everyone can debug it, but if only people being paid are looking at it, how does it differ from the situation in the closed-source model?
                  But they don't. They are on corporate deadlines. They don't 'just poke around the code to see if they find anything'. Slashdot ran a story not too long ago, http://tech.slashdot.org/story/13/05...t-falls-behind, about how a blogger describes how Windows is developed internally. Yes, there was an update later that tried to make it sound less harsh, more 'oh, but I didn't mean it like that', but even then, just think about what he said; I'm sure it holds some form of truth, no matter what.

                  You can't just go around exploring things and trying to change things.

                  That's basically what they say. Part of the reason why they release bug fixes so late, I would guess, is not that they find them themselves, but that they get told to search for them and only fix what is found.


                  In open source, EVERYBODY can look and poke. Everybody can submit a patch. People want to fix things to make things better. If you want to change something radically and do the work, it usually gets accepted (assuming it all makes sense etc).

                  Then there's academic review, and teachers using open source as a teaching tool. Someone might bump into something there.

                  And just hobbyists wanting to learn things and looking around.

                  All in all, it's safe to say many eyes can't be bad.



                  • #19
                    Originally posted by EmbraceUnity View Post
                    512-bit encryption, or higher (always going to be an arms race).
                    We already have 512-bit encryption (RC4 1684-bit, RC6 2040-bit, Threefish 1024-bit, HPC 16384-bit), but nobody uses good and long keys.
                    It won't be an arms race forever: http://en.wikipedia.org/wiki/Limits_to_computation
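
                    As a rough back-of-the-envelope sketch (my own numbers, not from the linked article): even at a wildly optimistic 10^18 guesses per second, exhaustive key search stops being an arms race well before 512 bits.

                    Code:
                    /* bruteforce_time.c -- build with: gcc bruteforce_time.c -o bt -lm */
                    #include <stdio.h>
                    #include <math.h>

                    int main(void)
                    {
                        const double guesses_per_second = 1e18;   /* very generous assumption */
                        const double seconds_per_year = 3.15576e7;
                        const int key_bits[] = { 128, 256, 512 };

                        for (unsigned i = 0; i < sizeof key_bits / sizeof key_bits[0]; ++i) {
                            double keyspace = pow(2.0, key_bits[i]);
                            /* On average, half the keyspace must be searched. */
                            double years = keyspace / 2.0 / guesses_per_second / seconds_per_year;
                            printf("%3d-bit key: ~%.1e years on average\n", key_bits[i], years);
                        }
                        return 0;
                    }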



                    • #20
                      Originally posted by LightBit View Post
                      We already have 512-bit encryption (RC4 1684-bit, RC6 2040-bit, Threefish 1024-bit, HPC 16384-bit), but nobody uses good and long keys.
                      It won't be an arms race forever: http://en.wikipedia.org/wiki/Limits_to_computation
                      That, and the fact that the universe is finite. Once we reach maximum entropy, no more computations.

