Linux 5.13 Lands Support For Randomizing Stack Offsets Per Syscall

  • #11
    Originally posted by marios View Post
    The patch is not about a specific vulnerability. It is a generic "if there is a vulnerability, an attacker might have to work harder to exploit it".
    "But still doable" is a very open-ended thing.

    An attacker "having to work harder" against a known flaw can reach the point where, even trying for a complete lifetime, they might get only one chance for it to work.

    There are documented examples where PaX RANDKSTACK, the predecessor to this current work, made particular attacks so impractical that they were not worth attempting against a Linux kernel running it. That does not make an attack strictly impossible, but it can reduce the odds so far that an exploit which succeeded 100 percent of the time without the randomisation is unlikely to work even once in a lifetime with it.

    So yes, the attacker "might have to work harder". The thing you did not consider is how much harder. The randomisation can make an unknown exploit impossible for all practical purposes while remaining only technically possible.

    Remember this is combined with other kernel features, like KFENCE, that can make errors on the stack fatal as well.

    Randomising stack offsets does not make stack-dependent exploits a little harder; it makes them massively harder, to the point that the failure rate (the number of attempts needed before one succeeds) renders the majority of this class of attack impractical on systems with the feature turned on.

    And notice the key property of this protection: the stack offset is randomised per syscall. Your prior attempts therefore provide no useful data for working around the randomisation. With KFENCE also enabled, a stack attack derailed by the randomised offset can be fatal to the system, kicking you all the way back out.

    An attacker does not have unlimited resources or time to probe stack weaknesses while avoiding detection, and brute-forcing the randomised offsets makes their presence much more noticeable.
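
    As a rough sketch of the mechanism (loosely modeled on the kernel's randomize_kstack code in include/linux/randomize_kstack.h, but simplified into a standalone userspace toy; the identifiers and entropy source here are illustrative, not the kernel's exact code):

    ```c
    /*
     * Minimal sketch of per-syscall stack offset randomization. The real
     * kernel keeps the offset per-CPU and feeds it cheap hardware entropy.
     */
    #include <alloca.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static uint32_t kstack_offset; /* per-CPU in the real kernel */

    /*
     * Must be a macro: the alloca() has to land in the dispatcher's own
     * stack frame so the handler called afterwards runs at the shifted depth.
     */
    #define add_random_kstack_offset() do {                             \
            uint32_t off = kstack_offset & 0x3FF; /* cap at ~10 bits */ \
            volatile uint8_t *ptr = alloca(off + 1);                    \
            ptr[0] = 0; /* keep the allocation from being elided */     \
    } while (0)

    /* Refreshed at syscall exit, so observing the current call leaks
     * nothing about the offset the next call will use. */
    static void choose_random_kstack_offset(uint32_t entropy)
    {
            kstack_offset ^= entropy;
    }

    static void handler(void)
    {
            int local;
            printf("local variable at %p\n", (void *)&local);
    }

    static void syscall_entry(void)
    {
            add_random_kstack_offset();
            handler(); /* every "syscall" sees different stack addresses */
            choose_random_kstack_offset((uint32_t)rand());
    }

    int main(void)
    {
            srand(1);
            for (int i = 0; i < 4; i++)
                    syscall_entry();
            return 0;
    }
    ```

    Run it and the printed address moves around on every call, which is exactly the property that stops one attempt from teaching the attacker anything about the next.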

    marios, in your ideal world you fix the security flaws. But stack randomisation can render a whole class of attacks totally impractical even if you never learn the exact flaws. It is the 99.9999999999999% uptime equivalent of a security fix; yes, the fifteen nines are based on the success probabilities PaX RANDKSTACK demonstrated, and there is no reason to believe this change will be any different: roughly a 0.0000000000001% chance of an attacker's exploit in this class working. This is not a "work harder and you will get there" level of difficulty change. Going from 100% always works to 0.0000000000001% is a massive change. It is not a 100 percent fix of the security fault, but at odds that low the flaw is basically not usable by attackers.
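
    Whatever the exact odds, the shape of the argument can be sketched with simple arithmetic (the ~6 bits of effective entropy and the attempt counts below are illustrative assumptions, not measurements of this patch):

    ```c
    /* Rough odds for brute-forcing a randomized stack offset. Assumes
     * ~6 usable bits of entropy (alignment eats some of the raw bits)
     * and independent attempts. In practice each miss risks a crash
     * and detection, so an attacker rarely gets many tries at all. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
            const double p = 1.0 / 64.0; /* 2^-6 success per attempt */
            for (long k = 1; k <= 1000; k *= 10) {
                    double win = 1.0 - pow(1.0 - p, (double)k);
                    printf("%5ld attempt(s): %.3f chance of success, "
                           "~%.0f expected noisy misses\n",
                           k, win, (double)k * (1.0 - p));
            }
            return 0;
    }
    ```

    The point is not the exact figures but the shape: every extra attempt the randomisation forces is another loud, likely-fatal miss, which is where the practical impossibility comes from.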

    The hard reality here is that you could have a massively flawed OS on paper, but if the chance of a successful exploit is low enough, it becomes basically impossible for an attacker to exploit in their lifetime even if you never fix the flaws.

    A lot of OS hardening theory is not about making an OS provably secure, but about making the chance of exploits working so low that they are basically useless to attackers.

  • #12
    Originally posted by marios View Post
    Taking a 1% performance hit in order to make it harder (but still doable) for an attacker to exploit a vulnerability (that already exists)... Looks like beyond stupid to me...

    Virtual memory also costs performance; we should go back to the days of DOS and the first Macs.

  • #13
    Originally posted by jacob View Post
    Memory protection, access control, firewalling etc. all have a much higher performance impact.

    Originally posted by dibal View Post
    Virtual memory also costs performance; we should go back to the days of DOS and the first Macs.

    The above examples guarantee something. Virtual memory, for example, guarantees that a user-space program cannot harm the OS or another user-space program. The hardening we are talking about guarantees nothing; it just reduces the chances of something bad happening.

    PS. I didn't expect to get so many "1% is not that bad" responses. I think 1% is too optimistic, but rather than spend time saying "it can't be that low", I assumed it is true. Since benchmarks will follow, a better estimate of the performance hit will be available.

  • #14
    Originally posted by Uncle H.
    You're out of your depth. Do you even code?

    There is no point in answering that. If you can't get the point of the patches and my comments, you won't understand anything...

  • #15
    Originally posted by marios View Post
    The above examples guarantee something. Virtual memory, for example, guarantees that a user-space program cannot harm the OS or another user-space program. The hardening we are talking about guarantees nothing; it just reduces the chances of something bad happening.

    No, virtual memory does not guarantee that a user-space program cannot harm the OS. Virtual memory exists under DOS.

    Memory page protections and ring levels are what ensure a user-space program cannot harm the OS. Lightweight secure embedded CPUs don't have an MMU; they have what is called a memory protection unit (MPU) instead, which has no means to do virtual memory but still provides memory page protection and ring levels.

    Basically, from a security point of view, virtual memory technically guarantees nothing, unless of course you count the possibility of address space randomisation, which is in the same class as this stack randomisation.

    What virtual memory provides is the means to run applications larger than the RAM you have, and the ability to randomise where applications sit in memory using the various forms of address randomisation, if you wish.

    Basically, marios, you did not get what dibal said at all. Virtual memory has overhead and really offers no 100 percent security advantage by itself. Yes, it is common to think virtual memory and memory protection are the same thing, because on some MMUs you cannot turn on the page protections without enabling virtual memory as well. In reality, from a security point of view, virtual memory and memory protections are two different things.
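
    To make that distinction concrete, here is a small POSIX sketch: the virtual mapping is identical before and after, and it is purely the page-protection bit flipped by mprotect() that stops the second write (the SIGSEGV handler is only there to report the fault cleanly):

    ```c
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void on_segv(int sig)
    {
            (void)sig;
            const char msg[] = "write blocked by page protection\n";
            write(STDOUT_FILENO, msg, sizeof(msg) - 1);
            _exit(0);
    }

    int main(void)
    {
            signal(SIGSEGV, on_segv);
            size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);
            char *p = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED)
                    return 1;
            strcpy(p, "hello");              /* fine: page is writable */
            mprotect(p, pagesz, PROT_READ);  /* drop the write permission */
            p[0] = 'H';                      /* faults: the protection bit,
                                                not translation, blocks it */
            puts("never reached");
            return 0;
    }
    ```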

  • #16
    Originally posted by Uncle H.
    You're out of your depth. Do you even code?

    You should ask yourself that question, since he's completely right.

  • #17
    Originally posted by dibal View Post
    Virtual memory also costs performance; we should go back to the days of DOS and the first Macs.

    Virtual memory is a useful feature.

    This isn't. It doesn't provide any usability benefit. It is not a security patch or fix either; it is security through obscurity.

  • #18
    Originally posted by Weasel View Post
    This isn't. It doesn't provide any usability benefit. It is not a security patch or fix either; it is security through obscurity.

    The "no usability benefits" part is not exactly true. The feature is in the same camp as fuzz testing: by randomising the syscall stacks you make errors more likely to show themselves to developers.

    Security through obscurity is only part of this. Think about some of the early bugs that address space layout randomisation caused to become detectable by developers.

    This kind of change makes things quite a bit harder for attackers, but it also has the ability to make hidden flaws more visible. An alteration that makes faulty code error-prone makes attackers' lives harder and also helps expose particular faults to developers. Does this mean the feature is suitable for everyone's computer? Most likely not. But there are cases where this feature, or randomising things in general, is useful. Yes, it is double-sided: security by obscurity on one side, and a form of fuzz testing that finds particular defects in code on the other.
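
    As a toy illustration of that double edge (deliberately buggy, and note that reading an uninitialized variable is undefined behaviour, so build without optimization to observe it):

    ```c
    /* A latent uninitialized-variable bug that behaves consistently on a
     * fixed stack layout but starts flip-flopping once frame addresses
     * move around, which is exactly what surfaces it to a developer. */
    #include <alloca.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int buggy(void)
    {
            int flag; /* BUG: never initialized, reads a stale stack slot */
            return flag ? -1 : 0;
    }

    int main(void)
    {
            srand(2);
            for (int i = 0; i < 8; i++) {
                    /* emulate a randomized stack offset before each call */
                    volatile char *pad = alloca((rand() & 0x3FF) + 1);
                    pad[0] = (char)i; /* keep the allocation alive */
                    printf("call %d -> %d\n", i, buggy());
            }
            return 0;
    }
    ```

    On a fixed stack the bug could hide forever behind whatever stale value happened to sit in that slot; with the offsets moving, it gets a chance to misfire where a developer can see it.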

  • #19
    Originally posted by Uncle H.
    Same goes for you. I can see a couple of rank amateurs from a mile away, regardless of whether or not he was (superficially) right.

    You must have a really big mirror then, since you can see yourself from so far away.

    Not only were you wrong, you still continue to embarrass yourself. Go back to Node.js, that's where kid developers play, and let the low-level stuff be dealt with by actual C programmers.

  • #20
    Originally posted by Weasel View Post
    You must have a really big mirror then, since you can see yourself from so far away.

    Not only were you wrong, you still continue to embarrass yourself. Go back to Node.js, that's where kid developers play, and let the low-level stuff be dealt with by actual C programmers.

    No, you need to go look in the mirror and read the first comment.
    https://www.phoronix.com/forums/foru...53#post1253853

    The reality is that you can look at the PaX implementation of the same feature, done slightly differently, and see how much harder it made real exploits of its time. Yes, PaX/grsecurity at times made particular exploits, ones that did turn up as CVE issues, insanely hard to pull off, with the result that exploits which worked against a non-PaX/grsecurity system back then were not viable against PaX/grsecurity systems.

    Given that we are talking about basically the same technology, the end result should be expected to be the same.

    Security by obscurity does have its functional place. Think about it: by your definition, a combination lock on a safe is really security by obscurity too. The combination lock's objective is to slow the break-in down, not to provide absolute protection. You could also call limiting the number of logins per minute security by obscurity by the definition you are using, Weasel.

    Randomisation to create obscurity is the combination-lock class of security in the real world.

    Security by obscurity comes in many different levels. This feature is not the hide-the-door-key-in-the-garden level, where a single observation solves the whole problem; that level of security by obscurity is generally pointless.

    Remember, reading that first post, he states that a 1% performance trade-off for this modification is not worth it, when in reality, given how much harder it can make pulling off a particular class of exploits, that 1% can absolutely be worth it to some users.
