X.Org Server Hit By New Local Privilege Escalation Vulnerability

  • #31
    Originally posted by ryao View Post

    New code tends to be less secure than mature code, so replacing it is a recipe for more security issues.
    You gain some, you lose some.



    • #32
      Originally posted by ryao View Post

      The idea that new code has more bugs than mature code is well known. While I have seen charts showing fewer bugs found in old code versus bugs found in new code, I do not have any links on hand to provide. Just ask various experienced developers and you will hear the same from many more people than just me.

      That said, any project to write a replacement for a mature codebase from scratch will have more bugs than its mature predecessor until it matures itself. That is a fact of life.
      40 years ago, developers had to make sure everything worked properly before shipping, since getting anything fixed afterwards was a major undertaking.

      These days developers roll out half-baked software all the time and rely on pushing fixes over the internet afterwards.



      • #33
        Originally posted by jacob View Post

        They shouldn't rewrite it in anything, they should let it die.
        It would be irresponsible not to fix something many people use. People aren't simply going to stop using X11 because it is insecure. Hardly anyone on the planet has time to care about software security. Just look at how entire countries are running unpatched old copies of Windows 10.
        People don't simply interrupt their workflows to adopt more secure technologies. If it weren't for older computers having obsolete hardware or simply getting damaged, half the planet would still be using Windows XP today.
        Last edited by ClosedSource; 07 February 2023, 05:28 AM.



        • #34
          Originally posted by TemplarGR View Post

          No, this is not actually a "law" or "rule". Yes, code needs review, testing, and bug fixing, but it is not a law of nature that old code has fewer bugs than new code. If a new project written from scratch uses better practices, is made by better people, is better organized, and uses better tools, languages, and hardware, it can have fewer bugs even when brand new. You can't know these things for sure. And old code, no matter how mature, isn't polished just because bugs aren't being reported. X11 is full of holes that sat inside the code for ages; they just got discovered (or disclosed) now. The same goes for hardware bugs like Meltdown: they existed for ages until someone noticed them and disclosed them to the public. Before that, everyone thought those old "mature" CPUs were free of such bugs.
          Anyone who writes code will make mistakes in whatever they write. In fact, they keep making the same mistakes. You do not need to hear that from me. John Carmack has a write-up where he explicitly states that developers keep making the same mistakes:

          This is a mirror of a post from John Carmack. Recently I learned that his articles on #AltDevBlog are no longer accessible. So, in order to archive them, I am re-posting them here. These articles are definitely good reads and worth preserving. The most important thing I have…



          • #35
            Originally posted by Berniyh View Post
            I'm sure that's true in many cases, but as a general rule, I don't think you can claim that. It really depends on a lot of circumstances.
            e.g. you might be able to avoid a large number of bugs by starting with an improved design. You might do more unit testing. Or use a language that reduces the number of bugs by design.

            Also, you might fall into the trap of assuming there are fewer bugs because nobody is looking for them.
            Take KDE as an example. KDE 3.5 is pretty mature, right? Even at the time, it was considered relatively low on bugs and quite stable. So now that it has "matured" as Trinity, it should be very low on bugs?
            Well, wrong. A couple of years ago a KDE dev looked into known bugs of KDE Plasma and KDE applications and checked whether Trinity was affected, and as it turned out, it was, numerous times. There just wasn't anybody looking for these bugs, since nobody, apart from a few stubborn people, is using the thing.

            With X11, this is only partly the case. Of course there are many, many users out there running X11. But on the other hand, there aren't really that many developers working on it, so there are fewer people studying the code.
            It is a general rule. A project that avoids this would be abnormal. Such a unicorn likely does not exist unless formal verification is involved, but that would be very much abnormal, since production software does not use formal verification.

            If you were to do a breakdown of security fixes to Linux by the age of the vulnerability, you would find that after the first few years, the number of vulnerabilities found drops precipitously. Of course, you often hear about ancient bugs being found in Linux in the news, but the reality is that there are very few of these, because security issues in mature code are rare. I have seen a chart showing this and it is unfortunate that I cannot find a link.

            For every issue found in mature code, there will be many more bugs found in new code.
            Last edited by ryao; 07 February 2023, 05:37 AM.



            • #36
              Originally posted by ryao View Post

              New code tends to be less secure than mature code, so replacing it is a recipe for more security issues.
              Well, if we are talking about these kinds of security vulnerabilities and rewriting it in Rust specifically, such a vulnerability wouldn't be possible in Rust (Rust refuses to compile code containing use-after-free errors).
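              A minimal sketch of what that looks like in practice (the `Buffer` type and the names below are illustrative, not taken from X.org or any real codebase):

```rust
// Illustrative sketch: Rust's ownership rules turn a use-after-free
// into a compile error. `Buffer` stands in for any heap-owned resource.
struct Buffer {
    data: Vec<u8>,
}

// Taking `Buffer` by value moves ownership into this function;
// the buffer is freed when `buf` goes out of scope at its end.
fn consume(buf: Buffer) -> usize {
    buf.data.len()
}

fn main() {
    let buf = Buffer { data: vec![0u8; 16] };
    let len = consume(buf); // ownership moves here; `buf` is gone afterwards
    // buf.data[0] = 1; // uncommenting this line is the use-after-free:
    //                  // rustc rejects it with "borrow of moved value: `buf`"
    println!("buffer held {} bytes before being freed", len);
}
```

              The commented-out line is exactly the class of bug behind this CVE; in C it compiles and crashes (or worse) at runtime, while rustc refuses to build it at all.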

              That being said, I don't think any sane person would think it's wise to rewrite X.org in Rust, but that's for other reasons.



              Originally posted by ryao View Post

              The idea that new code has more bugs than mature code is well known. While I have seen charts showing fewer bugs found in old code versus bugs found in new code, I do not have any links on hand to provide. Just ask various experienced developers and you will hear the same from many more people than just me.

              That said, any project to write a replacement for a mature codebase from scratch will have more bugs than its mature predecessor until it matures itself. That is a fact of life.
              While this is true, as with any study or analysis, the hypotheses and requirements need to be taken into account. Most of this analysis is not about rewriting the software in a safer language (i.e. Rust); instead it's about rewriting it in the same typically popular but unsafe languages that most people are used to. The point here is that Rust is unique in this regard: it's one of the only mainstream languages that actually prevents an entire class of security issues.

              And there have been studies on this, and they show that Rust significantly reduces these types of errors. This is why even for non-trivial software (e.g. Firefox), people have been rewriting parts of it exclusively in Rust.



              Originally posted by ryao View Post

              It is a general rule. A project that avoids this would be abnormal. Such a unicorn likely does not exist unless formal verification is involved, but that would be very much abnormal, since production software does not use formal verification.
              This is not entirely true. Type systems, which are how Rust implements its compile-time checks for memory access (specifically, an affine type system), have been formally verified to be correct, and type systems correspond to mathematical proofs (the Curry-Howard correspondence).

              Now of course that doesn't mean that Rust can prove everything; that's not even possible (see Gödel's incompleteness theorems). But what it does mean is that you can prove that certain parts of the program will uphold a certain property. So if we are talking about use-after-free errors, which are a subset of memory-management errors, then Rust can absolutely prove, assuming you don't use unsafe, that these specific errors won't happen.
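              As a small illustration of "proving a property" (the function below is my own example, not from any real project): the signature alone obliges the compiler to check that the returned reference can never outlive the slice it borrows from, so no caller can be compiled that would dangle it.

```rust
// Illustrative sketch: the borrow checker statically proves that a
// reference never outlives the value it points into (in safe code).
fn first_byte(v: &[u8]) -> Option<&u8> {
    // The elided lifetime ties the returned reference to `v`,
    // so the caller cannot free `v` while the reference is alive.
    v.first()
}

fn main() {
    let v = vec![1u8, 2, 3];
    let b = first_byte(&v);
    assert_eq!(b, Some(&1u8));
    // drop(v); // uncommenting this fails to compile: `v` is still
    //          // borrowed by `b`, so freeing it here would dangle.
    println!("{:?}", b);
}
```

              This is the proof in miniature: the property "no dangling reference" holds for every program the compiler accepts, not just for the inputs you happened to test.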

              And studies have shown that such memory-safety errors account for roughly 70% of the security issues found in modern software, especially software written in C/C++ (of which X.Org is a typical example).

              Last edited by mdedetrich; 07 February 2023, 05:55 AM.



              • #37
                CVE-2023-0494 entails local privilege elevation on systems where the X.Org Server runs with elevated privileges, and remote code execution for SSH X forwarding sessions. Thankfully, in many modern environments the X.Org Server no longer runs as root, but older systems and select other configurations unfortunately still run it in such a vulnerable configuration.
                So basically it is still probably safer to run than your average web browser...

                For those who have SSH X forwarding enabled... well, there isn't much of an alternative anyway. They have VNC or PipeWire. Neither is the same.
                Last edited by kpedersen; 07 February 2023, 05:44 AM.



                • #38
                  Originally posted by WannaBeOCer View Post
                  Didn't Wayland have a similar vulnerability last year?

                  https://nvd.nist.gov/vuln/detail/CVE...#range-8384822

                  I agree X.Org should be replaced, but not until Wayland reaches feature parity with X.Org. Last I recall, color management was only introduced about 3-4 months ago with Weston 11.0, and it still has a long way to go before it becomes stable enough for production use by content creators or researchers.

                  Wayland seems like a good choice for gamers, but gamers are a small portion of Linux desktop users. Programs that require a functional display server will just keep adding warnings. If anything, lazy admins will probably move their users to Windows/macOS if they are forced to change to a display server lacking features.

                  https://github.com/Psychtoolbox-3/Ps...box-3/pull/765
                  Wayland will never reach feature parity with X11 simply because that's not its goal, just as Mac OS X didn't have feature parity with classic Mac OS, and X itself didn't have feature parity with NeWS. Wayland is designed for the workflows and use cases that those who develop it deem sensible, but its developers never claimed that the entire legacy X-based ecosystem would be supported, or even that it would be possible to reproduce it.



                  • #39
                    Originally posted by ClosedSource View Post
                    It would be irresponsible not to fix something many people use. People aren't simply going to stop using X11 because it is insecure.
                    It's kind of their problem. The truth is that nobody will spend time and resources on a project that nobody uses, and users and devs are switching to Wayland. You cannot force X11 devs to fix all the issues. But since it's OSS, you can fix them on your own.
                    Hardly anyone on the planet has time to care about software security. Just look at how entire countries are running unpatched old copies of Windows 10.
                    People don't simply interrupt their workflows to adopt more secure technologies. If it weren't for older computers having obsolete hardware or simply getting damaged, half the planet would still be using Windows XP today.
                    Yeah, this is the issue. But it doesn't really matter. If people use old, insecure software, they wouldn't update X11 anyway. So in this case it doesn't matter whether somebody fixes the remaining issues or not. And some of X11's issues are really hard (impossible?) to fix.



                    • #40
                      Originally posted by ryao View Post

                      Anyone who writes code will make mistakes in whatever they write. In fact, they keep making the same mistakes. You do not need to hear that from me. John Carmack has a write-up where he explicitly states that developers keep making the same mistakes:

                      This is a mirror of a post from John Carmack. Recently I learned that his articles on #AltDevBlog are no longer accessible. So, in order to archive them, I am re-posting them here. These articles are definitely good reads and worth preserving. The most important thing I have…
                      Oh yeah, John Carmack, the most overrated developer in the gaming industry... He was involved in Doom 30 years ago, and somehow his word is gospel even decades after he stopped being relevant or even coding himself (he left after the disgrace that was Rage proved he had lost his touch). He still has faithful religious zealots who take his words as gospel, it seems...

                      No, good coders do not easily make mistakes. And modern tools are better for writing bug-free code than older tools. Yes, mistakes do happen, but believing that professional software engineers are chimpanzees incapable of learning from their mistakes is only something that hack Carmack and his gaming fanbois would say and defend.

                      In fact, newer code tends to be safer, more stable, and more optimized than older code in my experience. Back in the day no one was experienced in parallel programming and multithreaded software was garbage, for example. These days you can see that people have gained experience in managing more threads, and the software frameworks/libraries they use are better as well.

