X.Org Server Hit By New Local Privilege Escalation Vulnerability


  • #71
    Originally posted by pong View Post

    That depends on the context.
    How does one define "new code"? Something one just hacked and never did any static code analysis, code review, testing, etc. on? Well of course that new code is buggy.
    You can define it by the commit date. I recall seeing a table that showed how old Linux security vulnerabilities were, based on the first commit that introduced each one and the commit that fixed it. It showed that most vulnerabilities were only a few years old when fixed.
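
    As a rough illustration of how such a table is computed, here is a minimal sketch using made-up dates; real data would come from mapping each CVE to its introducing and fixing commits:

        from datetime import date
        from statistics import median

        # Hypothetical (introducing-commit date, fixing-commit date) pairs;
        # the values here are invented for the example.
        flaws = [
            (date(2017, 3, 1), date(2019, 6, 12)),
            (date(2020, 8, 15), date(2022, 1, 3)),
            (date(2006, 5, 20), date(2021, 3, 7)),  # rare long-lived outlier
        ]

        # Age of each flaw is the delta between the two commit dates.
        ages = [(fixed - introduced).days / 365.25 for introduced, fixed in flaws]
        print(f"median age at fix: {median(ages):.1f} years")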

    Originally posted by pong View Post
    How high is the barrier of validation and verification for accepting proposed new code into the production tree? If it is pretty high NOW, versus pretty low and unstructured BEFORE ("ah, it compiles and seemed to work the one time I ran it"), then yeah, I'd think a new code base REQUIRING strict static code analysis, warning checks, unit tests, etc. could have a LOT better ratio of bugs per line of code than some legacy code base that had none.

    Also, if the new code base adhered to principles like design by contract, orthogonality, encapsulation, and pre/postcondition checks for range and validity, then it's quite likely one can write code that correctly "does what it guarantees" on the first commit after test and review. Absent that, the first invalid input may make the whole program's output undefined behavior from then on.

    Type safety, memory safety, ownership safety, etc. are obviously just tools in the toolbox that make it harder to even WRITE and BUILD incorrect code. The ultimate ideal is being able to prove a piece of code's correctness and its compliance with its specification / contract, at which point you're probably as likely to hit processor bugs as compiler or code bugs, since one then has good rationale to believe things will work as intended, barring some subtle problem not obvious in the code itself.
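
    To make the contract style described above concrete, here is a minimal sketch; the function and its invariants are hypothetical, not taken from any project in this thread:

        def allocate_blocks(requested: int, free_blocks: int) -> int:
            # Precondition: the caller must ask for a positive amount
            # that does not exceed what is available.
            assert requested > 0, "requested must be positive"
            assert requested <= free_blocks, "cannot allocate more than is free"

            remaining = free_blocks - requested

            # Postcondition: the remainder is never negative and exactly
            # `requested` blocks were consumed.
            assert remaining >= 0
            assert free_blocks - remaining == requested
            return remaining

    Checks like these turn the first invalid input into an immediate, diagnosable failure instead of silent undefined behavior downstream.
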
    Doing development properly certainly helps, but there will still be bugs that slip past all of your QA measures and that users will hit. The general trend of older code being less bug prone is going to hold, especially once the code has accumulated years of testing and usage that catch rarely occurring bugs.

    I have been doing plenty of QA work in OpenZFS lately (more than 100 commits in this area since September), and the buggiest code is the code added relatively recently to introduce new features, despite our QA efforts intended to catch bugs before code is committed. This holds across projects: a project full of new code is bound to have plenty of bugs.
    Last edited by ryao; 07 February 2023, 11:02 AM.



    • #72
      Originally posted by ryao View Post

      I do not recall asking a question.
      Sorry, I meant your original comment.
      I've fixed the comment.



      • #73
        Originally posted by Sevard View Post
        Well, no. If you claim that rewrites have more bugs than "mature code" then you need to consider all examples. Not only ones which prove your point. Otherwise it's just selecting data that prove your point and ignoring all data that say otherwise. I'd call this a lie.
        Asking for a list of rewrites that went well would be absurdly subjective and is not real data. There is plenty of software that people use that is full of bugs. The fact that rewrites will be more buggy does not necessarily mean that a rewrite will not go well, as long as the bugs do not really matter very much.

        The one doing cherry-picking here would be you. I have not only seen data on this from Linux (which is a big enough corpus to show general trends), but also experienced this reality firsthand as a developer. You do not like my description of reality, so you look for an excuse to call it a lie. However, people have been trying to write bug-free software for decades, and they have consistently failed outside of a few formally verified cases. Software written without formal methods is full of bugs. That is a fact.
        Last edited by ryao; 07 February 2023, 10:59 AM.



        • #74
          Originally posted by ryao View Post
          Asking for a list of rewrites that went well would be absurdly subjective and is not real data.
          The same goes for asking on Stack Overflow for rewrites that went terribly wrong...

          Originally posted by ryao View Post
          However, people have been trying to write bug-free software for decades, and they have consistently failed outside of a few formally verified cases. Software written without formal methods is full of bugs. That is a fact.
          Thanks but I already know this.

          Originally posted by ryao View Post
          The one doing cherry-picking here would be you. I have not only seen data on this, but also experienced this reality firsthand as a developer. You do not like my description of reality, so you look for an excuse to call it a lie.


          Not every software project is like a filesystem, which is immensely complex and takes a lot of time to create and test before it's rock stable and doesn't eat your data.

          Another reason is that it's hard to test every corner case, as there are countless hardware combinations that could go wrong, and the firmware and the hardware itself can misbehave.

          For projects like filesystems, sure, a rewrite is a bad idea; it's better to come up with a new filesystem with a much better design and more features instead of rewriting an existing one.

          But other projects, like bash, fish, coreutils, etc., can certainly be rewritten without introducing new bugs: a better design, well-maintained libraries, and comprehensive testing can even reduce bugs compared to the old software.
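
          One form the comprehensive testing mentioned above can take is differential testing: run the rewrite and the original side by side on random inputs and flag any disagreement. A minimal sketch, where both functions are hypothetical stand-ins:

              import random
              import string

              def original_tolower(s: str) -> str:
                  # Stand-in for the behavior of the existing tool.
                  return s.lower()

              def rewritten_tolower(s: str) -> str:
                  # Stand-in for the rewrite under test.
                  return "".join(chr(ord(c) + 32) if "A" <= c <= "Z" else c for c in s)

              # Any input where the two disagree is a bug in the rewrite
              # (or a deliberate, documented behavior change).
              for _ in range(10_000):
                  s = "".join(random.choices(string.printable, k=20))
                  assert rewritten_tolower(s) == original_tolower(s), f"mismatch on {s!r}"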



          • #75
            Originally posted by NobodyXu View Post

            The same goes for asking on Stack Overflow for rewrites that went terribly wrong...
            I never suggested such a thing. I suggested posting there to ask for the statistical data I have seen, but cannot link, on old code being less bug prone than new code. Specifically, the data on Linux kernel security bugs by age would be a good thing to ask for. That data is eye opening, and I really suggest you ask on Stack Overflow for a link to it so you can see it for yourself.

            Originally posted by NobodyXu View Post
            But other projects, like bash, fish, coreutils, etc., can certainly be rewritten without introducing new bugs: a better design, well-maintained libraries, and comprehensive testing can even reduce bugs compared to the old software.
            Rewriting those utilities from scratch is a recipe for bugs. Try having your rewritten utilities rebuild a Gentoo install that has a few thousand installed packages; that would shake out plenty of bugs. However, it would take years to reach the same maturity as the existing implementations. The paper that introduced fuzzing did so by quantitatively measuring the reliability of those tools, and it took years before the tools were all fixed to be reliable.
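
            For reference, the core of that fuzzing approach fits in a few lines; a minimal sketch (the target command here is just an example):

                import random
                import subprocess

                # Feed random bytes to a utility and record inputs that crash it,
                # in the spirit of the original fuzzing study of Unix tools.
                for trial in range(100):
                    data = bytes(random.randrange(256) for _ in range(1024))
                    proc = subprocess.run(["sort"], input=data, capture_output=True)
                    if proc.returncode < 0:  # negative return code: killed by a signal
                        print(f"trial {trial}: crashed with signal {-proc.returncode}")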

            As for fish, I am under the impression that its Rust version was not a rewrite from scratch, but rather a translation of the existing code, which is more likely to be free of bugs, since plenty of knowledge transfers directly to the new implementation by virtue of it being a translation.
            Last edited by ryao; 07 February 2023, 11:18 AM.



            • #76
              Originally posted by NobodyXu View Post

              I think the best way to prevent malware is to cut the internet connection.
              So you keep the ZFS backup system entirely offline and only connect to it / turn it on when needed.
              Also, since most malware now comes via the browser, as you have mentioned, it's already significantly more secure without a web browser.
              Guess what I'm doing
              Boots up, makes backup, does snapshot rotation, powers down.

              You could also use Qubes OS https://www.qubes-os.org/ which is probably as secure as you can get, since it runs each application in its own Xen VM.
              It also runs the network stack, storage stack, and Bluetooth each in its own VM, so I believe it's the most secure option short of using a dedicated computer for web browsing and another for other activities.
              That's a bit overkill for my needs, but I'll add it to my list. Appreciated. Thanks!



              • #77
                Originally posted by ryao View Post
                Asking for a list of rewrites that went well would be absurdly subjective and is not real data. There is plenty of software that people use that is full of bugs. The fact that rewrites will be more buggy does not necessarily mean that a rewrite will not go well, as long as the bugs do not really matter very much.
                You cannot prove in any way that any software is bug-free or less buggy; that's just an assumption. We still discover bugs that were introduced several years ago. Also, some bugs introduced back then may be harmless now, but new code can create a way to exploit them. So no – we cannot look only at the cases that went terribly bad.
                The one doing cherry-picking here would be you. I have not only seen data on this from Linux (which is a big enough corpus to show general trends), but also experienced this reality firsthand as a developer. You do not like my description of reality, so you look for an excuse to call it a lie. However, people have been trying to write bug-free software for decades, and they have consistently failed outside of a few formally verified cases. Software written without formal methods is full of bugs. That is a fact.
                Nobody says that new code is bug-free. Bugs are everywhere, in new as well as in old code. Rewrites aren't more or less buggy by definition. It really depends.



                • #78
                  Originally posted by ryao View Post
                  I never suggested such a thing. I suggested posting there to ask for the statistical data I have seen, but cannot link, on old code being less bug prone than new code. Specifically, the data on Linux kernel security bugs by age would be a good thing to ask for. That data is eye opening, and I really suggest you ask on Stack Overflow for a link to it so you can see it for yourself.
                  There are significantly more CVEs in Linux because it keeps getting bigger and more complex.

                  4.x added Linux namespaces and cgroup v1 and v2.
                  5.x added io_uring, pidfd, and the new clone3 syscall.
                  6.x added more io_uring operations, more drivers, some refactoring of the memory subsystem, etc.

                  It's hard to attribute these CVEs to refactoring and rewriting when so many new features and new drivers are being added.



                  • #79
                    Originally posted by Sevard View Post
                    Well, no. If you claim that rewrites have more bugs than "mature code" then you need to consider all examples. Not only ones which prove your point. Otherwise it's just selecting data that prove your point and ignoring all data that say otherwise. I'd call this a lie.
                    [edit]
                    And there are security flaws that are much older than a few years.
                    E.g.:
                    https://nvd.nist.gov/vuln/detail/CVE-2021-27365 – in the kernel, discovered after ~15 years.
                    https://nvd.nist.gov/vuln/detail/CVE-2021-4034 – in polkit, discovered after ~12 years.
                    And there are many more such bugs, and probably many more still waiting to be discovered.
                    I mentioned the existence of ancient flaws in an earlier comment. The reality is that they are an extreme minority, and their scarcity reflects the fact that mature code is less buggy than new code.



                    • #80
                      Originally posted by NobodyXu View Post

                      There are significantly more CVEs in Linux because it keeps getting bigger and more complex.

                      4.x added Linux namespaces and cgroup v1 and v2.
                      5.x added io_uring, pidfd, and the new clone3 syscall.
                      6.x added more io_uring operations, more drivers, some refactoring of the memory subsystem, etc.

                      It's hard to attribute these CVEs to refactoring and rewriting when so many new features and new drivers are being added.
                      You can identify the first commit in Linux that introduced a CVE and the commit that fixed it; the delta between their dates is the age of the flaw. When you say these CVEs exist because Linux keeps getting bigger and more complex, you are conceding that new code is more buggy than old code: the patches adding things introduce the vast majority of these new bugs. Bad fixes can cause bugs too, but those are a minority, and those bugs would still originate in a new change.
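
                      A rough sketch of that measurement, assuming a local kernel clone and a fix commit that carries a "Fixes:" trailer (the repository path and revision below are placeholders):

                          import re
                          import subprocess

                          def commit_timestamp(repo: str, rev: str) -> int:
                              # Unix timestamp of the commit's author date.
                              out = subprocess.run(["git", "-C", repo, "log", "-1", "--format=%at", rev],
                                                   capture_output=True, text=True, check=True)
                              return int(out.stdout.strip())

                          def flaw_age_years(repo: str, fix_rev: str) -> float:
                              # Kernel fix commits usually name the offending commit in a
                              # "Fixes: <sha> (...)" trailer; parse it out of the message.
                              msg = subprocess.run(["git", "-C", repo, "log", "-1", "--format=%B", fix_rev],
                                                   capture_output=True, text=True, check=True).stdout
                              m = re.search(r"^Fixes:\s+([0-9a-f]{7,40})", msg, re.MULTILINE)
                              if m is None:
                                  raise ValueError("fix commit has no Fixes: trailer")
                              delta = commit_timestamp(repo, fix_rev) - commit_timestamp(repo, m.group(1))
                              return delta / (365.25 * 86400)

                          # Example (placeholder repository path and revision):
                          # print(flaw_age_years("/path/to/linux", "<fix-commit-sha>"))
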
                      Last edited by ryao; 07 February 2023, 11:26 AM.

