The real problem is that operating systems based on Wayland are not ready yet, except Fedora. The real problem is that Linux development on the desktop is a mess. This is the aftermath.
X.Org Server Hit By New Local Privilege Escalation Vulnerability
Originally posted by ryao:
The idea that new code has more bugs than mature code is well known. While I have seen charts showing fewer bugs found in old code versus bugs found in new code, I do not have any links on hand to provide. Just ask various experienced developers and you will hear the same from many more people than just me.
That said, any project to write a replacement for a mature codebase from scratch will have more bugs than its mature predecessor until it matures itself. That is a fact of life.
How does one define "new code"? Something one just hacked together and never ran static analysis, code review, testing, etc. on? Well, of course that new code is buggy.
How high is the bar of validation/verification for accepting proposed new code into the production tree? If it is high NOW, versus low and unstructured BEFORE ("ah, it compiles and seemed to work the one time I ran it"), then yes, a new codebase REQUIRING strict static analysis, warning checks, unit tests, etc. could have a much better bugs-per-line ratio than a legacy codebase that had none.
Also, if the new codebase adhered to principles like design by contract, orthogonality, encapsulation, and range/validity verification via pre- and postconditions, then it is quite likely one can write code that is highly probable to "do what it guarantees" correctly on the first commit after test and review. Absent that, the first invalid input may put the whole program into undefined behavior from then on.
Enforcing type safety, memory safety, ownership safety, etc. are obviously just tools in the toolbox that make it harder to even WRITE and BUILD incorrect code, with the ultimate ideal being the ability to prove correctness and specification/contract compliance of a piece of code. At that point you are probably just as likely to hit processor bugs as compiler or code bugs, since one then has good rationale to believe things will work as intended, barring some subtle problem not obvious in the code itself.
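The pre-/postcondition style described above can be sketched with a small Python decorator. This is a toy illustration, not code from any project in this thread; the names `contract` and `max_abs` are invented for the example:

```python
def contract(pre=None, post=None):
    """Wrap a function with precondition and postcondition checks."""
    def decorate(fn):
        def wrapper(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition violated in {fn.__name__}"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result), f"postcondition violated in {fn.__name__}"
            return result
        return wrapper
    return decorate

@contract(pre=lambda xs: len(xs) > 0, post=lambda r: r >= 0)
def max_abs(xs):
    """Return the largest absolute value in a non-empty list."""
    return max(abs(x) for x in xs)

print(max_abs([-5, 3]))  # 5

try:
    max_abs([])  # precondition fails before the body ever runs
except AssertionError as e:
    print(e)
```

The point is that an invalid input is rejected at the boundary instead of producing undefined output downstream; languages with stronger type systems move some of these checks to compile time.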
Originally posted by NobodyXu:
My experience with programming is that while a rewrite is hard, a well-done rewrite will often simplify the code and often spot bugs in the old implementation; that is, given you have a test suite.
That happens in the Linux kernel all the time, and it often involves changing the API. Is there any proof that the old Linux kernel had significantly fewer bugs than the new one due to these rewrites?
An example is the many CVEs in the 4.x series caused by Linux namespaces being a new feature; they messed it up to the point that it is not suitable for secure sandboxing.
And then there is wasm, which is an interpreter with capability-based syscalls (the wasm component model), meaning everything is sandboxed by default and you cannot open anything unless you hold the capability.
It is so much more secure than Linux namespaces that many cloud providers have adopted it, and it can actually run multiple wasm instances from different clients without using virtual machines.
While it is not exactly a rewrite, they are similar technologies with overlapping use cases; wasm was invented after Linux namespaces and Docker and is strictly more complex than Linux namespaces, yet they got it right where the Linux kernel failed.
And statistics are only meaningful with context and details; otherwise they could be made up entirely of failed rewrite attempts.
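The capability idea mentioned above (no ambient authority; code can only touch resources it has been explicitly handed) can be sketched in plain Python. This is a toy analogy, not actual wasm or WASI; the class `DirCapability` and function `sandboxed_task` are invented for illustration:

```python
import os

class DirCapability:
    """A handle granting access only to files under one directory,
    loosely analogous to a WASI preopened directory."""

    def __init__(self, root):
        self.root = os.path.realpath(root)

    def open(self, name, mode="r"):
        path = os.path.realpath(os.path.join(self.root, name))
        # Refuse paths that escape the granted directory (e.g. via "..").
        if not path.startswith(self.root + os.sep):
            raise PermissionError(f"no capability for {name}")
        return open(path, mode)

def sandboxed_task(cap):
    # The task receives only this capability. It has no way to name
    # files outside it, so everything else is inaccessible by default,
    # rather than accessible unless explicitly blocked.
    with cap.open("data.txt") as f:
        return f.read()
```

The contrast with namespaces is the default: here access must be granted affirmatively, whereas a namespace starts from the full kernel attack surface and tries to wall parts of it off.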
Originally posted by Sevard:
Well, if you post a question to Stack Overflow about rewrites that went well, then you'll also get many examples. This proves nothing.
Actually, the personal experience of a single developer also proves nothing. I've seen some code rewrites that had more bugs and some that had fewer. I've also seen some projects with unfixable design bugs where a complete rewrite was the only way to move forward.
Asking for a list of rewrites that went well would be beside the point.
Originally posted by Weasel:
I think he was talking about this: https://www.joelonsoftware.com/2000/...ver-do-part-i/
Last edited by NobodyXu; 07 February 2023, 10:51 AM.
Originally posted by mdedetrich:
This kind of reasoning is fallacious anyway. Even in the few areas where unsafe needs to be used, everywhere else you do not have to worry about these problems, which by definition already reduces the scope for security issues massively, and that is much better than the status quo in C/C++.
Originally posted by mdedetrich:
For starters, this is all based on assumptions. You can only claim these things scientifically if you have a control, which in this case would mean keeping the language the same, and that is not what we are arguing. While it is true that recoding things has the potential to introduce new logic bugs (although Rust can even check against some of those, specifically concurrency bugs), arguing about what the ratios are is pure speculation.
Originally posted by mdedetrich:
Yeah, using Stack Overflow to answer the question "do rewrites produce more bugs" is inaccurate to the level of a facepalm.
Originally posted by binarybanana:
Most GPUs also support an overlay plane, which Xorg exposes through the Xv extension. This is mostly used for video, but it is interesting because it can be synchronized to the monitor refresh independently of the main framebuffer (like the mouse cursor).
Anyway, if Wayland works so should Xorg with the modesetting driver.
Originally posted by ryao:
Post a question asking for data showing that new code is more buggy than old code. Posting to ask for Linux kernel data showing that security bugs are only a few years old would work.
Asking for a list of rewrites that went well would be beside the point.
[edit]
And there are security flaws that are much older than a few years. E.g.:
https://nvd.nist.gov/vuln/detail/CVE-2021-27365 – in the kernel, discovered after ~15 years.
https://nvd.nist.gov/vuln/detail/CVE-2021-4034 – in polkit, discovered after ~12 years.
And there are many more such bugs, and probably many more still waiting to be discovered.
Last edited by Sevard; 07 February 2023, 10:59 AM.