X.Org Server Hit By New Local Privilege Escalation Vulnerability
Originally posted by ryao:
Rust is not 100% immune to memory issues, since it needs unsafe Rust to work in many cases, and unsafe Rust is well known to suffer from memory issues. For example:
Also, that research counts bugs found across all crates downloaded from crates.io, not in one single crate.
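To make the class of bug concrete, here is a minimal hypothetical sketch (not taken from the cited research) of how `unsafe` shifts a safety obligation onto the programmer: the function below is sound only while the caller upholds an invariant the compiler cannot check.

```rust
// An index lookup that skips bounds checks via `unsafe`.
// Sound only while the caller guarantees `i < data.len()`; if that
// invariant is ever broken, this is undefined behavior (out-of-bounds
// read) -- exactly the kind of memory issue the crate studies count.
fn fast_get(data: &[u32], i: usize) -> u32 {
    debug_assert!(i < data.len()); // checked in debug builds only
    unsafe { *data.get_unchecked(i) } // UB if i >= data.len()
}

fn main() {
    let data = [10, 20, 30];
    println!("{}", fast_get(&data, 1)); // prints 20
}
```

The point is that safe Rust would force a bounds check here; `unsafe` removes it, and with it the guarantee.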
Originally posted by ryao:
For completeness, I should add that compiler bugs can also result in memory safety issues.
Originally posted by ryao:
However, even if there really were no memory safety issues, other classes of bugs would occur in enough abundance to make replacements more buggy than the original software. Let us say that a rewrite from scratch will have 10x the bugs (which is likely a conservative estimate). Then even if you eliminate 70% via memory safety, you still have 3x the bugs. This is why you have various industry leaders that encourage Rust adoption suggesting that Rust be used only for new projects rather than calling for complete rewrites in Rust.
Where does a number like 10x come from?
What is the project we are talking about?
How do they plan the rewrite?
Without details, you could be comparing apples to oranges.
There are plenty of examples where a rewrite actually pays off, such as Google rewriting its own low-level Android drivers, or Cloudflare rewriting its nginx + Lua solution in async Rust.
Originally posted by evert_mouw:
Easy. I'll go with the vulnerabilities on my personal computer. No other users. I care more about basic features. Sure, malware could escalate to root permissions, but I care more about my /home than about /usr/bin anyway. If malware gets on my system, it already has access to my most important files. Sure, root access would mean malware could become persistent across reboots, and that is not a good thing.
It can simply install itself to $HOME/.bin, add some code to your .profile/.bashrc/.bash_profile, and then it will start the next time you use your shell.
It can also add symlinks to itself in $HOME/.bin: e.g. if it creates $HOME/.bin/ls pointing to itself, and $HOME/.bin comes before /usr/bin in your $PATH, which is usually the case, then the next time you run ls you will actually run the malware.
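The $PATH-shadowing risk described above is easy to check for. A small sketch (the function name and the choice of `/usr/bin` and `/bin` as the system directories are mine, for illustration):

```rust
use std::env;

// Returns true if a directory under $HOME appears in PATH before the
// system binary directories -- the ordering that lets a planted
// $HOME/.bin/ls shadow /usr/bin/ls.
fn user_dir_shadows_system(path: &str, home: &str) -> bool {
    let mut seen_user_dir = false;
    for dir in path.split(':') {
        if dir.starts_with(home) {
            seen_user_dir = true;
        } else if dir == "/usr/bin" || dir == "/bin" {
            // First system dir reached: shadowing occurs iff a
            // user-owned dir was listed before it.
            return seen_user_dir;
        }
    }
    false
}

fn main() {
    let path = env::var("PATH").unwrap_or_default();
    let home = env::var("HOME").unwrap_or_default();
    if !home.is_empty() && user_dir_shadows_system(&path, &home) {
        println!("warning: a $HOME directory precedes /usr/bin in PATH");
    }
}
```

This only detects the ordering, of course; it says nothing about whether the user directory actually contains anything malicious.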
Even if the X server is not run with root permissions, it's still possible for malware to run as root, given that a lot of people enable passwordless sudo.
All it takes is one "sudo ..." and your computer is completely compromised.
Originally posted by evert_mouw:
But one can fight malware on different levels (firewalls, antivirus software, browser blocklists) while on the other hand, limiting your feature-set is literally limiting.
I'm not aware of any good antivirus software on Linux.
As for blocklists, unless you use a whitelist instead, they are only effective against known bad actors.
For starters, you can use Firejail, which uses Linux namespaces to sandbox applications.
If you don't mind some performance penalty, you can go for gVisor or Firecracker, which use a small VMM to reduce the attack surface by emulating the Linux syscalls themselves instead of passing them to the host kernel directly.
Originally posted by NobodyXu:
Excuse me, but do you have anything backing what you say? Where does a number like 10x come from?
Last edited by ryao; 07 February 2023, 08:27 AM.
Originally posted by NobodyXu:
Malware doesn't even need root to persist. [...]
Firewalls generally don't help much in blocking malware unless you are running a server and use them to prevent attackers from brute-forcing your SSH password/key or triggering CVEs in your network stack, or to prevent DDoS, which is probably their most important use case.
I'm not aware of any good antivirus software on Linux.
As for blocklists, unless you use a whitelist instead, they are only effective against known bad actors.
For starters, you can use Firejail, which uses Linux namespaces to sandbox applications.
If you don't mind some performance penalty, you can go for gVisor or Firecracker, which use a small VMM to reduce the attack surface by emulating the Linux syscalls themselves instead of passing them to the host kernel directly.
In my experience, most malware comes through the browser. I'll review your suggestions; thanks!
Originally posted by ryao:
Actual experience. If you post a question to Stack Overflow asking for references for what I posted, you will likely get a number of them. Much of what I have written can be backed up by statistics. It is unfortunate that I do not have links to that data to give you, but the guys at Stack Overflow should have them.
Rewrites happen in the Linux kernel all the time, and they often involve changing the API. Is there any proof that the old Linux kernel had significantly fewer bugs than the new one because of these rewrites?
One example: there are many CVEs in the 4.x series due to Linux namespaces being a new feature, and they were messed up to the point that namespaces alone are not suitable for a secure sandbox.
And then there is wasm, an interpreter with capability-based syscalls (the wasm component model), meaning everything is sandboxed by default and you cannot open anything unless you have been handed the capability.
It is so much more secure than Linux namespaces that many cloud providers have adopted it, and it can actually run multiple wasm instances from different clients without using a virtual machine.
While it's not exactly a rewrite, they are similar technologies with overlapping use cases; wasm was invented after Linux namespaces and Docker and is strictly more complex than Linux namespaces, yet its designers got it right where the Linux kernel failed.
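The capability idea can be sketched in ordinary Rust (an illustrative sketch of the design style, not a real wasm/WASI API): instead of ambient authority, where any code may open any file, a function can only touch a resource it was explicitly handed.

```rust
use std::io::{self, Read};

// Capability style: `word_count` receives an already-opened handle and
// can read only that one resource. It has no way to open anything
// else -- the caller decides which capability to grant, mirroring how
// a wasm component receives handles rather than ambient filesystem
// access.
fn word_count(mut input: impl Read) -> io::Result<usize> {
    let mut s = String::new();
    input.read_to_string(&mut s)?;
    Ok(s.split_whitespace().count())
}

fn main() -> io::Result<()> {
    // Grant an in-memory "file" as the capability; the function
    // cannot reach beyond it.
    let n = word_count("two words".as_bytes())?;
    println!("{n} words");
    Ok(())
}
```

Deny-by-default falls out of the structure: anything not passed in simply cannot be accessed.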
And statistics are only meaningful with context and details; otherwise they could be made up entirely of failed rewrite attempts.
Originally posted by evert_mouw:
In my experience, most malware comes through the browser. I'll review your suggestions; thanks!
So you keep the ZFS backup system entirely offline and only connect to/turn it on when needed.
Also, since most malware now comes through the browser, as you mentioned, the system is already significantly more secure without a web browser.
It also runs the network stack, storage stack, and Bluetooth each in its own VM, so I believe it's the most secure option unless you use a dedicated computer for web browsing and another for other activities.
Originally posted by ryao:
Actual experience. If you post a question to Stack Overflow asking for references for what I posted, you will likely get a number of them. Much of what I have written can be backed up by statistics (and I have seen data showing it in the past). It is unfortunate that I do not have links to that data to give you, but the guys at Stack Overflow should have them.
Actually, the experience of a single developer also proves nothing. I've seen some code rewrites that had more bugs and some that had fewer. I've also seen some projects that had unfixable design bugs, where a complete rewrite was the only way to move forward.
Originally posted by ryao:
Rust is not 100% immune to memory issues, since it needs unsafe Rust to work in many cases, and unsafe Rust is well known to suffer from memory issues. For example:
This kind of reasoning is fallacious anyway. Even granting the few areas where unsafe needs to be used, everywhere else you do not have to worry about these problems, which by definition massively reduces the scope for security issues; that is already much better than the status quo in C/C++.
Originally posted by ryao:
For completeness, I should add that compiler bugs can also result in memory safety issues.
Originally posted by ryao:
However, even if there really were no memory safety issues, other classes of bugs would occur in enough abundance to make replacements more buggy than the original software. Let us say that a rewrite from scratch will have 10x the bugs (which is likely a conservative estimate). Then even if you eliminate 70% via memory safety, you still have 3x the bugs. This is why you have various industry leaders that encourage Rust adoption suggesting that Rust be used only for new projects rather than calling for complete rewrites in Rust.
What we do know is that rewriting existing C/C++ software in Rust is providing tangible benefits, especially when it comes to security. It's also bringing performance improvements, not because Rust is intrinsically faster (although there is some untapped future potential there) but because Rust provides fearless concurrency. Because of how easy it is to shoot yourself in the foot, a lot of C programs were written either in a single-threaded manner or with only trivial, low-hanging-fruit concurrency (i.e. at the process level, which brings multiplexing overhead). Because Rust checks these issues statically, programs can be rewritten from scratch with concurrency in mind, without decades of debugging concurrency problems (which also differ between architectures, making things worse; e.g. ARM's memory semantics are weaker than x86's).
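A minimal sketch of what "fearless concurrency" means in practice: the compiler refuses to let threads share mutable state unless it is wrapped in thread-safe types, so the data race below is ruled out at compile time rather than found in production.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Shared counter incremented from several threads. Removing the
// Arc/Mutex wrapper would not race silently -- it would fail to
// compile, because bare `&mut usize` cannot be sent to threads.
fn parallel_count(n_threads: usize, per_thread: usize) -> usize {
    let count = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();
    for _ in 0..n_threads {
        let count = Arc::clone(&count);
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                *count.lock().unwrap() += 1; // lock guards every update
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *count.lock().unwrap();
    total
}

fn main() {
    println!("{}", parallel_count(4, 1000)); // always 4000, never a torn update
}
```

The equivalent C version compiles fine without the lock and races nondeterministically, which is exactly the class of bug the post says people avoided by staying single-threaded.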
Originally posted by Sevard:
Well, if you post a question to stack overflow about rewrites that went well, then you'll also get many examples. This proves nothing.
Actually, the experience of a single developer also proves nothing. I've seen some code rewrites that had more bugs and some that had fewer. I've also seen some projects that had unfixable design bugs, where a complete rewrite was the only way to move forward.
Last edited by mdedetrich; 07 February 2023, 09:42 AM.