The Disappointing Direction Of Linux Performance From 4.16 To 5.4 Kernels
-
Originally posted by Danny3: I hate when developers care only about security and do the changes no matter the costs.
The culprits here are Intel and, to a lesser extent, AMD.
Do your homework and learn how to disable these mitigations if they bother you so much.
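For anyone who wants to do exactly that, here is a sketch of the usual procedure on a GRUB 2 based distro with a recent kernel. The kernel parameters shown are real, but paths and the regeneration command vary by distribution, so treat this as a template rather than copy-paste instructions:

```shell
# See which mitigations are currently active on this machine.
grep . /sys/devices/system/cpu/vulnerabilities/*

# To disable all optional CPU mitigations at boot (recent kernels),
# add mitigations=off to the kernel command line, e.g. in /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off"
# then regenerate the GRUB config and reboot:
sudo update-grub   # Debian/Ubuntu; elsewhere: grub2-mkconfig -o /boot/grub2/grub.cfg

# Individual knobs also exist if you only want to drop some of them, e.g.:
#   nopti nospectre_v2 spec_store_bypass_disable=off mds=off l1tf=off
```

After rebooting, the sysfs files above will report "Vulnerable" for whatever you switched off, which is a quick way to confirm the parameters took effect.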
-
If anyone feels they've been screwed by all these mitigations, you can watch Greg Kroah-Hartman's talk on them (from OSSEU/ELCE 2019), where he apologizes to you all for stealing your CPU cycles: https://www.youtube.com/watch?v=fIwr_znLsec
-
Why is everyone so upset about the mitigations anyway?
This isn't Windows, where the whole system WILL REBOOT no matter what when it needs to update; the mitigations are just a DEFAULT SETTING you can change if you don't want them.
-
Originally posted by GrayShade: These are timing side-channel attacks. They can allow you to extract sensitive data from different processes or threads, provided you get code running on the machine and a reliable way to time it. It's not "ZOMG MICROCODE COMPROMISE"; it's more "leave this tab running in an outdated browser for an hour and it might steal your Facebook session cookie" by measuring tiny differences in the execution speed of the code.
The cloud providers care the most about these mitigations because for them it's a real risk that a customer gets some private keys or other sensitive data stolen by someone else sharing the same server.
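To make the "measuring tiny execution-speed differences" idea concrete, here is a deliberately simplified toy (not a real Spectre/Meltdown exploit, and the secret/alphabet are made up for the example): a naive comparison stops at the first mismatch, so how long it runs depends on how much of a guess is correct, and an attacker who can observe that difference can recover a secret one character at a time. The step count stands in for wall-clock time:

```python
SECRET = "hunt2"  # hypothetical secret held by the "victim"
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

def naive_compare(secret: str, guess: str) -> int:
    """Return how many leading characters match before the loop bails out.
    In a real timing attack this count would manifest as elapsed time."""
    steps = 0
    for s, g in zip(secret, guess):
        if s != g:
            break
        steps += 1
    return steps

def recover_secret(length: int) -> str:
    """Recover the secret by picking, per position, the character that
    makes the comparison run the longest."""
    known = ""
    for pos in range(length):
        pad = "\0" * (length - pos - 1)  # filler that never matches
        best = max(ALPHABET, key=lambda c: naive_compare(SECRET, known + c + pad))
        known += best
    return known

print(recover_secret(len(SECRET)))  # prints: hunt2
```

Real transient-execution attacks are far subtler (the "comparison" happens speculatively and the timing signal is read out of the cache), but the leak-by-timing principle is the same, which is why constant-time code and the kernel mitigations both aim to remove the measurable difference.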
-
Originally posted by jrch2k8: Most of these attacks are practically undetectable because they run inside the CPU with MORE PRIORITY (as an execution ring) THAN THE KERNEL AND HYPERVISORS, AND IN SOME CASES AT THE SAME LEVEL AS THE MICROCODE, depending on your CPU manufacturer or silicon revision. These hardware flaws are so nasty that you can even violate x86 execution rules, L1 cache integrity, SMM, NX bits, checksumming, etc.
The cloud providers care the most about these mitigations because for them it's a real risk that a customer gets some private keys or other sensitive data stolen by someone else sharing the same server.
Last edited by GrayShade; 11 November 2019, 06:45 PM.
-
Originally posted by birdie: 1. Flash is dead. VB macros? Are you fucking high? Show me a working Meltdown exploit written in any macro language, please. VMs? Have I ever mentioned VMs? How many desktop users normally run VMs? 1%? Then why do all the rest have to suffer? And why do VMs have to always run untrusted code by default (according to your peculiar logic)? This is reality only in your perverted imagination, which sees exploits where none exist and none are even possible.
Your logic regarding real-world exploitation is flawed. The fact that there does not appear to have been a real attack based on Spectre is very likely the result of the active development and deployment of effective countermeasures. Fixing a problem only after something bad happens is not the way to go; the guys at Boeing who designed the brilliant MCAS would probably have a few things to say about this.
Originally posted by birdie: 2. Show me in-the-wild exploits/malware. Go on.
Originally posted by birdie: 3. The benefit is that certain mission-critical server applications can sometimes slow down by up to 40% (e.g. Redis/MySQL) due to these mitigations, and that means companies need to buy 40% more server equipment because they can't disable the mitigations on a per-app basis. That's fucking money and wasted resources.
This idiocy/lunacy about blanket transient execution vulnerability mitigation has to stop.
Nothing is stopping you from switching the mitigations off; making that the default would, however, be grossly irresponsible.
-
Originally posted by birdie:
1. Flash is dead. VB macros? Are you fucking high? Show me a working Meltdown exploit written in any macro language, please. VMs? Have I ever mentioned VMs? How many desktop users normally run VMs? 1%? Then why do all the rest have to suffer? And why do VMs have to always run untrusted code by default (according to your peculiar logic)? This is reality only in your perverted imagination, which sees exploits where none exist and none are even possible.
Also, show me in-the-wild exploits/malware which use the said vulnerabilities. We've had them for almost two years now. There must be plenty of them, right? Why the fuck should we slow all our PCs down only because possible vulnerabilities exist? And trust me, if the NSA has to get into some network, these vulnerabilities will be the last to even be considered. We've had them for the past 20 months but none have been used in any major hacking attempt. Fucking none. Meanwhile all the PCs on this planet have recently slowed down by up to 80%, just to feel on the safe side. Not to actually be safe, because 100% of users/servers out there are getting hacked through social engineering and classic vulnerabilities like unsafe C/C++ code, default passwords, bad planning, etc. etc. etc.
2. Show me in-the-wild exploits/malware. Go on.
3. The benefit is that certain mission-critical server applications can sometimes slow down by up to 40% (e.g. Redis/MySQL) due to these mitigations, and that means companies need to buy 40% more server equipment because they can't disable the mitigations on a per-app basis. That's fucking money and wasted resources.
This idiocy/lunacy about blanket transient execution vulnerability mitigation has to stop.
1.) Also, please understand these are hardware flaws so nasty that they are barely detectable (in most if not all cases they leave few or no traces), even by the kind of hardcore experts that only a few big companies or pro white-hat teams can afford. It is unreasonable to expect big flashy news like you get with rootkits and other software-based security problems. And yes, for business, extra hardware is the least expensive path; if you had ever worked in enterprise you would know that 90% of the time hardware is barely a cost compared to the other parts of the business systems.
2.) https://foreshadowattack.eu/ is just the first result for the first vulnerability acronym I remembered; it's not that hard (there is even a YouTube tutorial).
3.) The fact that you imply mitigations should be applied per app proves definitively that the kernel developers are completely right to enable all mitigations by default and ask the user to explicitly disable them.
Also, as a note:
Most of these attacks are practically undetectable because they run inside the CPU with MORE PRIORITY (as an execution ring) THAN THE KERNEL AND HYPERVISORS, AND IN SOME CASES AT THE SAME LEVEL AS THE MICROCODE, depending on your CPU manufacturer or silicon revision. These hardware flaws are so nasty that you can even violate x86 execution rules, L1 cache integrity, SMM, NX bits, checksumming, etc.
Sure, I could agree that certain milder flaws, which only allow simple things like reading from the L3 cache under certain conditions in affected software, could be mitigated per application, but the mitigation hit of those is negligible; the big hit comes from the really nasty ones like L1TF and co.
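One nuance worth adding to the per-app debate: for one specific issue, Speculative Store Bypass, Linux (4.17+) does expose a per-task control via prctl(PR_SET_SPECULATION_CTRL/PR_GET_SPECULATION_CTRL), so a limited form of per-application mitigation already exists. A minimal query sketch using ctypes; the constants are copied from linux/prctl.h, it is Linux-only, and a given kernel/CPU combination may report the control as unavailable:

```python
import ctypes
import sys

# Constants from <linux/prctl.h> (speculation control, Linux >= 4.17).
PR_GET_SPECULATION_CTRL = 52
PR_SPEC_STORE_BYPASS = 0

# Bits in the value returned by PR_GET_SPECULATION_CTRL.
PR_SPEC_PRCTL = 1 << 0    # per-task control is available
PR_SPEC_ENABLE = 1 << 1   # speculation enabled (mitigation OFF for this task)
PR_SPEC_DISABLE = 1 << 2  # speculation disabled (mitigation ON for this task)

def ssb_speculation_status():
    """Query this task's Speculative Store Bypass speculation status.
    Returns the raw bitmask, or None when the control is unavailable
    (non-Linux platform, old kernel, or unaffected CPU)."""
    if not sys.platform.startswith("linux"):
        return None
    libc = ctypes.CDLL(None, use_errno=True)
    res = libc.prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, 0, 0, 0)
    if res < 0:
        return None  # EINVAL/ENODEV: kernel or CPU lacks this control
    return res

status = ssb_speculation_status()
if status is None:
    print("per-task speculation control not available here")
elif status & PR_SPEC_PRCTL:
    state = "ON" if status & PR_SPEC_DISABLE else "OFF"
    print(f"SSB mitigation is {state} for this task (prctl-controllable)")
else:
    print(f"SSB state is fixed globally (bitmask {status:#x})")
```

A task that wants the mitigation for itself would then call prctl with PR_SET_SPECULATION_CTRL and PR_SPEC_DISABLE. This only covers SSB, though; the expensive mitigations like L1TF have no such per-task switch, which supports the point above.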
-
Linus once joked that losing to Windows in a benchmark should be treated as a bug. Maybe that should be taken seriously!
-
At least the Linux kernel continues picking up many new features, even as security mitigations and other factors keep pushing kernel performance lower.
I looked through the test results, and some of these tests do show performance degradation; the term Linux developers use is "regression". There is something here worth investigating more deeply and more thoroughly than this article does, even to the point of bisecting the code in use during the tests; every aspect of these tests should be looked at, nothing off limits.
Perhaps the most problematic page is the one showing tests of an OS run through a web browser, on page 7. How do we eliminate the web browser as a contributor to the degradations/regressions? Without looking at the PTS code we cannot, since the article is vague on these details. Could it simply be a case of the web browser in use having some sort of problem with these kernel versions, and not the other way around? For me there is no absolving any test component "just because".
Sorry, but I am not buying this conclusion given these test results. No sale here.