
The Disappointing Direction Of Linux Performance From 4.16 To 5.4 Kernels


  • #61
    Originally posted by birdie View Post

    1. Flash is dead. VB macros? Are you fucking high? Show me a working meltdown exploit written in any macros language please. VMs? Have I ever mentioned VMs? How many desktop users normally run VMs? 1%? Then why all the rest have to suffer? And why do VMs have to always run untrusted code by default (according to your peculiar logic)? This is reality only in your perverted imagination which sees exploits where none exist and none are even possible.

    Also, show me in-the-wild exploits/malware which uses the said vulnerabilities. We've had them for almost two years now. There must be plenty of them, right? Why the fuck should we slow all our PCs down only because possible vulnerabilities exist? And trust me, if the NSA has to get into some network, these vulnerabilities will be the last to be even considered. We've had them for the past 20 months but none have been used in any major hacking attempt. Fucking none. Meanwhile all the PCs on this planet have recently slowed down by up to 80% just to feel on the safe side. Not to be safe, because 100% of users/servers out there are getting hacked through social engineering and classic vulnerabilities like unsafe C/C++ code, default passwords, bad planning, etc. etc. etc.

    2. Show me in the wild exploits/malware. Go on.

    3. The benefit is that certain mission critical server applications can sometimes slow down by up to 40% (e.g. Redis/MySQL) due to these mitigations, and that means companies need to buy 40% more server equipment because they can't disable these mitigations on a per app basis. That's fucking money and wasted resources.

    This idiocy/lunacy about blanket transient execution vulnerabilities mitigation has to stop.
    1.) You are ranting about two different things. Here your problem is with security vulnerabilities in either the kernel or user space caused by bad software or mildly defective hardware; this has nothing to do with the mitigations.

    Also, please understand these are hardware flaws so nasty that they are barely detectable (in most if not all cases they leave little or no trace), even by the hardcore experts that only a few big companies or professional white-hat teams can afford. It is unreasonable to expect big flashy news like you get with rootkits and other software-based security problems. And yes, for businesses, buying hardware is the least expensive path; if you had ever worked in the enterprise you would know that 90% of the time hardware is barely a cost compared to the other parts of a business system.

    2.) https://foreshadowattack.eu/ is just the first result for the first vulnerability acronym I remembered. It is not that hard (there is even a YouTube tutorial).

    3.) The fact that you imply mitigations should be applied per app proves definitively that the kernel developers are completely right to enable all mitigations by default and ask the user to explicitly disable them.

    Also, as a note:

    Most of these attacks are practically undetectable because they run inside the CPU with MORE PRIORITY (as an execution ring) THAN THE KERNEL AND HYPERVISORS, AND IN SOME CASES AT THE SAME LEVEL AS THE MICROCODE, depending on your CPU manufacturer or silicon revision. These hardware flaws are so nasty you can even violate x86 execution rules, L1 cache integrity, SMM, NX bits, checksumming, etc.

    Sure, I could agree that certain milder flaws, ones that only allow simple things like reading from L3 caches under certain conditions in affected software, could be mitigated per application, but the mitigation cost of those is negligible; the big hit comes from the really nasty ones like L1TF and co.



    • #62
      Originally posted by birdie View Post
      1. Flash is dead. VB macros? Are you fucking high? Show me a working meltdown exploit written in any macros language please. VMs? Have I ever mentioned VMs? How many desktop users normally run VMs? 1%? Then why all the rest have to suffer? And why do VMs have to always run untrusted code by default (according to your peculiar logic)? This is reality only in your perverted imagination which sees exploits where none exist and none are even possible.
      Flash will be dead only when Google stops shipping the plugin with Chrome by default and when Adobe pulls it from its website. Neither has happened so far. To make matters worse, the sites that still use Flash nowadays are usually those that provide questionable content from dubious sources. VB is a Turing-complete, fully featured programming language just like C or JS. If you can write a Spectre exploit in JS, there is no reason to assume that it cannot be done in VB. Using a VM to run code that is untrusted in one way or another is a very common use case; I don't understand what confuses you about this.

      Your logic regarding real-world exploitation is flawed. The fact that there does not appear to have been a real attack based on Spectre is very likely the result of active development and deployment of effective countermeasures. Fixing a problem only after something bad happens is not the way to go; the guys at Boeing who designed the brilliant MCAS would probably have a few things to say about this.

      Originally posted by birdie View Post
      2. Show me in the wild exploits/malware. Go on.
      For what? Stagefright and Imagetragick were being actively exploited before the patches were available.

      Originally posted by birdie View Post
      3. The benefit is that certain mission critical server applications can sometimes slow down by up to 40% (e.g. Redis/MySQL) due to these mitigations, and that means companies need to buy 40% more server equipment because they can't disable these mitigations on a per app basis. That's fucking money and wasted resources.

      This idiocy/lunacy about blanket transient execution vulnerabilities mitigation has to stop.
      If you are a business that actually has to care about hardware costs, you probably won't be running Apache and MySQL on one machine in the first place. There is also the question of whether a per-app mitigation would actually be secure. Also, I don't really understand what you're getting at. That the mitigation mechanisms aren't exactly perfect? Well, duh. Security issues caused by speculative execution are still rather new for the industry, so it shouldn't surprise you that most developers went with a "security first, optimization later" strategy.

      Nothing is stopping you from switching the mitigations off; making that the default would, however, be grossly irresponsible.



      • #63
        Originally posted by jrch2k8 View Post
        Most of these attacks are practically undetectable because they run inside the CPU with MORE PRIORITY (as an execution ring) THAN THE KERNEL AND HYPERVISORS, AND IN SOME CASES AT THE SAME LEVEL AS THE MICROCODE, depending on your CPU manufacturer or silicon revision. These hardware flaws are so nasty you can even violate x86 execution rules, L1 cache integrity, SMM, NX bits, checksumming, etc.
        These are timing side-channel attacks. They can allow you to extract sensitive data from different processes or threads, provided you get code running on the machine and a reliable way to time it. It's not "ZOMG MICROCODE COMPROMISE"; it's more of "leave this tab running in an outdated browser for one hour and it might steal your Facebook session cookie" by measuring tiny differences in the execution speed of the code.

        The cloud providers care the most about these mitigations because for them it's a real risk that a customer gets some private keys or other sensitive data stolen by someone else sharing the same server.
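        The flush+reload idea behind these timing attacks can be illustrated with a small, deterministic toy model (a pure simulation with made-up names and latencies, not a real exploit): the victim's secret-dependent memory access leaves exactly one probe "cache line" hot, and the attacker recovers the secret byte purely by timing accesses.

```python
# Toy simulation of a flush+reload cache side channel.
# No real speculation or hardware timing involved: the "cache" is a
# set of line indices, and "latency" is fast only for cached lines.

LINES = 256           # one probe line per possible byte value
FAST, SLOW = 10, 200  # simulated access latencies (arbitrary units)

def flush(cache):
    cache.clear()     # attacker flushes every probe line

def victim(cache, secret_byte):
    # The victim touches exactly one line, indexed by its secret.
    cache.add(secret_byte)

def access_time(cache, line):
    return FAST if line in cache else SLOW

def recover_byte(secret_byte):
    cache = set()
    flush(cache)
    victim(cache, secret_byte)
    # Attacker probes every line and keeps the fastest one.
    timings = [access_time(cache, i) for i in range(LINES)]
    return min(range(LINES), key=timings.__getitem__)

message = b"kernel"
leaked = bytes(recover_byte(b) for b in message)
print(leaked)  # b'kernel'
```

        A real attack replaces the simulated cache with actual cache lines and rdtsc-style timers, and uses speculative execution to make the victim touch the secret-indexed line; that is why it gets mitigated system-wide in the kernel and in browsers rather than per app.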
        Last edited by GrayShade; 11-11-2019, 06:45 PM.



        • #64
          Originally posted by GrayShade View Post

          These are timing side-channel attacks. They can allow you to extract sensitive data from different processes or threads, provided you get code running on the machine and a reliable way to time it. It's not "ZOMG MICROCODE COMPROMISE"; it's more of "leave this tab running in an outdated browser for one hour and it might steal your Facebook session cookie" by measuring tiny differences in the execution speed of the code.

          The cloud providers care the most about these mitigations because for them it's a real risk that a customer gets some private keys or other sensitive data stolen by someone else sharing the same server.
          I agree about this for the small ones, which are mostly side channels and high-level cache snoopers, but the really dangerous ones like Spectre v4+ and L1TF go way deeper. In fact, there are papers and YouTube videos where a researcher got so low-level that he found undocumented opcodes outside the x86 spec, accessed parts of the microcode, and did really badass stuff with those opcodes. You can find a few others where they demonstrate phantom processes running alongside the kernel, even modifying protected VM execution in real time, while the entire OS and hypervisor remain totally unaware, on several OSes. I also found some papers on how to attack certain NICs to cover your tracks, making packets invisible even to sniffers. Coincidentally, those are the mitigations that really take a toll on performance; the side-channel fixes are mostly very cheap, though not all of them.



          • #65
            Originally posted by hax0r View Post
            Performance degradation isn't all due to CPU mitigation patches. There's no CPU scheduler tailored for desktop users (one that would allow low latency and high throughput, aimed at 2–8 CPU core configurations). There's no decent filesystem for gamers; the list goes on. There's no CI in Linux, developers don't test their changes, and when they do, they happen to benchmark inadequately. This is why SGI workstations from 1994 were actually much more practical than computers today: X11 and OpenGL were better off in the 90s, IRIX was such a bliss, and everything just worked.
            Pretty much all of this statement is false. Just for shits and giggles, I will address the SGI aspect. I own an Indy 2 Impact 10000. The feeblest netbook from even several years ago would make it look like a joke, the desktop would be soundly mocked on this forum, and the thing sucks over [email protected] idle. (Engage its 3D graphics card, which takes up *3* bus slots, turn on the boat-anchor 21" Trinitron monitor, and hammer the 10k rpm SCSI disks, and it will not only drown out nearby jets but make your power bill cry.) Also, IRIX sucks... but it is fun for retro-computing.



            • #66
              Why is everyone so upset about the mitigations anyway?
              This isn't Windows, where the whole system WILL REBOOT no matter what if it needs to update; mitigations are just a DEFAULT SETTING you can change if you don't want them.
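              For the record, recent kernels report exactly which mitigations are active via sysfs, and a single boot parameter flips the default. The GRUB file path below is just the common Debian/Ubuntu-style example; check your distro's docs for the equivalent:

```shell
# Show each known vulnerability and how this kernel handles it
grep . /sys/devices/system/cpu/vulnerabilities/*

# To boot without the mitigations, add this to the kernel command line,
# e.g. in GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate the
# GRUB config (update-grub on Debian/Ubuntu):
#   mitigations=off
```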



              • #67
                If anyone feels they've been screwed by all these mitigations, you can watch Greg Kroah-Hartman's talk about them (from OSSEU/ELCE 2019), where he apologizes to you all for stealing your CPU cycles: https://www.youtube.com/watch?v=fIwr_znLsec



                • #68
                  Originally posted by Danny3 View Post
                  I hate when developers care only about security and do the changes no matter the costs.
                  Software affected includes air traffic control, water treatment, the electric grid, banking, commerce, and shipping. But fuck it, let's leave everything vulnerable because "you hate when developers care only about security no matter the costs".

                  The culprits here are Intel and, to a lesser extent, AMD.

                  Do your homework and learn how to disable these mitigations if they bother you so much.



                  • #69
                    Originally posted by road hazard View Post

                    Maybe because it's deserved? Unlike Intel, AMD didn't cut corners with their CPU design. All those years where benchmarks showed Intel CPUs dominating AMD... now it all makes sense how they did it: because they cut corners and security took a back seat to performance/profits.

                    Yes, AMD was impacted by some of these CPU vulnerabilities but not to the extent that Intel was. I'll never buy Intel CPUs again (because of them cutting corners) nor will I ever buy another Nvidia GPU (because of their hatred for Linux).

                    On a side note, I agree with others....... these tests should have included AMD hardware. Is there a reason why AMD wasn't tested?
                    Do you really think Nvidia hates Linux these days? All of the supercomputers their GPUs are in run Linux.



                    • #70
                      Originally posted by Azrael5 View Post
                      So what the fuck happened from the 4.16 to the 5.5 kernel? Linux is doing everything possible to be the worst among operating system platforms.
                      Don't rant: read and understand. The slowdown is largely due to hardware security mitigations, which you can turn off.
