The Disappointing Direction Of Linux Performance From 4.16 To 5.4 Kernels


• #51
  Am I missing something? That's like -10% at most... in benchmarks (usually very extreme and specific). Who the f* cares? Are you all running businesses that depend on server speed? A normal user won't feel any difference. It may suck for some very low-end desktop users. If you really refuse to upgrade your hardware, just disable the mitigations (see the sketch below for what that actually toggles).
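  A quick way to see which mitigations a given kernel applies (and therefore what disabling them would turn off) is the sysfs interface. A minimal sketch, assuming a kernel new enough (roughly 4.15+) to expose this directory:

      #!/usr/bin/env python3
      # Print the kernel's view of CPU vulnerabilities and active mitigations.
      from pathlib import Path

      VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

      if not VULN_DIR.is_dir():
          print("This kernel does not expose vulnerability status in sysfs.")
      else:
          for entry in sorted(VULN_DIR.iterdir()):
              # Each file holds one line, e.g. "Mitigation: PTI" or "Not affected".
              print(f"{entry.name:20s} {entry.read_text().strip()}")

  Booting with mitigations=off on the kernel command line (added during the 5.x series and backported to the stable branches) flips most of these to "Vulnerable" and recovers the bulk of the lost performance.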



• #52
  Originally posted by birdie View Post
    On your desktop PC the only untrustworthy code which you usually run is JavaScript, period.
  Or Flash applets, or VB macros in documents, or whatever runs in VMs. Even data may contain active code that is necessary for the data to work properly. Perhaps it is bad practice, but that is the reality and we have to deal with it.

  Originally posted by birdie View Post
    What's the point of applying these protections to compilers? Video encoders/renderers? Word/spreadsheet/presentation applications? Graphics/animation/CAD/etc.? How are any of these applications able to run untrustworthy code in the first place?
  What about the numerous exploits such as Stagefright, ImageTragick, the Windows image viewer bug, or that video file that could crash iOS - shall I continue? Data is code as well!

  Originally posted by birdie View Post
    No, that's not the case. Either you have all the protections enabled or all of them disabled, and you cannot even enable/disable them at runtime, much less on a per-app basis.
  What exactly would be the benefit of toggling mitigations at runtime? I cannot think of anything besides performance testing. Per-application logic with some blacklist/whitelist might benefit some users, but is that really such a sore spot? Applications that spend most of their time doing their work in userspace - which is what most users run on their desktops - are not that critically affected anyway.



• #53
  Someone should throw a BSD in there so we can predict when the 'convergence' point will be. That way I'll know when to jump ship.

  EDIT:
  Originally posted by kuco View Post
    Am I missing something? That's like -10% at most... in benchmarks (usually very extreme and specific). Who the f* cares?
  If we continue to lose 10% every 12 months, when will you start to care? (The compounding is sketched below.)
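  To put that compounding in numbers - a quick back-of-the-envelope, taking the 10%-per-year figure as a hypothetical rather than a measured trend:

      # A 10% loss per year compounds multiplicatively:
      # after n years you keep 0.9**n of the original throughput.
      for years in range(1, 8):
          print(f"after {years} year(s): {0.9 ** years:.0%} of original performance")

  Seven years of that leaves you with less than half.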
  Last edited by Templar82; 11 November 2019, 05:15 PM. Reason: Noticed a new reply



• #54
  Originally posted by birdie View Post
    You can't slow down something by more than 100%, lol. And even slowing down something by 100% means it ceases to function completely, lol.
  True, but I read that certain tests were slowed down by 180% in certain areas - the two statements measure different things, since a runtime increase can exceed 100% even though a throughput loss cannot (see the sketch below). I guess I'll change the wording.
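  The distinction in numbers - a small illustration with made-up figures:

      # One measurement, two ways to report the slowdown.
      base_runtime, new_runtime = 1.0, 2.8   # seconds per task (hypothetical)
      runtime_increase = (new_runtime - base_runtime) / base_runtime
      throughput_loss = 1 - base_runtime / new_runtime
      print(f"runtime:    +{runtime_increase:.0%}")   # +180%, can exceed 100%
      print(f"throughput: -{throughput_loss:.0%}")    # -64%, capped below 100%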
  Last edited by r08z; 11 November 2019, 05:17 PM.



• #55
  Originally posted by MadCatX View Post
    ...
  1. Flash is dead. VB macros? Are you fucking high? Show me a working Meltdown exploit written in any macro language, please. VMs? Have I ever mentioned VMs? How many desktop users normally run VMs? 1%? Then why do all the rest have to suffer? And why do VMs have to always run untrusted code by default (according to your peculiar logic)? This is reality only in your perverted imagination, which sees exploits where none exist and none are even possible.

  Also, show me in-the-wild exploits/malware which use the said vulnerabilities. We've had them for almost two years now. There must be plenty of them, right? Why the fuck should we slow all our PCs down only because possible vulnerabilities exist? And trust me, if the NSA has to get into some network, these vulnerabilities will be the last to even be considered. We've had them for the past 20 months, but none have been used in any major hacking attempt. Fucking none. Meanwhile, all the PCs on this planet have recently slowed down by up to 80% just to feel on the safe side. To feel safe, not to be safe, because the users/servers out there are actually getting hacked through social engineering and classic vulnerabilities like unsafe C/C++ code, default passwords, bad planning, etc. etc. etc.

  2. Show me in-the-wild exploits/malware. Go on.

  3. The benefit is that certain mission-critical server applications can slow down by up to 40% (e.g. Redis/MySQL) due to these mitigations, which means companies need to buy ~67% more server equipment because they can't disable the mitigations on a per-app basis (see the arithmetic below). That's fucking money and wasted resources.

  This idiocy/lunacy of blanket transient-execution-vulnerability mitigation has to stop.
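  The capacity arithmetic behind point 3, as a quick check:

      # If mitigations cost 40% of throughput, each box does 0.6x the work,
      # so serving the same load takes 1/0.6 ~= 1.67x the machines.
      slowdown = 0.40
      extra = 1 / (1 - slowdown) - 1
      print(f"extra servers needed: {extra:.0%}")   # ~67%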
  Last edited by birdie; 12 November 2019, 03:35 PM.



• #56
  Does anyone know how to manually specify the GPU for the Phoronix Test Suite? I haven't been able to use it for quite a while because it only detects the GT 710 I use for my VM (which is bound to the vfio driver), but it doesn't detect my R9 390 at all (using the Mesa amdgpu driver).
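  Not a full answer, but one lever worth checking is Mesa's render-offload variable, DRI_PRIME, which makes Mesa render on the secondary GPU; whether PTS's hardware detection then picks up the R9 390 is a separate question. A small sketch, assuming glxinfo (from mesa-utils) is installed:

      import os
      import subprocess

      # Ask Mesa to render on the secondary GPU, then report which
      # device actually handled the OpenGL context.
      env = dict(os.environ, DRI_PRIME="1")
      out = subprocess.run(["glxinfo"], env=env, capture_output=True, text=True)
      for line in out.stdout.splitlines():
          if "OpenGL renderer" in line:
              print(line.strip())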



• #57
  Originally posted by ThoreauHD View Post
    Isn't 4.18 when CoC was inserted into the Linux kernel?
  Correlation does not imply causation.



• #58
  Originally posted by gutigone View Post
    5.14?! :P
  Oh, I hate those number typos so much!



• #59
    At least the Linux kernel continues picking up many new features as due to security mitigations and other factors the kernel performance continues trending lower.
  I consider this quote a symptom of "premature conclusive journalism": reaching a definitive conclusion without a solid factual basis as to the cause.

  I looked through the test results, and they do indicate that some of these tests show performance degradation - what Linux developers would call a "regression". There is something here to investigate more deeply and thoroughly than this article does, even to the point of bisecting the code in use during the test (a sketch of how that is usually automated is at the end of this post); every aspect of these tests should be looked at, nothing off limits.

  Perhaps the most problematic page is page 7, which shows tests of the OS driven through a web browser. How do we eliminate the web browser as a contributor to the degradations/regressions? Without looking at the PTS code we cannot, since the article is vague on these details. Could it simply be a case of the browser in use having some sort of problem with these kernel versions, and not the other way around? For me there is no absolving any test component "just because".

  Sorry, but I am not buying this conclusion given these test results. No sale here.
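  For reference, the kernel community's standard tool for pinning down a regression like this is git bisect, which can be automated with `git bisect run` and any script that exits 0 for a good revision, 125 to skip an untestable one, and any other code up to 127 for a bad one. A minimal sketch of such a driver - the benchmark script and threshold are hypothetical placeholders, and a real kernel bisect would rebuild and reboot at every step:

      #!/usr/bin/env python3
      # Driver for `git bisect run`: exit 0 = good, 1 = bad, 125 = skip.
      import subprocess
      import sys

      THRESHOLD = 900.0  # hypothetical ops/s measured on the known-good kernel

      out = subprocess.run(["./run-benchmark.sh"],  # hypothetical workload script
                           capture_output=True, text=True)
      if out.returncode != 0:
          sys.exit(125)  # this revision couldn't be tested: tell bisect to skip it
      sys.exit(0 if float(out.stdout.strip()) >= THRESHOLD else 1)

  Used as: git bisect start; git bisect bad v5.4; git bisect good v4.16; git bisect run ./bisect-test.py.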



• #60
  Linus once said, jokingly, that losing to Windows in a benchmark should be treated as a bug. Maybe it should be taken seriously!

