
One Of The Reasons Why Linux 5.5 Can Be Running Slower


  • #31
    Originally posted by Danny3:
    I wonder what the Linux Foundation is doing with all that money it gets from the companies?
    No need to wonder when it's on their site.



  • #32
    Originally posted by Hans Bull:
    Since I can't see any reaction yet on lkml
    To quote Linus: "To absolutely nobody's surprise, last week was very quiet indeed". A fair number of the usual suspects were likely off on holiday, so I would not be surprised if it takes a bit longer before the author of the commit can review and understand the issue and decide whether to revert or revise.



  • #33
    Nice catch!



  • #34
    Originally posted by Zan Lynx:

    I haven't measured, but I have not noticed much of a slowdown in kernel compiles on my systems. Do you maybe use a small setting for sysctl vm.dirty_bytes or vm.dirty_ratio? When I do a kernel compile, almost all of the output files fit into cache without needing a tmpfs build.

    I'm running a new build now just to see. It's an 8-core 1700X with 32 GB RAM and a btrfs disk array; it's mostly a NAS. PSI does report that, for IO, generally 2 processes are stuck on IO (a full IO stall), so it is seeing a slowdown. But I'm not convinced it'd be worth putting the 20 GB of a complete build directory into tmpfs.
    My point was that it doesn't need to be significantly faster, only a bit faster. I don't compile kernels that often, so I'm not really an expert here, but I did a quick test. On my system, an NVMe SSD extracts linux-5.4.6.tar.xz in 7.4 seconds versus 7.3 seconds on tmpfs, roughly a 1.4% speedup. Erasing the kernel tree is three times as fast on tmpfs, and when building the defconfig kernel, the tmpfs build was a couple of seconds faster. After this exercise, an fstrim run took a few seconds. So most of the speedup comes from faster filesystem metadata processing.

    I can't think of any reason not to use tmpfs. It's not like you can't store the results of the compilation somewhere else afterwards; you don't need to keep the intermediate .o files and other compile-time artifacts anywhere. (A sketch of these checks and the tmpfs comparison follows below.)
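
    As a rough illustration of the sysctl and PSI checks and the tmpfs comparison discussed in this post -- a sketch only, with the mount point, tmpfs size, and file paths as assumptions rather than details from the thread:

        # Writeback limits -- small values force flushing to disk earlier,
        # so compile output stops fitting in the page cache
        sysctl vm.dirty_bytes vm.dirty_background_bytes
        sysctl vm.dirty_ratio vm.dirty_background_ratio

        # PSI for IO (kernels >= 4.20): the "full" line reports time during
        # which all non-idle tasks were stalled on IO at once
        cat /proc/pressure/io

        # tmpfs-vs-SSD comparison along the lines of the quick test above
        sudo mkdir -p /mnt/build
        sudo mount -t tmpfs -o size=24G tmpfs /mnt/build

        time tar -xf linux-5.4.6.tar.xz -C /mnt/build   # extract into tmpfs
        time tar -xf linux-5.4.6.tar.xz -C ~/src        # extract onto the SSD

        time rm -rf /mnt/build/linux-5.4.6              # remove tree on tmpfs
        time rm -rf ~/src/linux-5.4.6                   # remove tree on the SSD

        # After large deletions on an SSD-backed filesystem, trimming takes a while:
        sudo fstrim -v /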



  • #35
    Originally posted by Michael:

    Sadly, most do not, at least for showing support to make future tests possible.

    Greater than 50% ad block rates.
    Less than 1% being premium subscribers.
    Less than 1% tipping.
    I am a premium subscriber, and it is money well spent for this kind of investigating and reporting. I might be due to renew soon as well.

    I do not use an ad blocker, but I do use DuckDuckGo Privacy Essentials, and between that and the tracker blockers in Firefox, a number of sites I visit think that I am using an ad blocker. I wonder if tracker blockers are inflating your block rates.



  • #36
    Thanks for your great work, Michael. I went premium a few months ago because of your long-standing commitment and passion for doing real quality testing and comparisons. Please keep up the good work :) Best wishes and a Happy New Year!

    PS. Quite often I find myself browsing without being logged in to the forums, so I suppose the statistics are probably biased: many other people may also block trackers and ads or use Nano Defender etc. while not being logged in, quickly reading the Linux hardware reviews, performance tests, and open-source benchmarks on Phoronix.



  • #37
    Wonderful work, Michael. Sent a PayPal tip your way last night.
    Last edited by Random_Jerk; 30 December 2019, 11:15 AM.



  • #38
    Originally posted by Danny3:
    It's unbelievable how these awful regressions pass unobserved until Michael finds them...
    I think it's clear that the Linux kernel, which has millions of users, has no quality assurance whatsoever.
    I wonder what the Linux Foundation is doing with all that money it gets from the companies?

    They should be paying something like $5000/month to Michael for doing this wonderful job of finding these terrible regressions and raising the alarm.
    If it weren't for Michael we would have a much lower-quality and less performant kernel.
    To be honest, there are dozens of Fortune 500 companies whose businesses basically run on top of open source, and Linux in particular, and it kinda sucks that most of them don't really invest back in open source by doing proper QA/QC for the projects which desperately need it and could benefit immensely from it.

    Google comes to mind as a company whose entire business is based on open-source projects: all their servers run Linux and open source, Android includes dozens of open-source components, etc.

    Amazon AWS is based on Linux, and I wouldn't be surprised if their entire server infrastructure is Linux-based.

    Intel uses the Linux kernel to develop new chips, including CPUs, GPUs, FPGAs, etc.



  • #39
    Originally posted by baka0815:
    FireBurn, birdie: as in the past, this is Michael's bug report.
    Err, no it isn't. Writing an article on a news site is not how you report a kernel bug.

    The instructions are here: https://www.kernel.org/doc/html/late...ting-bugs.html



  • #40
    Originally posted by Michael:

    Sadly, most do not, at least for showing support to make future tests possible.

    Greater than 50% ad block rates.
    Less than 1% being premium subscribers.
    Less than 1% tipping.
    Site owners must take some responsibility for the fact that people feel compelled to block ads. As was noted downthread -- quite appropriately in a thread about a performance regression -- advertising as currently practiced greatly increases data volume and slows down browsers, not to mention that privacy goes out the window. I'd add that there's enough malware embedded in the opaque JavaScript that ads meaningfully add to the risks of online browsing.

    One real privacy issue that gets glossed over is that interest-based ads, targeted at the user's profile rather than at the page contents, could prove embarrassing if someone else happens to be watching one's screen, intentionally or otherwise. If I'm browsing a site like Phoronix (just for example -- I'm a premium subscriber) and get ads for computer systems, cloud provider services, electronic test equipment, and the like, and those ads are static images or text, reasonably sized, clearly labeled as to what's being advertised and by whom, with no JavaScript, that's fine. It's still targeted, in the sense that if I'm viewing this kind of site I'm reasonably likely to be interested, but it's based on the content of the page, not on some profile that has been built up and may or may not even be accurate. This is what DuckDuckGo does.

    Site owners can't hide behind third-party ad brokers here. They're allowing those third parties to display this kind of content. They could choose not to, and could insist on the kind of advertising described above.

