Running Linux 5.15-rc1 Causing A New Slowdown... Here's A Look

  • #21
    Originally posted by perpetually high:

    edit 3: this morning after I saw this, I looked up Michael on YouTube and saw him on camera for the first time. I don't think he'll mind me sharing this:


    https://www.youtube.com/watch?v=bkpFRcDOv1A&t=41m0s
    You lie. From previous site photos, I'm pretty sure Michael Larabel is a beer stein.

    On topic: at a glance, supporting Phoronix gets you wide-ranging software reporting. After a while you notice that you actually get better software as well. Support Phoronix Today!



    • #22
      Originally posted by agd5f:

      I'm not discounting Michael's work, but there are lot of people that find regressions (performance or otherwise) and report the issues to bugzilla or the relevant developer, or even Linus himself. The issues are debugged and fixed. There are probably tons of times there were performance regressions that Michael never noticed because they just happened to be found and fixed between times when he tested. Michael tends to get a lot of publicity because he writes about them on his website. You can find lots of examples on LKML or other venues.
      Others may find performance regressions as well, but the question is whether anyone else is doing exactly what Michael does: comparing different Linux kernel versions with performance benchmarks to find regressions. At least I haven't heard of anyone. I also think this approach can catch even the smallest regressions, the kind that only show up when you compare kernel versions directly.
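      The cross-version comparison described above can be sketched in a few lines of shell. Everything here is illustrative, not from the thread: the two result files, the benchmark names and scores, and the 5% threshold are hypothetical stand-ins for real Phoronix Test Suite output.

```shell
#!/bin/sh
# Sketch: flag regressions between two kernels' benchmark runs.
# Assumes result files with lines "benchmark_name score" (higher = better),
# sorted by name so join(1) can pair them up.
cat > v5.14.txt <<'EOF'
hackbench 200
stress-ng-mmap 1000
EOF
cat > v5.15-rc1.txt <<'EOF'
hackbench 202
stress-ng-mmap 930
EOF

# Join on benchmark name, print percentage change, flag drops worse than 5%.
join v5.14.txt v5.15-rc1.txt | awk '{
    pct  = ($3 - $2) / $2 * 100
    flag = (pct < -5) ? "  <-- possible regression" : ""
    printf "%-16s %+.1f%%%s\n", $1, pct, flag
}'
```

      With the made-up numbers above, hackbench shows a small gain and stress-ng-mmap is flagged as a possible regression.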



      • #23
        Originally posted by agd5f:

        Lots of people do this every day. Not everyone has a website to announce their findings, but users and developers find issues, report, debug, and fix them every day.
        Many entities that have a large reliance on Linux have internal tools that evaluate the kernel updates. Some companies write their own internal benchmarks using some of their own applications as well.



        • #24
          Originally posted by user1:

          Others may find performance regressions as well, but the question is whether anyone else is doing exactly what Michael does: comparing different Linux kernel versions with performance benchmarks to find regressions. At least I haven't heard of anyone. I also think this approach can catch even the smallest regressions, the kind that only show up when you compare kernel versions directly.
          Not every kernel contributor has enough hardware available to set up different configurations and test them.



          • #25
            Originally posted by user1:

            Others may find performance regressions as well, but the question is whether anyone else is doing exactly what Michael does: comparing different Linux kernel versions with performance benchmarks to find regressions. At least I haven't heard of anyone. I also think this approach can catch even the smallest regressions, the kind that only show up when you compare kernel versions directly.
            No question Mr. Larabel is unique, especially with his prolific journalism and his focus on Linux gaming and desktop users. One of the great features of how Linux is developed is that anyone can access the source, not just for Linus' releases but also out-of-tree patches and other developers' trees, and test and examine them. And many entities do just that, even though many people will not be familiar with them.

            LWN had an interesting statistic about who the most prolific bug reporters for the v5.12 kernel were: 4 out of the top 5 were robots. The most prolific is the kernel test robot, run by Intel, which constantly runs tests, including performance regression tests between kernel versions. When it finds a problem, it bisects, then sends a report to LKML and an email to whoever created the patch. So most of this testing and reporting happens either internally or out of general public view. Given the general impression people seem to have in these forums, as noted by agd5f, perhaps that is a marketing failure.

            Regardless, Michael certainly provides unique insight into how this sort of access and testing is possible, and he reports it in an engaging and understandable way for his audience.

            https://01.org/lkp
            https://lwn.net/Articles/853039/
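            The find-a-problem-then-bisect loop described above can be sketched with `git bisect run` on a toy repository. This is only a demonstration of the mechanism, not the robot's actual harness: the repo, the "score" file standing in for a benchmark result, and the 95-point threshold are all made up.

```shell
#!/bin/sh
# Toy repo: a file named "score" stands in for a benchmark result.
set -e
rm -rf bisect-demo && git init -q bisect-demo && cd bisect-demo
git config user.email demo@example.com && git config user.name demo

echo 100 > score && git add score && git commit -qm "baseline"
for s in 101 102 90 91 92; do     # the result drops at the "score 90" commit
    echo "$s" > score && git commit -qam "score $s"
done

# Mark HEAD bad and the root commit good, then let bisect drive the
# "benchmark": exit 0 (good) while the score stays >= 95, else 1 (bad).
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)"
bad=$(git bisect run sh -c 'test "$(cat score)" -ge 95' \
      | sed -n 's/ is the first bad commit$//p')
git show -s --format='first bad commit: %s' "$bad"
git bisect reset >/dev/null
```

            The real robot then turns that first-bad-commit result into a report to LKML and a mail to the patch author; here the offending commit is simply printed.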



            • #26
              Latest - https://lore.kernel.org/lkml/CAHk-=w...ail.gmail.com/

              I am also now communicating with Shakeel to test his forthcoming proposed patch(es).
              Michael Larabel
              http://www.michaellarabel.com/



              • #27
                For those unable to click on Michael's link above:

                From: Linus Torvalds <[email protected]>
                To: Shakeel Butt <[email protected]>,
                Marek Szyprowski <[email protected]>,
                Andrew Morton <[email protected]>,
                Feng Tang <[email protected]>,
                Michael Larabel <[email protected]>
                Cc: Linux MM <[email protected]>,
                Linux Kernel Mailing List <[email protected]>
                Subject: Re: memcg: infrastructure to flush memcg stats
                Date: Thu, 16 Sep 2021 13:44:44 -0700 [thread overview]
                Message-ID: <[email protected]> (raw)
                In-Reply-To: <[email protected]>

                So the kernel test robot complained about this commit back when it was
                in the -mm tree:

                https://lore.kernel.org/all/20210726...OptiPlex-9020/

                but I never really saw anything else about it, and left it alone.

                However, now Michael Larabel (of phoronix) points to this commit too,
                and says it regresses several of his benchmarks too.


                Shakeel, are you looking at this? Based on previous experience, Michael is great at running benchmarks on patches that you come up with.

                Linus

                Linus is not a man known to tell lies. That's for sure.



                • #28
                  Originally posted by remenic:
                  All that money is put into enterprise use cases.
                  TF do you think "enterprise use cases" are, that they won't suffer far more from gimped MM performance than, e.g., a desktop waiting for keypresses?

                  Even just a 2% performance hit translates to a LOT of extra $$$ on a DC electric bill alone, and far far more if it means you have to buy another 4000 servers to handle the spillover.

                  TLF should be using some of its income to provide a competently-managed CI+test environment. We were doing this shit *last century* at *startups* ffs - how are these clowns STILL bigger amateurs than we were back then?! (And why the hell is *Google*, of all people, apparently not capable of that either? Or maybe they're doing it, but in their own cloud... :P)

                  edit> Now that I get to the last page, I can see the build system DID actually complain, but the dev responsible for the code just ignored it. My apologies for doubting you, buildbot.
                  Last edited by arQon; 19 September 2021, 04:05 AM.

