Running Linux 5.15-rc1 Causing A New Slowdown... Here's A Look


  • arQon
    replied
    Originally posted by remenic View Post
    All that money is put into enterprise use cases.
WTF do you think "enterprise use cases" are, that they won't suffer far more from gimped MM performance than e.g. a desktop waiting for keypresses?

    Even just a 2% performance hit translates to a LOT of extra $$$ on a DC electric bill alone, and far far more if it means you have to buy another 4000 servers to handle the spillover.

    TLF should be using some of its income to provide a competently-managed CI+test environment. We were doing this shit *last century* at *startups* ffs - how are these clowns STILL bigger amateurs than we were back then?! (And why the hell is *Google*, of all people, apparently not capable of that either? Or maybe they're doing it, but in their own cloud... :P)

    edit> Now that I get to the last page, I can see the build system DID actually complain, but the dev responsible for the code just ignored it. My apologies for doubting you, buildbot.
    Last edited by arQon; 19 September 2021, 04:05 AM.

  • perpetually high
    replied
    For those unable to click on Michael's link above:

    From: Linus Torvalds <[email protected]>
    To: Shakeel Butt <[email protected]>,
    Marek Szyprowski <[email protected]>,
    Andrew Morton <[email protected]>,
    Feng Tang <[email protected]>,
    Michael Larabel <[email protected]>
    Cc: Linux MM <[email protected]>,
    Linux Kernel Mailing List <[email protected]>
    Subject: Re: memcg: infrastructure to flush memcg stats
    Date: Thu, 16 Sep 2021 13:44:44 -0700 [thread overview]
    Message-ID: <CAHk-=wgcPrGW1=A9XetuUZv_QHf1p7znUUGbm7UCcawbboxRCQ@mail.gmail.com> (raw)
    In-Reply-To: <[email protected]>

    So the kernel test robot complained about this commit back when it was
    in the -mm tree:

    https://lore.kernel.org/all/20210726...OptiPlex-9020/

    but I never really saw anything else about it, and left it alone.

    However, now Michael Larabel (of phoronix) points to this commit too,
    and says it regresses several of his benchmarks.


    Shakeel, are you looking at this? Based on previous experience,
    Michael is great at running benchmarks on patches that you come up with.

    Linus

    Linus is not a man known to tell lies. That's for sure.

  • Michael
    replied
    Latest - https://lore.kernel.org/lkml/CAHk-=w...ail.gmail.com/

    I am also now communicating with Shakeel to test his forthcoming proposed patch(es).

  • NobodyXu
    replied
    Originally posted by user1 View Post

    Others may find performance regressions as well, but the question is are there any other people who are doing exactly what Michael does - comparing different Linux kernel versions with performance benchmarks to find performance regressions. At least I haven't heard of any. I also think that this way you can find the smallest regressions that can only be found by comparing different kernel versions.
    Not every kernel contributor has so much hardware available that they can set up different configurations and test them.

  • edwaleni
    replied
    Originally posted by agd5f View Post

    Lots of people do this every day. Not everyone has a website to announce their findings, but users and developers find issues, report, debug, and fix them every day.
    Many entities that rely heavily on Linux have internal tools that evaluate kernel updates. Some companies also write their own internal benchmarks using some of their own applications.

  • user1
    replied
    Originally posted by agd5f View Post

    I'm not discounting Michael's work, but there are a lot of people who find regressions (performance or otherwise) and report the issues to bugzilla or the relevant developer, or even Linus himself. The issues are debugged and fixed. There are probably tons of times there were performance regressions that Michael never noticed because they just happened to be found and fixed between times when he tested. Michael tends to get a lot of publicity because he writes about them on his website. You can find lots of examples on LKML or other venues.
    Others may find performance regressions as well, but the question is are there any other people who are doing exactly what Michael does - comparing different Linux kernel versions with performance benchmarks to find performance regressions. At least I haven't heard of any. I also think that this way you can find the smallest regressions that can only be found by comparing different kernel versions.

  • Teggs
    replied
    Originally posted by perpetually high View Post

    edit 3: this morning after I saw this, I looked up Michael on YouTube and saw him on camera for the first time. I don't think he'll mind me sharing this:


    https://www.youtube.com/watch?v=bkpFRcDOv1A&t=41m0s
    You lie. From previous site photos, I'm pretty sure Michael Larabel is a beer stein.

    On topic, at a glance it looks like supporting Phoronix gets you wide-ranging software reporting. After a while you notice that you actually get better software as well. Support Phoronix Today!

  • agd5f
    replied
    Originally posted by perpetually high View Post

    I've been frequenting for about 4 years now, and I haven't seen many do what he does. And the Google searches will back it up. He catches a lot of these things *at the perfect* time (while it's still in -rc and before it rears its ugly head), or even sometimes after the fact if there's a clear regression in a new kernel after a patch. That amd temp patch was one thing, but there's been so many regressions he's caught. He's invaluable because he's the watchdog of the Linux kernel performance. Think about it, he mentioned it to Linus. That's boss. He deserves a direct line.

    I'm just saying, not disagreeing with you, but I also think Michael is underrated even after everyone's words in this thread. That's just my personal opinion. And I know you feel similar, but calling a spade a spade, I think Michael is the sole one doing *exactly* what he does.
    I'm not discounting Michael's work, but there are a lot of people who find regressions (performance or otherwise) and report the issues to bugzilla or the relevant developer, or even Linus himself. The issues are debugged and fixed. There are probably tons of times there were performance regressions that Michael never noticed because they just happened to be found and fixed between times when he tested. Michael tends to get a lot of publicity because he writes about them on his website. You can find lots of examples on LKML or other venues.

  • F.Ultra
    replied
    Originally posted by Danny3 View Post

    I'm really wondering W.T.F. are those guys doing with the huge amounts of money they get from Google, Facebook, Microsoft and all other members ???
    I don't see them working to improve the desktop adoption, governments open source adoption or open standards adoption like Vulkan.

    Or funding Michael as a full time employee with all the benefits for his amazing work at finding all these regressions and saving millions for all these funding members that use Linux.

    I'm really disgusted to see this bullshit corrupt foundation that does nothing !
    It's very easy to look up if you're really interested. Here is their 2020 report: https://www.linuxfoundation.org/wp-c...ort_120520.pdf

  • perpetually high
    replied
    I'm just saying, Michael was just following Linus's law:

    In software development, Linus's law is the assertion that "given enough eyeballs, all bugs are shallow".

    The law was formulated by Eric S. Raymond in his essay and book The Cathedral and the Bazaar (1999), and was named in honor of Linus Torvalds.[1][2]

    A more formal statement is: "Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone." Presenting the code to multiple developers with the purpose of reaching consensus about its acceptance is a simple form of software reviewing. Researchers and practitioners have repeatedly shown the effectiveness of reviewing processes in finding bugs and security issues.[3]
    Doesn't sound like he broke any rules. Tell the authorities they don't need to come. All is okay. Linus is on top of it.
