Torvalds' Comments On Linux Scheduler Woes: "Pure Garbage"


  • #21
    Originally posted by Creak View Post
    Whether you like it or not, Linus is quite rude in his answer. No matter how right or wrong the Google dev is, Linus sets an example of how you will be received if you say something on the mailing list that you believe is right. This doesn't create a friendly environment and doesn't encourage newcomers to share their issues, even if theirs might be real ones.
    Believe me, the alternative doesn't work. If you try to be polite, supportive and constructive, no matter how serious the problem you're trying to bring up is, it's perceived as a minor issue and action is rarely taken. If you're "lucky" enough, you also get to fix it, because you're the one who brought it up and you were so nice about it.

    And if you don't believe me, do this exercise: read this thread and count how many posts are about what Linus said vs how he said it.

    Comment


    • #22
      It's great - his response is pretty much exactly what I said in the other thread.

      I would like one cookie please.

      Comment


      • #23
        I like the ".. particularly bad random number generator" part.

        Comment


        • #24
          As others have said, I'm almost certain most have not read all three of Linus' posts in that thread (one, two, and three), which is too bad; they were easy to read and understand because of how Linus laid them out. Aside from the first couple of paragraphs, the tone was very informative and constructive. And quite frankly, Linus doesn't care about the garbage you guys spew from your arseholes. Hey, that actually felt pretty good. Maybe Linus is on to something.

          Comment


          • #25
            A bit off topic here ...

            On the multi-CPU systems I have abused, locks have always been evil and never behaved as planned.

            I always hated making the multi-week investment to work out a lock-less solution, but those solutions have always ended up faster. The only extra cost was in memory usage, and you can just throw money at that.

            Comment


            • #26
              Originally posted by Creak View Post
              Whether you like it or not, Linus is quite rude in his answer. No matter how right or wrong the Google dev is, Linus sets an example of how you will be received if you tell something on the mailing list that you believe is right. This doesn't set a friendly environment and doesn't encourage new comers to share their issues, even if theirs might be real ones.
              Oh yes, because Linux kernel development is a Special Education School. That is, one for mentally disabled people.

              If a big company like Google isn't able to do proper Linux development, they should get better project managers and better developers. Seriously, do they hire code monkeys these days?

              Google is becoming pathetic...

              Comment


              • #27
                Originally posted by qlum View Post

                If you read the actual thread you will see he is being more productive in later posts. The point he was trying to make is that what this person was trying was fundamentally flawed.
                Absolutely, I did read them. But the point I'm trying to make is that a lot of these threads end up as performance discussions for which there is no common baseline, nothing relevant to compare against, and no history to draw on. Every new problem starts from scratch.

                Comment


                • #28
                  Originally posted by kiffmet View Post
                  I have to say that the Linux scheduler definitely needs improvements though.
                  Sadly, the most promising alternative (MuQSS) needs a lot of work as well. For a while there were very peculiar issues when using 'schedutil' on AMD hardware with MuQSS. And MuQSS' developer has no interest whatsoever in supporting kernel point releases, openly cares very little about fully supporting cgroups in MuQSS, and so on.

                  Whether or not CFS needs work (which I do agree with, mind you), none of the alternatives out there are mature enough to replace it. And something like runtime-selectable alternatives, as is possible with the I/O scheduler, is going to be extremely complex to pull off, since it would require building a baseline framework to replace CFS that other CPU schedulers could plug into.

                  No matter how you approach this, fixing the actual issues is far more complex than what this Google developer tried to do. Basically, what they did demonstrates a lack of trust in the scheduler that does exist, and an attempt to circumvent it, or at least to outsmart it. That is the true issue, I feel: a lack of trust in the existing scheduler.

                  How many issues is CFS actually causing, versus userspace code simply not trusting CFS and trying to be smart? That is yet another question. See, so many variables here.

                  I'd be interested in metrics on that. Maybe that is where we should start if we want to address this issue more thoroughly: gathering actual, verifiable evidence of CFS messing up. Do you have specific workload examples where you can be 100% positive that it truly is CFS hampering performance? Is anyone in here able to gather metrics on this to share with the kernel devs?

                  Comment


                  • #29
                    Originally posted by menasaw683 View Post
                    Sadly, the most promising alternative (MuQSS) needs a lot of work as well. For a while there were very peculiar issues when using 'schedutil' on AMD hardware with MuQSS. And MuQSS' developer has no interest whatsoever in supporting kernel point releases, openly cares very little about fully supporting cgroups in MuQSS, and so on.
                    BMQ is a promising alternative worth mentioning as well. This YouTube video comparing CFS, BMQ, MuQSS, and PDS in games shows consistent improvements in FPS and frame times. I can only speak to gaming and overall desktop interactivity, but BMQ (as well as CFS) has been working well for me with the low-latency kernel on my desktop.

                    edit: I've been testing BMQ and CFS more, and noticed that when a kernel is compiling in the background, BMQ exhibited noticeable lag/slowdowns while using the desktop, and CFS didn't. So I think I'll be sticking with mainline CFS for the scheduler indefinitely.
                    Last edited by perpetually high; 12 January 2020, 01:45 PM.

                    Comment


                    • #30
                      Originally posted by perpetually high View Post
                      As others have said, I'm almost certain most have not read all three of Linus' posts in that thread (one, two, and three), which is too bad; they were easy to read and understand because of how Linus laid them out. Aside from the first couple of paragraphs, the tone was very informative and constructive. And quite frankly, Linus doesn't care about the garbage you guys spew from your arseholes. Hey, that actually felt pretty good. Maybe Linus is on to something.
                      Oh yes... like that manly feeling after getting your first pubic hairs, I guess? Well, the goal of working on a project is getting the project done, not feeling good by showing how educated you are. In the long run, people will notice whether you treat them with respect or not. If they sense a lack of respect, they will not work more than necessary, and they will not work with you; they will work against you.
                      The ones of you excited about having rude project leaders are usually the weak ones who get treated like this frequently. I think the psychological term is overcompensating. It just shows that you don't get the full picture.

                      Comment
