Is The Linux Kernel Scheduler Worse Than People Realize?


  • #11
    Originally posted by Hi-Angel View Post
    Would be nice if the kernel maintainers saw the article, so that the problem becomes known. Would it be worth writing to an IRC channel, or some mailing list? Or even reporting a bug? What does anyone think about it? I should say, I haven't read the paper yet.
    LKML is more literate than you think. Possibly CS profs as well.

    • #12
      I think we should be clear about what we're talking about, because there are several things here with similar-sounding names.
      There is the process scheduler proper in the kernel, and there is the I/O scheduler aka elevator (for storage devices, which is also a source of pain sometimes). Then we also have modern power management, which probably gets its fingers into the matter as well, by raising or lowering frequencies, power-gating parts of APUs, pushing or spreading work from core to core to prevent hotspots, or inducing sleep states. IPC may also have an influence on more complex tasks. And who knows what else might affect the performance that finally reaches layer 8 at the screen, which is what we actually perceive.
      I guess we need a full-grown kernel developer to take all of this apart and explain it clearly. (A sketch of the first distinction follows below.)
      Stop TCPA, stupid software patents and corrupt politicians!
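
      To make the first distinction concrete, here is a minimal C sketch (my own illustration, not from the paper) that queries both: the CPU scheduling policy of the calling process, and the active I/O elevator of a block device. The device name "sda" is a placeholder; adjust it for your system.

      /* Query the process (CPU) scheduler and the I/O elevator.
         The sysfs path assumes a block device named "sda". */
      #include <stdio.h>
      #include <sched.h>

      int main(void)
      {
          /* Process scheduler: which policy runs the calling process? */
          int policy = sched_getscheduler(0); /* 0 = calling process */
          printf("CPU scheduling policy: %s\n",
                 policy == SCHED_OTHER ? "SCHED_OTHER (the default)" :
                 policy == SCHED_FIFO  ? "SCHED_FIFO (real-time)" :
                 policy == SCHED_RR    ? "SCHED_RR (real-time)" : "other");

          /* I/O scheduler (elevator): a per-block-device setting in sysfs;
             the active one is shown in [brackets]. */
          FILE *f = fopen("/sys/block/sda/queue/scheduler", "r");
          if (f) {
              char line[256];
              if (fgets(line, sizeof line, f))
                  printf("I/O elevator for sda: %s", line);
              fclose(f);
          }
          return 0;
      }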

      • #13
        Originally posted by duby229 View Post

        And yet, NoOp outperforms other schedulers on almost every single load. There are plenty of indications that you are wrong.

        You are confusing the I/O scheduler (elevator) with the process scheduler.

        • #14
          Yeah. It's broken, and it has been broken for a long time. The last time I remember the scheduler behaving as expected was sometime around early 2.6.x; then it went downhill pretty fast. That could have been influenced by a lot of other additions, though.

          • #15
            Originally posted by macemoneta View Post


            You are confusing the I/O scheduler (elevator) with the process scheduler.
            I guess the differentiation between I/O scheduling and process scheduling is less clear in my mind than it is in yours. And I suppose that is why I see a problem.

            • #16
              Were the fixes commented on in the slides actually committed?

              Glad that the scheduler code is open source, so issues like this can be investigated.

              • #17
                This is not the post you are looking for. Move along...
                Last edited by Dick Palmer; 16 April 2016, 01:38 PM.

                • #18
                  The paper does indeed claim a 137x speedup for LU. But it's not very common to run workloads like these without pinning; a sketch of such pinning follows below. (Not saying anything against this work! It's just to make clear that you won't get that speedup on your own HPC code simply by reading the paper ;-) )
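
                  For reference, here is the kind of pinning meant, as a minimal Linux-specific C sketch (my own illustration, with CPU 0 as an arbitrary example): bind the calling thread to a single CPU via sched_setaffinity(), so the scheduler can no longer migrate it.

                  #define _GNU_SOURCE   /* for CPU_ZERO/CPU_SET and sched_setaffinity() */
                  #include <sched.h>
                  #include <stdio.h>

                  int main(void)
                  {
                      cpu_set_t set;
                      CPU_ZERO(&set);
                      CPU_SET(0, &set); /* allow this thread on CPU 0 only */

                      /* pid 0 = the calling thread; returns 0 on success */
                      if (sched_setaffinity(0, sizeof set, &set) != 0) {
                          perror("sched_setaffinity");
                          return 1;
                      }
                      printf("Pinned to CPU 0; the scheduler will not migrate this thread.\n");
                      return 0;
                  }

                  HPC runtimes (e.g. MPI launchers) usually do this per rank, which is why the pathological migrations the paper describes rarely hit pinned jobs.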

                  • #19
                    Originally posted by Dick Palmer View Post
                    Anyone know if this (from the slides @ ~19) is correct?..

                    ~137% improvement? A 137x improvement seems.. well.. unlikely?
                    It is 137x (from 2196 s down to 16 s, Table 3 of the referenced paper: 2196 / 16 ≈ 137). LU is a Gauss-Seidel solver, part of the NAS Parallel Benchmarks (NPB); I guess not something you would normally worry about on a desktop.
                    Last edited by norsetto; 16 April 2016, 01:44 PM. Reason: Table number was wrong

                    • #20
                      OlafLostViking, the point is exactly that the reason you have to do the pinning is that the scheduler is doing a bad job. Once the problems are fixed, the pinning should be unnecessary.
