Linux 6.8 Scheduler Changes Include New EEVDF Fast Path, Additional Scheduler Tuning

  • #1
    Phoronix: Linux 6.8 Scheduler Changes Include New EEVDF Fast Path, Additional Scheduler Tuning

    Ingo Molnar sent in all of the scheduler changes this morning for the now-open Linux 6.8 merge window...


  • #2
    Can anyone help decipher this data? To me it seems the change makes the published benchmarks 3-20 times faster?



    • #3
      Originally posted by varikonniemi View Post
      can anyone help deciphering this data? To me it seems the change makes the published benchmarks 3-20 times faster?
      You are confusing micro benchmarks with impact on real world performance for your workloads. You shouldn't read too much into it and assume that it translates 1:1. If you want to understand the data, you should run the macro benchmarks on workloads you care about.



      • #4
        I know all of that; what I was asking is whether I read correctly how they described the results in that table. What are the other results? Slow and parity cannot be in the same league, because they are on a different order of magnitude.



        • #5
          Originally posted by varikonniemi View Post
          I know all of that, what i was asking if i read correctly how they described the results in that table. What are the other results? Slow and parity cannot be in the same league because they are on another order of magnitude.
          Unless you read it the other way, with direct being the perfect option, fast being the second, and so on.



          • #6
            Something I find interesting: governors like Conservative allow setting clocks based on high-priority tasks only. If you enable that behavior while using these EEVDF changes, I wonder whether the low-priority tasks will cause the clock speeds to be set too low for the high-priority tasks as well, since those now need to wait on low-priority work.

            As a simple hypothetical example, let's assume a high-priority and a low-priority task each cause the governor to decide it would take 1 GHz of additional speed to finish on time:
            A. Clock speeds are set based on all tasks (default):
            1 GHz + 1 GHz = 2 GHz. This is sufficient to complete both tasks on time, even with the sorting work for latency and starvation.
            B. Clock speeds are set based on high-priority tasks only:
            1 GHz + 0 GHz (the low-priority request is ignored) = 1 GHz. This speed is insufficient to complete both tasks on time, although the high-priority task could still finish on time if the low-priority task completely yields to it.
            And if it does totally yield, the low-priority task will eventually complete as long as some CPU stays awake with some budget for it, and CPUs usually stay awake at some low speed as long as there are any tasks to complete, even under this governor configuration (AFAIK).

            I suspect that with the EEVDF scheduler, under scenario B above, the low-priority task will not totally yield, so both tasks will not complete on time.

            I know that's a very niche configuration, just thinking out loud.
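
            The A/B scenarios above can be sketched as a toy model. This is purely illustrative (hypothetical numbers, not actual cpufreq or scheduler code); the only point is how ignoring low-priority requests halves the selected speed while EEVDF still interleaves both tasks.

```python
# Toy model of the two governor scenarios above; not kernel code.
# Each task is a (priority, extra GHz the governor thinks it needs) pair.

def required_speed(tasks, high_priority_only=False):
    """Sum the requested extra clock speed, optionally ignoring
    low-priority tasks (the Conservative-style setting discussed)."""
    return sum(ghz for priority, ghz in tasks
               if not high_priority_only or priority == "high")

tasks = [("high", 1.0), ("low", 1.0)]

speed_a = required_speed(tasks)                           # scenario A: 2.0 GHz
speed_b = required_speed(tasks, high_priority_only=True)  # scenario B: 1.0 GHz

# If EEVDF interleaves both tasks rather than letting the low-priority
# one fully yield, scenario B's 1.0 GHz serves a combined 2.0 GHz demand,
# so neither deadline is guaranteed.
print(speed_a, speed_b)
```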



            • #7
              Could be just me, but wasn't part of the point of the change that the old code had become an unmaintainable mess of shortcuts and whatnot?



              • #8
                Originally posted by geearf View Post

                Unless you read it the other way, with direct being the perfect option, fast being the 2nd and so on.
                No, direct is the standard path and fast is the new implementation. But what are parity and slow?



                • #9
                  Originally posted by varikonniemi View Post
                  I know all of that, what i was asking if i read correctly how they described the results in that table. What are the other results? Slow and parity cannot be in the same league because they are on another order of magnitude.
                  Note that the values in each row add up to 100: they are percentages.

                  What I think it means is that (taking netperf as an example) 24.18% of scheduling decisions are now made using the fast path, where previously it would have been 0% (since the fast path didn't exist). Because fast is listed as the third column, just before slow, that seems to indicate that previously the slow path was used in those cases.

                  So it can be read as: "Previously 25.32% of scheduling decisions took the slow path, but now only 1.14% use the slow path, and so assuming that the fast path is faster than the slow path, that means there is a speedup."

                  If on the other hand the fast path stayed mostly unused and the slow path was still used almost as often as it was before, then it would not be an optimisation because the time spent checking if the fast path could be used would be longer than the time saved by actually using the fast path. (It would also be a problem if the change reduced the quality of scheduling decisions, but I cannot tell whether or how much that is impacted.)
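
                  As a rough numerical sketch of that reading: the netperf percentages below come from the table (with direct derived as 100 minus fast minus slow, since rows sum to 100), but the per-path costs are entirely made up; the only assumption that matters is that fast is cheaper than slow.

```python
# Toy model of the table reading above; not kernel code.
# Per-path costs are invented (arbitrary units); the only assumption
# that matters is that fast costs less than slow.
PATH_COST = {"direct": 1.0, "fast": 2.0, "slow": 10.0}

def mean_cost(distribution):
    """Average cost per scheduling decision, given the percentage of
    decisions that take each path."""
    return sum(PATH_COST[path] * pct / 100.0
               for path, pct in distribution.items())

# netperf row, before and after the fast path existed
# (direct = 100 - fast - slow in both cases).
before = {"direct": 74.68, "fast": 0.0,   "slow": 25.32}
after  = {"direct": 74.68, "fast": 24.18, "slow": 1.14}

print(mean_cost(before))  # higher average cost per decision
print(mean_cost(after))   # lower average cost, i.e. a speedup,
                          # under these invented per-path costs
```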



                  • #10
                    Originally posted by archsway View Post

                    Note that the values in each row add up to 100: they are percentages.

                    What I think it means is that (taking netperf as an example) 24.18% of scheduling decisions are now made using the fast path, where previously it would have been 0% (since the fast path didn't exist). Because fast is listed as the third column, just before slow, that seems to indicate that previously the slow path was used in those cases.

                    So it can be read as: "Previously 25.32% of scheduling decisions took the slow path, but now only 1.14% use the slow path, and so assuming that the fast path is faster than the slow path, that means there is a speedup."

                    If on the other hand the fast path stayed mostly unused and the slow path was still used almost as often as it was before, then it would not be an optimisation because the time spent checking if the fast path could be used would be longer than the time saved by actually using the fast path. (It would also be a problem if the change reduced the quality of scheduling decisions, but I cannot tell whether or how much that is impacted.)
                    Thanks, I think you are right.
                    Last edited by varikonniemi; 09 January 2024, 06:23 AM.

