Some Distributions Are Already Making Changes To Linux's Scheduler


  • #11
    Originally posted by peppercats View Post
    there's no reason for desktop distros not to use BFS by default; it's all-around superior.
    It would be interesting to see how it fares against CFS in HPC workloads with a lot of cores. Nevertheless, the bugs mentioned in the paper ought to be fixed, preferably in mainline.

    • #12
      Does anyone know what the situation is with Manjaro?

      • #13
        Originally posted by nils_ View Post

        It would be interesting to see how it fares against CFS in HPC workloads with a lot of cores. Nevertheless, the bugs mentioned in the paper ought to be fixed, preferably in mainline.
        In HPC we typically bind processes/threads to cores with cgroup cpusets and/or sched_setaffinity().
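
        For the curious, the sched_setaffinity() part looks roughly like this (just a sketch; the core number 3 is arbitrary, and real code would pick cores based on the job's allocation):

        Code:
        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>

        int main(void)
        {
            cpu_set_t set;

            CPU_ZERO(&set);      /* start with an empty CPU mask */
            CPU_SET(3, &set);    /* allow only core #3 (arbitrary example) */

            /* pid 0 means "the calling thread" */
            if (sched_setaffinity(0, sizeof(set), &set) == -1) {
                perror("sched_setaffinity");
                return 1;
            }

            /* from here on, the kernel will only run this thread on core 3 */
            return 0;
        }

        That pins a single thread; the cgroup cpuset side handles the coarser per-job partitioning.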

        • #14
          Originally posted by jabl View Post

          In HPC we typically bind processes/threads to cores with cgroup cpusets and/or sched_setaffinity().
          Still no excuse not to fix bugs, especially since this is a workaround.

          • #15
            Originally posted by jabl View Post

            In HPC we typically bind processes/threads to cores with cgroup cpusets and/or sched_setaffinity().
            Question: Say you bind a thread to a specific CPU core, and later on, some other high-workload task comes along and also gets bound to that same core. What's the performance loss to your application/system?

            As a software developer, I've NEVER had to resort to binding threads to cores, since doing so pretty much always reduces performance. Then again, I don't develop for Linux anymore...

            • #16
              Originally posted by nils_ View Post

              Still no excuse not to fix bugs, especially since this is a workaround.
              Of course. Just saying that HPC doesn't provide a very interesting workload to trigger this particular bug.

              • #17
                Originally posted by gamerk2 View Post

                Question: Say you bind a thread to a specific CPU core, and later on, some other high-workload task comes along and also gets bound to that same core. What's the performance loss to your application/system?
                Well, it's not going to be pretty, since you're essentially forcing a situation where two CPU-heavy threads have to timeshare a single core while other cores sit idle.

                As a software developer, I've NEVER had to resort to binding threads to cores, since doing so pretty much always reduces performance. Then again, I don't develop for Linux anymore...
                Yeah, if you can't manage CPU affinity correctly, it's better not to use it at all.

                So to expand a bit on my comment in case it was unclear: a typical approach in HPC is that the batch scheduler (the piece of software that manages the queue of batch jobs and executes them on available compute nodes) sets up a cgroup cpuset for each job. Say job #1 gets cores 0-7 on node #42 (each job specifies how many cores it needs), job #2 gets cores 8-23 on node #42, and so on. The cpuset for each job then ensures that it cannot use cores allocated to another job. The jobs themselves typically launch a number of threads and/or processes matching the number of allocated cores, optionally set up CPU affinity to bind each process/thread to a core, and distribute work among those cores.
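
                To make the cpuset part concrete, here's a rough sketch of what a batch scheduler might do for job #1. This assumes the legacy cgroup-v1 cpuset controller mounted at /sys/fs/cgroup/cpuset and a single NUMA node; the cgroup name job1 and the binary my_mpi_job are made up for illustration:

                Code:
                #define _GNU_SOURCE
                #include <stdio.h>
                #include <stdlib.h>
                #include <sys/stat.h>
                #include <unistd.h>

                static void write_file(const char *path, const char *val)
                {
                    FILE *f = fopen(path, "w");
                    if (!f) { perror(path); exit(1); }
                    fputs(val, f);
                    fclose(f);
                }

                int main(void)
                {
                    /* create the per-job cpuset (needs root) */
                    if (mkdir("/sys/fs/cgroup/cpuset/job1", 0755) == -1)
                        perror("mkdir");

                    /* job #1 gets cores 0-7; a memory node must be set before adding tasks */
                    write_file("/sys/fs/cgroup/cpuset/job1/cpuset.cpus", "0-7");
                    write_file("/sys/fs/cgroup/cpuset/job1/cpuset.mems", "0");

                    /* move this process into the cpuset... */
                    char pid[32];
                    snprintf(pid, sizeof(pid), "%d", (int)getpid());
                    write_file("/sys/fs/cgroup/cpuset/job1/tasks", pid);

                    /* ...so anything exec'd from here can only use cores 0-7 */
                    execlp("my_mpi_job", "my_mpi_job", (char *)NULL);
                    perror("execlp");
                    return 1;
                }

                Within the job, each thread can then call sched_setaffinity() (or pthread_setaffinity_np()) to pin itself to one of the allocated cores, as in the snippet I posted earlier.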

                Now, for a general-purpose system, where you might launch threads to do some specific task (which may or may not be CPU-heavy), the approach above doesn't really work, and it's probably better to just let the kernel schedule the threads on the available resources.
