I think the paper does a very good job of showing why writing a competent, let alone good, scheduler is so hard. Cache coherency, processor pipelining, interrupt latency, thermal throttling, etc.: there are many factors at work. When there were just one or two CPUs with independent caches and no HT, things were more deterministic.
Is The Linux Kernel Scheduler Worse Than People Realize?
Originally posted by hmijail View Post
OlafLostViking, the point is exactly that the reason why you have to do the pinning is because the scheduler is doing a bad job. Once the problems are fixed, the pinning should be unnecessary.
I just wanted to point out that somebody running HPC code on a supercomputer or cluster will likely not see these (nice) results, since they're very likely already using some kind of pinning (a bad scheduler being, as you said, one reason for that). So I just wanted to prevent disappointment due to misunderstandings, and we both agree that this paper seems to be very nice work. Have a nice day.
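For readers unfamiliar with the workaround being discussed: a minimal sketch of CPU pinning, assuming Linux (Python's `os.sched_setaffinity` is Linux-only). HPC launchers do the equivalent per rank so the kernel scheduler cannot migrate their threads.

```python
import os

# 0 = the calling process; query the CPUs we are currently allowed to use.
allowed = os.sched_getaffinity(0)
# Pick one CPU from the allowed set and pin ourselves to it.
target = min(allowed)
os.sched_setaffinity(0, {target})
# The affinity mask now contains only that CPU.
assert os.sched_getaffinity(0) == {target}
```

A real HPC job would pin each rank to a distinct core (or NUMA node) instead of everything to one CPU; tools like `taskset` and `numactl` wrap the same syscall.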
Originally posted by duby229 View Post
I guess the differentiation between io and process is less clear in my mind than it is in yours. And I suppose that is why I see a problem.
For in-memory loads (and keeping loads in memory is a very high priority for both the kernel and developers), the elevator never gets called.
I haven't finished reading the paper yet, but the issue is EXACTLY that the scheduler is starving runnable processes for no good reason. It's not about resources, which might cause the elevator to get involved; it's only about minimizing the wall-clock time that each job needs to run.
It's good this is coming up, because the kernel is undergoing some pretty major changes in the scheduling area (the long-awaited scheduler-directed DVFS and cpuidle). That alone is a pretty big task, but they sure as hell need to keep the issues this paper brings up in mind, so that the scheduler itself can make the best possible decisions.
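For context, the scheduler-directed DVFS work mentioned here became the schedutil cpufreq governor; whichever governor is currently driving frequency selection is visible in sysfs. A small sketch, assuming a Linux system where the cpufreq interface is exposed (it often is not inside VMs):

```python
import os

# Per-CPU cpufreq governor lives under /sys; check cpu0 as a representative.
gov_path = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"
if os.path.exists(gov_path):
    with open(gov_path) as f:
        print("cpu0 governor:", f.read().strip())
else:
    print("no cpufreq interface on this system")
```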
Originally posted by liam View Post
It's good this is coming up because the kernel is undergoing some pretty major changes in the scheduling area (the longtime coming scheduler directed dvfs and cpuidle). That alone is a pretty big task, but they sure as hell need to keep the issues in mind that this paper brings up so that the scheduler itself can make the best possible decisions.
or a shocking 137x performance difference
Originally posted by johnc View Post
Clearly what we need here is SchedulerD.
Last edited by SystemCrasher; 16 April 2016, 05:29 PM.
Originally posted by alpha_one_x86 View Post
That's why (plus the optimisations a single-CPU, non-SMP kernel allows) it's better to have 4x single-core machines than 1x quad-core: no contention problems, no balancing problems. At least that's not a problem in the server and VM world.
Originally posted by johnc View Post
Clearly what we need here is SchedulerD.
But with systemd, reinventing the wheel is never out of the question; maybe they will even replace Lua-based stateful policies with something that looks a lot like an INI config. Oh wait, they kinda have that already, assuming that everything runs as a service.