BFS Scheduler Benchmarks
Originally posted by Apopas:
Well, is there any way to check that the BFS scheduler has indeed been applied?
I don't see any difference in my system in comparison with the previous kernel.
No boosts, no slowdowns, no hangs, nothing... while I expected serious problems since I use reiserfs which causes problems with BFS according to Kolivas.
[link to a gmane.org mailing-list thread]
Then your machine is not affected by the CFS interactivity problems.
Originally posted by RealNC:
Well, if you had no interactivity problems before, then there's nothing to "improve" in the first place. If you don't have problems like those described here:
[link to a gmane.org mailing-list thread]
Then your machine is not affected by the CFS interactivity problems.
Five pages of posts since I was last here and only ten of them are BFS-related?
Anyway, I've been thinking more about a benchmark for responsiveness. Using cyclictest from the RT Linux Wiki, create threads that sleep for some number of milliseconds that is not an even multiple of HZ, and measure how long it takes for them to actually wake up. Several threads would run at different SCHED_FIFO priority levels, plus several at SCHED_ISO on BFS and SCHED_OTHER on both schedulers.

Gather the delay statistics from all the threads (including a histogram of latencies) and plot them on a 3D bar graph: x axis, thread priority grouped by scheduling class (i.e. FIFO, RR, ISO, OTHER); y axis, latency; z axis, frequency of that latency for that thread. Each scheduler would get a graph for no load, medium load, and heavy load, resulting in six graphs which could be visually compared. Then the minimum, mean, maximum, and standard deviation of latencies would be plotted for the two schedulers and three loads, giving another graph with three lines on it and a shaded stripe indicating one standard deviation around the mean line.
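The core measurement loop above can be sketched in a few lines. This is a minimal illustration, not the proposed benchmark: it uses SCHED_OTHER only (the real-time classes need root), an assumed 7.3 ms interval as the "not an even multiple of HZ" sleep, and a plain text summary in place of the 3D plots and load scenarios.

```python
# Sketch: measure how late repeated short sleeps actually wake up.
# Assumptions: 7.3 ms interval, 4 threads, SCHED_OTHER, no load generator.
import statistics
import threading
import time

SLEEP_S = 0.0073          # 7.3 ms: deliberately not a multiple of 1/HZ
SAMPLES = 200             # wakeups measured per thread
NUM_THREADS = 4

def measure(latencies_us):
    """Sleep repeatedly and record how late each wakeup arrives, in us."""
    for _ in range(SAMPLES):
        start = time.monotonic()
        time.sleep(SLEEP_S)
        actual = time.monotonic() - start
        latencies_us.append((actual - SLEEP_S) * 1e6)

results = [[] for _ in range(NUM_THREADS)]
threads = [threading.Thread(target=measure, args=(r,)) for r in results]
for t in threads:
    t.start()
for t in threads:
    t.join()

all_lat = [x for r in results for x in r]
print(f"min   {min(all_lat):8.1f} us")
print(f"mean  {statistics.mean(all_lat):8.1f} us")
print(f"max   {max(all_lat):8.1f} us")
print(f"stdev {statistics.stdev(all_lat):8.1f} us")
```

A real harness along these lines would additionally pin the scheduling class per thread (sched_setscheduler on Linux), bucket the per-thread latencies into histograms, and repeat the run under the different load levels before plotting.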
I don't have time to implement this, but it would be really helpful to have something like this in PTS. Any takers? Please?
P.S. I mostly disagree with the way I was quoted by kebabbert. I only think it would be a Good Thing if there were a single point release devoted to optimization, kind of like Snow Leopard. Instead of the usual merge window where everyone bombards LKML with new features and drivers, there would be a shorter release cycle where all the subsystem maintainers engage in a virtuous and heroic quest to seek out latencies and hidden bugs in their respective domains. Yes, I know it's just a romantic way of describing a code audit, but marketing works, you know? I didn't intend to suggest that bug fixing doesn't happen.
Plus, as a kernel developer* I would like to have a subset of the kernel API that I know won't change for X years, to reduce my maintenance costs and allow me to focus on cool new ideas.
*I'm a kernel developer in the sense that I write code that runs in the kernel, not in the sense that I participate in LKML and influence mainline.
Last edited by unix_epoch; 01 October 2009, 07:32 PM.