BFS Scheduler Benchmarks

  • Five pages of posts since I was last here and only ten of them are BFS-related?

    Anyway, I've been thinking more about a benchmark for responsiveness. Using cyclictest from the RT Linux wiki, create threads that sleep for some number of milliseconds that is not an even multiple of the tick period (1/HZ), and measure how long they take to actually wake up. Several threads would be created at different SCHED_FIFO priority levels, plus several at SCHED_ISO on BFS and at SCHED_OTHER on both schedulers. Gather the delay statistics from all the threads (including a histogram of latencies) and plot them on a 3D bar graph: the x axis is thread priority grouped by scheduling class (i.e. FIFO, RR, ISO, OTHER), the y axis is latency, and the z axis is the frequency of that latency for that thread. Each scheduler would get a graph for no load, medium load, and heavy load, giving six graphs which could be visually compared. Then the minimum, mean, and maximum latencies would be plotted for the two schedulers across the three loads, giving one more graph with three lines on it, plus a shaded stripe around the mean line indicating the standard deviation.

    I don't have time to implement this, but it would be really helpful to have something like this in PTS. Any takers? Please?
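    To make the measurement concrete, here is a minimal sketch of the wakeup-latency probe described above: sleep on an absolute deadline that isn't tick-aligned, then record how late the wakeup was. This is an illustration, not cyclictest itself; the interval, priority, and iteration count are arbitrary assumptions, and SCHED_FIFO falls back to SCHED_OTHER when the process lacks the needed privileges.

    ```c
    /* Minimal wakeup-latency probe in the spirit of cyclictest.
     * Illustrative only: constants and names are assumptions, not cyclictest's. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <time.h>
    #include <sched.h>

    #define ITERATIONS 200
    #define INTERVAL_NS 1700000L  /* 1.7 ms: deliberately not a multiple of 1/HZ */

    /* Difference a - b in nanoseconds. */
    static long long ts_diff_ns(struct timespec a, struct timespec b)
    {
        return (a.tv_sec - b.tv_sec) * 1000000000LL + (a.tv_nsec - b.tv_nsec);
    }

    int main(void)
    {
        /* Try SCHED_FIFO; without privileges, fall back to SCHED_OTHER. */
        struct sched_param sp = { .sched_priority = 10 };
        const char *policy = "SCHED_FIFO";
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            policy = "SCHED_OTHER (no privilege for FIFO)";

        struct timespec next, now;
        long long min = -1, max = 0, sum = 0;

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (int i = 0; i < ITERATIONS; i++) {
            /* Advance the absolute deadline by one interval. */
            next.tv_nsec += INTERVAL_NS;
            while (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec++;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            clock_gettime(CLOCK_MONOTONIC, &now);

            long long lat = ts_diff_ns(now, next);  /* wakeup latency */
            if (lat < 0) lat = 0;
            if (min < 0 || lat < min) min = lat;
            if (lat > max) max = lat;
            sum += lat;
        }

        printf("policy=%s min=%lldns avg=%lldns max=%lldns\n",
               policy, min, sum / ITERATIONS, max);
        return 0;
    }
    ```

    The full benchmark would run one such loop per thread (at each priority/policy) and feed the per-thread histograms to whatever plots the graphs; this sketch only shows the core sleep-and-measure step for a single thread.
    
    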


    P.S. I mostly disagree with the way I was quoted by kebabbert. I only think it would be a Good Thing if there were a single point release devoted to optimization, kind of like Snow Leopard. Instead of the usual commit window where everyone is bombarding LKML with new features and drivers, there would be a shorter release cycle where all the subsystem maintainers embark on a virtuous and heroic quest to seek out latencies and hidden bugs in their respective domains. Yes, I know it's just a romantic way of describing a code audit, but marketing works, you know? I didn't intend to suggest that bug fixing doesn't happen.

    Plus, as a kernel developer* I would like to have a subset of the kernel API that I know won't change for X years, to reduce my maintenance costs and allow me to focus on cool new ideas.

    *I'm a kernel developer in the sense that I write code that runs in the kernel, not in the sense that I participate in LKML and influence mainline.
    Last edited by unix_epoch; 10-01-2009, 07:32 PM.
