BFS Scheduler Benchmarks

  • mutlu_inek
    replied
    This test reinforces the wrong way of doing things

Many have hinted at the problem with this benchmark and with Ingo Molnar's treatment of Con's contribution: Con is not interested in pushing disk throughput or CPU performance. The point of BFS is to have a system that does not skip, lock up, or drop frames under high CPU load. Despite great advances in scheduling and ever faster CPUs, even multi-core systems, we still experience latency when several things happen at the same time. These could be cron jobs like man-db or updatedb hammering the disk, compilations running in the background, or many other uses of the CPU that should be reasonably fast but should _never_ block the user interface. So far, optimizations have aimed to make the system fast in absolute terms and fair in the sense that all processes get equal access to the CPU. What we actually need is for certain processes to take a (short) break whenever we do something with the mouse or keyboard, or while watching videos and listening to music.

Benchmarks measuring raw performance are detrimental to Con's effort, as they reinforce the way things have been done before: squeezing the last bit out of the CPU at the expense of the user experience. Being fast in absolute terms makes my experience of the system worse: I am annoyed by unresponsive UIs or dropped frames, while I am unable to tell whether a certain process took a few more seconds to complete. I am really surprised that BFS did so well in this test. Nonetheless, this speaks neither for nor against BFS, as its goal is completely unrelated.

    Leave a comment:


  • sega01
    replied
    Wow, I'm sorry.

I misunderstood the article and misremembered a few things. I was thinking of CFQ; I thought BFS was an I/O scheduler. Sorry!

    Much more interested in BFS now.

    Thanks for pointing that out, but I'm sorry for my error.

    Thanks,

    Leave a comment:


  • b15hop
    replied
I don't believe you did a proper review comparing the speed of 3D applications like games. Maybe a few more would have helped? Also, why the high resolution for Padman? It should be run at the lowest resolution to max out the CPU, not the GPU.

    Leave a comment:


  • Ant P.
    replied
Originally posted by sega01:
Nice post, but it would have been nice to see BFS compared to, say, deadline (and maybe anticipatory). I've been using deadline for ages and it has been quite good to me (it sounds like a possibly similar design to BFS, too), and I wonder how close in performance it is to BFS. But thanks for letting me know about BFS; it might be useful (I just want to know if there is much difference between deadline and BFS, though).
I don't see how you can create a meaningful benchmark comparing a process scheduler to a disk I/O scheduler.

    Leave a comment:


  • pavlinux
    replied
    --- a/kernel/kthread.c
    +++ b/kernel/kthread.c
    @@ -16,7 +16,7 @@
    #include <linux/mutex.h>
    #include <trace/events/sched.h>

    -#define KTHREAD_NICE_LEVEL (-5)
    +#define KTHREAD_NICE_LEVEL (0)


In the BFS patch, change the 0 back to -5 and benchmark again.
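
For anyone who wants to try this, one quick way to flip the value back in an already-patched tree before rebuilding — a sketch, assuming the BFS patch leaves the define in kernel/kthread.c:

```shell
# Restore KTHREAD_NICE_LEVEL from 0 back to -5 in a BFS-patched kernel tree
sed -i 's/#define KTHREAD_NICE_LEVEL (0)/#define KTHREAD_NICE_LEVEL (-5)/' kernel/kthread.c

# Verify the change took before rebuilding
grep 'KTHREAD_NICE_LEVEL' kernel/kthread.c
```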

    Leave a comment:


  • ronj
    replied
As an RT kernel user doing audio work (under Ubuntu Studio), I'm interested in lower latencies under heavy load. But like other people here, I'm wondering whether the benchmarks proposed here really evaluate the latency benefits of a new scheduler.
Solitary's suggestion seems better (and fun), and as an additional latency indicator it could be interesting to compare the number of buffer underruns between BFS and CFS for a standardized recording session.
I'll inform the ubuntustudio-devel mailing list about this article; maybe something interesting can arise.

Is there anyone here with an RT kernel able to comment on general responsiveness?
    Thanks.
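
Comparing underrun counts would not need much tooling: if jackd's output is captured to a log, the xruns it reports can simply be counted after each session. A rough sketch — the log path is an assumption, so point it at wherever your setup writes jackd's messages:

```shell
# Count buffer underruns (xruns) JACK reported during a recording session
LOG="$HOME/jackd-session.log"   # assumed path; adjust for your setup
grep -c 'xrun' "$LOG"
```

Run the same standardized session once per scheduler and compare the two counts.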

    Leave a comment:


  • kraftman
    replied
Originally posted by mibo:
    I'm also wondering how "important" these benchmarks are.

    Cheers,
    mibo
Yes, those results are very interesting, but a little strange at the same time. An Ubuntu (with BFS) vs. OS X benchmark would look fine, but the question is whether it would be a fair and meaningful comparison (maybe the scheduler is "cheating", etc.).

With fair_sleepers disabled, CFS is faster here in the Apache test.

    @eugene2k

If you read the LKML discussion of BFS, it seems that Ingo has fixed a couple of interactivity issues in CFS. One problem really seems to be fair_sleepers, so for now Ingo turned it off. I'm not sure which branch of the kernel that was, though.
AFAIK Ingo made a request to disable fair_sleepers for .32.
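
For anyone wanting to try this themselves, scheduler features can be flipped at runtime through debugfs — a sketch, noting that the exact feature name varies by kernel version (NEW_FAIR_SLEEPERS is the 2.6.31-era name, so check the list first):

```shell
# List the scheduler features your kernel exposes (requires root, debugfs mounted)
cat /sys/kernel/debug/sched_features

# Disable fair sleepers; prefix a feature name with NO_ to turn it off
# (NEW_FAIR_SLEEPERS is assumed here -- use whatever name the list above shows)
echo NO_NEW_FAIR_SLEEPERS > /sys/kernel/debug/sched_features
```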
    Last edited by kraftman; 09-14-2009, 07:54 AM.

    Leave a comment:


  • sega01
    replied
Nice post, but it would have been nice to see BFS compared to, say, deadline (and maybe anticipatory). I've been using deadline for ages and it has been quite good to me (it sounds like a possibly similar design to BFS, too), and I wonder how close in performance it is to BFS. But thanks for letting me know about BFS; it might be useful (I just want to know if there is much difference between deadline and BFS, though).

    Thanks,
    Teran

    Leave a comment:


  • mibo
    replied
    I'm also wondering how "important" these benchmarks are.
    For me as a user I want the following from the scheduler:

1. No sound or video hiccups when doing something in the background (even under heavy disk load)

    2. Good overall system responsiveness even with heavy load in the background

If my kernel compilation (the heavy load in the background) takes a few seconds longer, I don't care.

    Cheers,
    mibo

    Leave a comment:


  • Solitary
    replied
    Don't we really want to measure the input latency of the different schedulers and compare this aspect of them?

The methodology could be inspired by this article: Console Gaming: The Lag Factor - http://www.eurogamer.net/articles/di...factor-article

All we need is a mouse or keyboard that blinks when pressed (maybe binding fire to Caps Lock), a camera capable of recording video at 60 FPS, and a CRT monitor.

The process seems pretty simple: just count the frames between pressing fire and the action appearing on screen, then multiply the frame count by 16.67 milliseconds (1/60 s), which gives the total input lag in milliseconds. Or am I missing something?
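
The arithmetic is easy to script; a minimal sketch (the frame count of 5 is just an illustrative measurement, not real data):

```shell
# Convert frames counted on 60 FPS video into input lag in milliseconds
# (1000 / 60 = 16.67 ms per frame)
frames=5   # hypothetical count between the key blink and the on-screen action
awk -v f="$frames" 'BEGIN { printf "%.2f ms\n", f * 1000 / 60 }'
```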
    Last edited by Solitary; 09-14-2009, 08:31 AM. Reason: Double post, one post deleted, typos.

    Leave a comment:
