
Con Kolivas Contemplates Ending Kernel Development, Retiring MuQSS & -ck Patches


  • kiffmet
    replied
    Originally posted by RealNC View Post

    What on earth does MuQSS (a process scheduler) have to do with BFQ/PDS (an I/O scheduler)?
    Sorry, I meant BMQ and mistyped. Both BMQ and PDS are available as a single combined patch, called Project C. PDS is a process scheduler as well, btw. You can choose between the two and stock CFS via your preferred way of configuring kernel options before building.
    Last edited by kiffmet; 02 September 2021, 06:46 PM.
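
    For reference, the kernel config choice looks roughly like the fragment below. This is a sketch from memory; the exact symbol names (CONFIG_SCHED_ALT, CONFIG_SCHED_BMQ, CONFIG_SCHED_PDS) should be verified against the Kconfig shipped with the Project C patch itself before building.

    ```
    # .config fragment for a Project C kernel (sketch; verify symbol names
    # against the patchset's own Kconfig)
    CONFIG_SCHED_ALT=y             # enable the Project C scheduler framework
    CONFIG_SCHED_BMQ=y             # select BMQ...
    # CONFIG_SCHED_PDS is not set  # ...or unset BMQ and set PDS instead
    # leave CONFIG_SCHED_ALT unset entirely to keep stock CFS
    ```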



  • waxhead
    replied
    Originally posted by avem View Post
    I don't know, man. A company which has thousands of high-performance dual-socket servers with terabytes of storage (including iSCSI) may have different preferences in terms of executing I/O requests than a person with an average laptop. I might be wrong, of course.
    I don't think you are wrong. I think you are absolutely correct - they do have different preferences, but that is usually for a reason, right? So if that reason goes away, whether it is process or I/O scheduling, it becomes mostly a non-issue. The downside is that it will be less fun and less rewarding not to be able to experiment.



  • avem
    replied
    Originally posted by waxhead View Post

    Alright - the reason is that we should not have multiple process schedulers because we then duplicate a system that creates more configuration problems than it solves.
    Ideally we should not have more than one IO scheduler either and the only good reason to replace it is if there is something new that is better in every way.

    I must admit that I love to fiddle with IO schedulers and I really like the "hotswappable" policy, but really - when you think about it - it is fundamentally wrong the way I see it.
    I don't know, man. A company which has thousands of high-performance dual-socket servers with terabytes of storage (including iSCSI) may have different preferences in terms of executing I/O requests than a person with an average laptop. I might be wrong, of course.



  • waxhead
    replied
    Originally posted by avem View Post

    We already have multiple IO schedulers. Please find a reason why we can't have multiple process schedulers.
    Alright - the reason is that we should not have multiple process schedulers because we then duplicate a system that creates more configuration problems than it solves.
    Ideally we should not have more than one IO scheduler either and the only good reason to replace it is if there is something new that is better in every way.

    I must admit that I love to fiddle with IO schedulers and I really like the "hotswappable" policy, but really - when you think about it - it is fundamentally wrong the way I see it.



  • binarybanana
    replied
    Wow, that sucks. I've been using -ck for a few release cycles now and it improves latency massively. It even improves FPS in a lot of games. Maybe there are some knobs to tweak CFS to get similar results, but I'm not so sure. Patching -ck to work on 5.13 is relatively easy, but going forward that might not be the case for long; 5.14 already seems to make it difficult. I hope MuQSS at least gets maintained by others as part of XanMod and the like, or I'll have to try some of the other schedulers. CFS is noticeably worse on desktops if you value interactivity/low latency, and even throughput in some cases.



  • F.Ultra
    replied
    Originally posted by avem View Post

    It's great that everyone here quotes Torvalds exclusively but you know in any conflict there are two sides and no one has posted anything by Con Kolivas as if there's just his scheduler created out of nowhere with no one behind it. Also, it looks like folks here strongly imply Torvalds is infallible.

    I'm not skilled enough to comment on anything Torvalds says but the way I see it, you choose a scheduler on boot and that's it. Does it matter what data structures it operates with when those are not used by anything else or are they?
    No one here quotes Kolivas's responses to Torvalds's claims because he never made any; the complete discussion thread on the LKML was available at the link posted with the first quote.

    Also, no one is claiming that Torvalds is infallible. He is, though, whether you like it or not, the gatekeeper here, so if you want to know why the patches were never accepted, he is definitely the one person to go to for an explanation. And as he wrote, "And hey, you can try to prove me wrong. Code talks. So far, nobody has really ever come close." Fourteen years later, still no one has.

    And as Torvalds wrote, yes, data structures matter when it comes to the scheduler; performance will drop for everyone. Making something pluggable always has a performance cost. For I/O schedulers that is not a problem, since I/O has such a large overhead anyway.



  • polarathene
    replied
    One issue with MuQSS (and some others like BMQ, I think?) was lacking/broken cgroup support. I think it was just that CPU accounting was inaccurate, but that impacted software such as some OOM tools (at least the ones using the PSI metric, IIRC), prevented some niceness daemons from working correctly (Ananicy or something like that), and I think it possibly affected Docker and/or some disk I/O scheduler features in BFQ... mostly things that go on under the hood to provide better functionality/experience.

    MuQSS (and IIRC, BMQ) stubbed out the cgroup support with no intention of implementing it, I think because their approaches conflicted with that functionality or something. I remember reports of process monitors getting inaccurate CPU usage readings too; a brief look at my notes says this is apparently due to mixing MuQSS with full tickless kernels. It might throw off a CPU governor like schedutil too, I guess.

    For casual users those issues might not matter, and they get a positive outcome with no drawbacks, but for some dev/deployment machines that tradeoff might not be acceptable.

    ---

    I assume financial support won't motivate the work to continue; otherwise he could probably give that a shot if there's enough interest to back it.



  • linner
    replied
    All of the mainline I/O schedulers still freeze up the whole machine under heavy/large write tasks. It's been like this for decades across various hardware, and it annoys the hell out of me.

    Never tried the -ck patches so I don't know there.
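
    One knob that often helps with those stalls, independent of which I/O scheduler is in use, is capping how much dirty data the kernel buffers before forcing writeback. The sysctls vm.dirty_bytes and vm.dirty_background_bytes are the real tunables; the byte values below are illustrative only, not a recommendation:

    ```
    # /etc/sysctl.d/99-writeback.conf - sketch; tune values for your hardware
    # Start background writeback once 64 MiB of pages are dirty...
    vm.dirty_background_bytes = 67108864
    # ...and throttle writers at 256 MiB dirty, instead of the default
    # percentage-of-RAM thresholds that let huge write bursts accumulate.
    vm.dirty_bytes = 268435456
    ```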



  • indepe
    replied
    Another reason to have only one (official) process/thread scheduler is that programs whose performance depends on interaction with the scheduler will get optimized for that scheduler. So if you have one set of programs optimized for one scheduler and another set optimized for a different scheduler, you are likely to have performance problems when running both on the same scheduler, and you can't have two schedulers running at the same time. (Although you might perhaps have more options on the official scheduler insofar as that makes sense, but then that's an area where "as simple as possible" is often a good recipe.)



  • pal666
    replied
    Originally posted by avem View Post
    I'm not skilled enough to comment on anything Torvalds says
    he listed a number of valid reasons
    Originally posted by avem View Post
    but the way I see it, you choose a scheduler on boot and that's it. Does it matter what data structures it operates with when those are not used by anything else or are they?
    they at least take space (grep for "cache footprint" in the torvalds quote), and to choose one you have to go through an indirection (grep for "indirect pointers" again). and it complicates the code, which is also a downside (it's hard for people to work with complicated code)
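
    To make the indirection point concrete, here is a minimal user-space sketch (my own illustration, not actual kernel code) of what runtime-pluggable scheduling implies: an ops table of function pointers, so every decision pays for an indirect call, and every compiled-in policy's code and data stay resident even when only one is active:

    ```c
    #include <stdio.h>

    /* Illustrative sketch (not kernel code): a "pluggable" scheduler is
     * wired up as a struct of function pointers. Every hot-path decision
     * then goes through an indirect pointer rather than a direct call. */
    struct sched_ops {
        const char *name;
        int (*pick_next)(int nr_queued);  /* indirect call per decision */
    };

    /* two toy policies: pick the oldest queued task, or the newest */
    static int fifo_pick_next(int nr_queued) { return nr_queued > 0 ? 0 : -1; }
    static int lifo_pick_next(int nr_queued) { return nr_queued - 1; }

    static const struct sched_ops fifo_sched = { "fifo", fifo_pick_next };
    static const struct sched_ops lifo_sched = { "lifo", lifo_pick_next };

    /* chosen once at boot, dereferenced on every scheduling decision */
    static const struct sched_ops *active = &fifo_sched;

    int main(void)
    {
        printf("%s picks task %d\n", active->name, active->pick_next(3));
        active = &lifo_sched;
        printf("%s picks task %d\n", active->name, active->pick_next(3));
        return 0;
    }
    ```

    With three tasks queued, the fifo policy picks task 0 and the lifo policy picks task 2; the point is that both ops tables exist in memory and every call to pick_next goes through a pointer the CPU must resolve at runtime.
    
    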

