
Con Kolivas Contemplates Ending Kernel Development, Retiring MuQSS & -ck Patches


  • #11
    On my 3900X, MuQSS performs significantly worse than BMQ and PDS in terms of 7z compression throughput (single & multi), as well as game frametimes, but I am sure that there are many users who do benefit from it (e.g. Intel CPUs, single-CCX Ryzen).
    Last edited by kiffmet; 02 September 2021, 06:40 PM. Reason: Mistyped BMQ as BFQ

    Comment


    • #12
      It's always sad when somebody steps away from something interesting like this. However, I gotta say that the mainline (Arch) Linux kernel with the plain CFS scheduler has never left anything to be desired for me with 6700K/11400F/N4020 CPUs.
      Last edited by aufkrawall; 31 August 2021, 09:07 AM.

      Comment


      • #13
        I think it is sad that Con is considering dropping his patchset, but really - are BFS and/or MuQSS actually better than the mainline scheduler?! I have not been interested enough to try Con's patches myself, but from what I read years ago, benchmarking his scheduler against CFS did not show any significant improvement. And the very fact that the MuQSS scheduler exists is proof that BFS needed some adjustments.

        So if BFS/MuQSS performs worse, why use it? Well, if perceived performance is a thing, then maybe there is a future for that. Maybe with modern CPUs with more than 4 cores it would be possible to split the computer into two scheduling domains: one for throughput and one for perceived performance, e.g. running GUI/desktop interfaces on MuQSS for smoothness while number-crunching operations chug along with higher latency on other cores. That would however require changes to just about any software out there, so I don't see that happening.
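        For what it's worth, mainline CFS already exposes a coarse version of this throughput-vs-interactivity split through per-task scheduling policies, without needing a second scheduler. A minimal sketch (the echoed workloads are placeholders; `nice` is from coreutils, `chrt` from util-linux):

```shell
# Plain niceness: deprioritize a batch job so interactive tasks win CPU contention.
nice -n 19 sh -c 'echo batch job done'

# SCHED_IDLE: the job only runs when no normal-priority task wants the CPU.
# Lowering your own scheduling class needs no root privileges.
chrt --idle 0 sh -c 'echo idle-class job done'
```

        In practice you would wrap a real workload (e.g. a compression run) instead of the echo placeholders.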

        http://www.dirtcellar.net

        Comment


        • #14
          Originally posted by tomas View Post

          And you know this because...?
          Are you a contributor to the Linux kernel process scheduler? Or at least very familiar with the current codebase on a technical level? Just curious.

          This has been discussed here on Phoronix in depth. I'm not a kernel hacker, but again, from what I've heard from experienced programmers here, on Reddit, and on Hacker News, there are no technical reasons or obstacles to mainlining his code. Someone on the mailing list objected to the very idea of being able to switch process schedulers at boot (I guess it's not that difficult to do even at runtime - after all, the kernel is perfectly able to suspend and resume all running processes). That's how it ended. No other reasons were ever given.
          Last edited by avem; 31 August 2021, 09:31 AM.

          Comment


          • #15
            Originally posted by kiffmet View Post
            On my 3900X, MuQSS performs significantly worse than BFQ and PDS in terms of 7z compression throughput (single & multi), as well as game frametimes, but I am sure that there are many users who do benefit from it (e.g. Intel CPUs, single-CCX Ryzen).
            What on earth does MuQSS (a process scheduler) have to do with BFQ/PDS (I/O schedulers)?

            Comment


            • #16
              Originally posted by avem View Post
              ... from experienced programmers here, on Reddit, and on Hacker News, there are no technical reasons or obstacles to mainlining his code
              Yes, that sounds like real authorities on the matter. 😊
              Someone in the mailing list objected to the very idea of being able to switch process schedulers on boot (I guess it's not that difficult to do it even at runtime....
              That someone is, for example, Linus Torvalds:

              https://lore.kernel.org/lkml/Pine.LN...oundation.org/

              No. Really.
              I absolutely *detest* pluggable schedulers. They have a huge downside:
              they allow people to think that it's ok to make special-case schedulers.
              And I simply very fundamentally disagree.

              If you want to play with a scheduler of your own, go wild. It's easy
              (well, you'll find out that getting good results isn't, but that's a
              different thing). But actual pluggable schedulers just cause people to
              think that "oh, the scheduler performs badly under circumstance X, so
              let's tell people to use special scheduler Y for that case"....
              Also see this from Torvalds regarding choosing schedulers:

              “People who think SD was ‘perfect’ were simply ignoring reality,” Linus Torvalds began in a succinct explanation as to why he chose the CFS scheduler written by Ingo Molnar instead of the SD scheduler written by Con Kolivas. He continued, “sadly, that seemed to include Con too, which was one of the main reasons that I never [entertained] the notion of merging SD for very long at all: Con ended up arguing against people who reported problems, rather than trying to work with them.”
              So it also seems to have been a "people" issue.
              Last edited by tomas; 31 August 2021, 10:52 AM.

              Comment


              • #17
                Originally posted by tomas View Post

                Yes, that sounds like real authorities on the matter. 😊


                That someone is, for example, Linus Torvalds:

                etc...


                So, no technical reasons at all, just Linus demanding there must be a single scheduler which can be tuned/modified to accommodate all possible workloads perfectly.

                At the same time, if we had both schedulers available, it would be easier to improve the existing one, because more people could find and report corner cases. With no alternative schedulers, people don't test and don't report, and the existing scheduler doesn't get those improvements.

                Right-o!

                Comment


                • #18
                  Originally posted by avem View Post

                  So, no technical reasons at all, just Linus demanding there must be a single scheduler which can be tuned/modified to accommodate all possible workloads perfectly.

                  At the same time, if we had both schedulers available, it would be easier to improve the existing one, because more people could find and report corner cases. With no alternative schedulers, people don't test and don't report, and the existing scheduler doesn't get those improvements.

                  Right-o!
                  Incorrect. Where do you think "A decade of wasted cores" came from? Ref.: https://www.phoronix.com/scan.php?pa...-Scheduler-Bad
                  One might just as easily argue that if there is one solution and it behaves sub-optimally on hardware 'xyz' or workload 'abc', more people are likely to get fed up and do something about it.

                  http://www.dirtcellar.net

                  Comment


                  • #19
                    Originally posted by waxhead View Post

                    Incorrect. Where do you think "A decade of wasted cores" came from? Ref.: https://www.phoronix.com/scan.php?pa...-Scheduler-Bad
                    One might just as easily argue that if there is one solution and it behaves sub-optimally on hardware 'xyz' or workload 'abc', more people are likely to get fed up and do something about it.
                    We already have multiple I/O schedulers. Please find a reason why we can't have multiple process schedulers.
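                    For context, the block layer's multiple I/O schedulers are selectable per device at runtime through sysfs. A sketch assuming a device named sda (the bracketed entry marks the active scheduler; the exact set depends on kernel config):

```shell
# List available I/O schedulers for the device; typical output looks like:
#   mq-deadline kyber [bfq] none
# (guarded so this also runs on systems without such a device)
cat /sys/block/sda/queue/scheduler 2>/dev/null || true

# Extract the active (bracketed) scheduler from such a line:
line='mq-deadline kyber [bfq] none'
printf '%s\n' "$line" | grep -o '\[[^]]*\]' | tr -d '[]'

# Switching the active scheduler at runtime requires root, e.g.:
#   echo kyber > /sys/block/sda/queue/scheduler
```

                    Nothing about the switch requires a reboot, which is the point being made about process schedulers.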

                    Comment


                    • #20
                      I won't be sad about this. Tried -ck a few times, but everything felt slower than normal. Xanmod still has the best kernel IMHO.

                      Comment
