New EEVDF Linux Scheduler Patches Make It Functionally "Complete"


  • #21
    Originally posted by uid313
    I use the Ubuntu kernel so I don't know whether it has MGLRU activated or not. I do not have a swap. Running swapon --show outputs nothing.
    The Ubuntu kernel has MGLRU enabled by default starting with Mantic 23.10; see https://bugs.launchpad.net/bugs/2023629.

    You can also double-check via sysfs in /sys/kernel/mm/lru_gen/enabled.
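
    If you prefer checking it programmatically, a minimal C sketch that reads that sysfs file could look like this (the file holds a hex feature bitmask, and its absence means the kernel was built without MGLRU):

    ```c
    /* Minimal sketch: read the MGLRU state from sysfs. The file contains a
     * hex bitmask (e.g. "0x0007"); nonzero means MGLRU is active, and a
     * missing file means the kernel was built without CONFIG_LRU_GEN. */
    #include <stdio.h>

    int main(void)
    {
        char buf[32];
        FILE *f = fopen("/sys/kernel/mm/lru_gen/enabled", "r");
        if (!f) {
            perror("fopen");
            return 1;
        }
        if (fgets(buf, sizeof(buf), f))
            printf("lru_gen enabled mask: %s", buf);
        fclose(f);
        return 0;
    }
    ```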



    • #22
      Originally posted by arighi

      The Ubuntu kernel has MGLRU enabled by default starting with Mantic 23.10; see https://bugs.launchpad.net/bugs/2023629.

      You can also double-check via sysfs in /sys/kernel/mm/lru_gen/enabled.
      It is not really something Ubuntu enables; it depends solely on the kernel version. MGLRU was merged in kernel 6.1 and is enabled by default in Ubuntu's 6.5 kernel as well (for some reason Ubuntu had disabled it earlier, even though it is enabled by default in the mainline kernel). Older Ubuntu releases may ship older kernels, but any recent mainline kernel can easily be installed from the .deb packages available at: https://kernel.ubuntu.com/mainline/?C=N;O=D
      Last edited by Jakobson; 06 April 2024, 11:43 AM.



      • #23
        Originally posted by uid313
        Great, but will my system still freeze under heavy load?

        Under heavy load my system freezes to the point that I can't move my mouse cursor, I cannot switch to another virtual terminal, and I cannot kill the offending process. All I can do is wait or REISUB.
        You might be able to improve the responsiveness of your system with my project, Simple Slices. It enables priority control in several places where it's disabled by default, and can be used to run certain applications or commands with higher or lower priorities. If you install the simple-slices-preset-desktop package, graphical applications will be given preferential access to system resources by default. (Note: reboot your system after installing any of the packages!)
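
        To give a flavor of the idea, here is a minimal sketch (not the project's actual code, just an illustration of running a command at a different CPU priority):

        ```c
        /* Minimal sketch (not Simple Slices' actual code): run a command at
         * a different CPU priority. Raising priority (a negative nice value)
         * requires CAP_SYS_NICE or a permissive RLIMIT_NICE - the kind of
         * knob a preset package can open up. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/resource.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            if (argc < 3) {
                fprintf(stderr, "usage: %s <nice> <command> [args...]\n", argv[0]);
                return 2;
            }
            if (setpriority(PRIO_PROCESS, 0, atoi(argv[1])) != 0)
                perror("setpriority");   /* continue anyway, like nice(1) does */
            execvp(argv[2], &argv[2]);
            perror("execvp");
            return 1;
        }
        ```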

        I just added a "deb" make target to build installable .deb packages in the build/deb directory, so installing it is now pretty easy. I don't consider the project ready for "production" yet, but I've been using it on all of my systems for over a year. I hope I'm not becoming annoying about promoting my project, but it doesn't have a lot of visibility yet.



        • #24
          I think a lot of people here misunderstand how the scheduler and cgroups work. I started writing an explanation correcting the misunderstandings, but it got way too long. Instead I'll just give a TL;DR here and make a much longer post somewhere else in the future.

          TL;DR: Changing the scheduler is much less effective than properly configuring priority values, and if you care enough to change the scheduler you should really just configure the priority values instead. Also, ionice doesn't do anything by default: I/O priorities are only honored by the BFQ I/O scheduler, and the default mq-deadline/none schedulers ignore them.
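
          To make "configuring priority values" concrete, here is a minimal C sketch that deprioritizes a process under cgroup v2 by moving it into its own group with a low cpu.weight; the "bg" group name is made up for illustration, and this assumes root or a delegated cgroup tree:

          ```c
          /* Minimal sketch (cgroup v2): move a PID into a new "bg" cgroup
           * with a low CPU weight. cpu.weight defaults to 100; 20 gives
           * roughly a fifth of the CPU share under contention. */
          #include <stdio.h>
          #include <sys/stat.h>

          static int write_file(const char *path, const char *val)
          {
              FILE *f = fopen(path, "w");
              if (!f) { perror(path); return -1; }
              int rc = (fputs(val, f) == EOF) ? -1 : 0;
              fclose(f);
              return rc;
          }

          int main(int argc, char **argv)
          {
              if (argc != 2) {
                  fprintf(stderr, "usage: %s <pid>\n", argv[0]);
                  return 2;
              }
              mkdir("/sys/fs/cgroup/bg", 0755);                  /* illustrative name */
              write_file("/sys/fs/cgroup/bg/cpu.weight", "20");
              return write_file("/sys/fs/cgroup/bg/cgroup.procs", argv[1]) ? 1 : 0;
          }
          ```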



          • #25
            Currently running some quick kbuild tests (-j32 on a 5950X) on 6.9-rc2 with the patchset applied. Sysload is all over the place. KSystemStats gives weird CPU usage readings: it goes up to 100% per core and doesn't settle back down once idle again. Other than that, desktop performance seems to be absolutely unaffected by the kbuild. Going to slam some games at it for further testing after sleep.



            • #26
              Originally posted by freerunner
              Currently running some quick kbuild tests (-j32 on a 5950X) on 6.9-rc2 with the patchset applied. Sysload is all over the place. KSystemStats gives weird CPU usage readings: it goes up to 100% per core and doesn't settle back down once idle again. Other than that, desktop performance seems to be absolutely unaffected by the kbuild. Going to slam some games at it for further testing after sleep.
              We wait patiently.



              • #27
                This very much includes the new interface that exposes the extra parameter that EEVDF has. I've chosen to use sched_attr::sched_runtime for this
                using too short a request size will increase job preemption overhead, using too long a request size will decrease timeliness
                Is he talking about the smallest amount of time that a process can be scheduled for? If so, didn't CFS already have that with quota periods?
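
                For reference, here is a minimal sketch of what using the quoted sched_attr::sched_runtime interface might look like on a kernel with these patches applied. glibc has no sched_setattr wrapper, so the raw syscall is used; the struct layout follows the sched_setattr(2) man page, and the 3 ms value is purely illustrative:

                ```c
                /* Minimal sketch: ask for a ~3 ms slice for the current task
                 * via sched_attr::sched_runtime, assuming a kernel with these
                 * patches. If your libc already defines struct sched_attr,
                 * drop this local copy. */
                #include <stdint.h>
                #include <stdio.h>
                #include <string.h>
                #include <sched.h>           /* SCHED_OTHER */
                #include <sys/syscall.h>
                #include <unistd.h>

                struct sched_attr {
                    uint32_t size;
                    uint32_t sched_policy;
                    uint64_t sched_flags;
                    int32_t  sched_nice;
                    uint32_t sched_priority;
                    uint64_t sched_runtime;   /* with these patches: requested slice, ns */
                    uint64_t sched_deadline;
                    uint64_t sched_period;
                };

                int main(void)
                {
                    struct sched_attr attr;
                    memset(&attr, 0, sizeof(attr));
                    attr.size = sizeof(attr);
                    attr.sched_policy = SCHED_OTHER;   /* normal fair/EEVDF class */
                    attr.sched_runtime = 3000000;      /* request ~3 ms slices */

                    if (syscall(SYS_sched_setattr, 0, &attr, 0) != 0) {
                        perror("sched_setattr");       /* EINVAL on unpatched kernels */
                        return 1;
                    }
                    return 0;
                }
                ```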



                • #28
                  Originally posted by ptr1337

                  NEST is provided by the scx-scheds package (sched_ext framework), but sadly doesn't deliver particularly good results.

                  There are other interesting schedulers in scx-scheds though, mainly scx_lavd (for latency-sensitive tasks; the work is funded by Valve) and scx_rusty / scx_rustland.
                  Be aware that scx_lavd currently doesn't handle CPUs with multiple CCXs properly.

                  Michael, I think it would be worth providing some information/news about sched_ext development; they are doing great work on it, and the example schedulers like the ones mentioned above are in good shape nowadays.
                  Thanks, I was not aware that NEST used the sched_ext framework.
                  Come to think of it, I wonder whether much of what NEST is trying to achieve could be done by simply temporarily offlining CPUs that are not needed, thus avoiding spreading tasks over CPUs that may be in a low-power state.
                  That way the online CPUs would get all the load and stay relatively busy all the time, and if CPU load goes down you simply offline more CPUs again.
                  I have actually written a (crude) C implementation (look ma, no Rust!) that does exactly that just for fun, but I have never tested it scientifically.
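
                  The core operation of that approach is just a sysfs write; here is a minimal sketch of it (not my actual implementation, just the essential part):

                  ```c
                  /* Minimal sketch: take a CPU offline (0) or online (1) by
                   * writing to its sysfs hotplug file. Needs root; note that
                   * cpu0 usually cannot be offlined. A real consolidator
                   * would watch load and pick CPUs itself. */
                  #include <stdio.h>
                  #include <stdlib.h>

                  static int set_cpu_online(int cpu, int online)
                  {
                      char path[64];
                      snprintf(path, sizeof(path),
                               "/sys/devices/system/cpu/cpu%d/online", cpu);
                      FILE *f = fopen(path, "w");
                      if (!f) { perror(path); return -1; }
                      int rc = (fputc(online ? '1' : '0', f) == EOF) ? -1 : 0;
                      fclose(f);
                      return rc;
                  }

                  int main(int argc, char **argv)
                  {
                      if (argc != 3) {
                          fprintf(stderr, "usage: %s <cpu> <0|1>\n", argv[0]);
                          return 2;
                      }
                      return set_cpu_online(atoi(argv[1]), atoi(argv[2])) ? 1 : 0;
                  }
                  ```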

                  http://www.dirtcellar.net



                  • #29
                    Originally posted by waxhead

                    Thanks, I was not aware that NEST used the sched_ext framework.
                    Come to think of it, I wonder whether much of what NEST is trying to achieve could be done by simply temporarily offlining CPUs that are not needed, thus avoiding spreading tasks over CPUs that may be in a low-power state.
                    That way the online CPUs would get all the load and stay relatively busy all the time, and if CPU load goes down you simply offline more CPUs again.
                    I have actually written a (crude) C implementation (look ma, no Rust!) that does exactly that just for fun, but I have never tested it scientifically.
                    Currently the scx_nest implementation is not that good, since it sometimes falls short in benchmarks.
                    If you keep the CPU "warm" with a little load, it can be roughly on par with the others, though.

                    scx_rusty, scx_lavd and scx_rustland are the most maintained and interesting ones right now.



                    • #30
                      Originally posted by kiffmet
                      uid313 Use the BORE scheduler. It's fast and stable, while preserving responsiveness under load without hurting throughput too much. I can compile code in the background with thread oversubscription and use my computer like normal while the CPU is under full load.
                      The BORE scheduler is wonderful for desktops, but I think it's possible to configure the generic scheduler from user space to behave in a similar way. BORE was based on CFS and is now based on EEVDF with some modifications, by the way.
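
                      As an example of what such user-space tuning could look like, here is a minimal sketch that biases the stock EEVDF scheduler toward latency by shrinking its base slice via debugfs; this assumes a 6.6+ kernel with debugfs mounted, needs root, and the 1 ms value is purely illustrative:

                      ```c
                      /* Minimal sketch: shrink the EEVDF base slice to favor
                       * latency. Assumes debugfs is mounted at /sys/kernel/debug
                       * and a 6.6+ kernel where this knob exists; run as root. */
                      #include <stdio.h>

                      int main(void)
                      {
                          const char *path = "/sys/kernel/debug/sched/base_slice_ns";
                          FILE *f = fopen(path, "w");
                          if (!f) {
                              perror(path);
                              return 1;
                          }
                          fputs("1000000", f);   /* 1 ms; the default is typically a few ms */
                          fclose(f);
                          return 0;
                      }
                      ```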

