New EEVDF Linux Scheduler Patches Make It Functionally "Complete"


  • kiffmet
    replied
    Originally posted by Volta View Post

    BORE scheduler is wonderful for desktops, but I think it's possible to configure the generic scheduler from user space to behave in a similar way. BORE was based on CFS and is now EEVDF with some modifications, btw.
    Ever since BORE was rebased on top of EEVDF it has become really, really good. I'd even dare say almost perfect on my 5900X. PRJC PDS/BMQ, MuQSS and CacULE all had issues with either 0.1% FPS lows, frametime consistency or system responsiveness under full load the last time I tested them.



  • geearf
    replied
    Originally posted by Britoid View Post
    I know there are third-party implementations that do it, but imho it should be built into desktops that they somehow inform the scheduler of the currently focused application so that it can receive priority.
    I don't think it should only be the focused application, but anything running that the user experiences, e.g. a movie playing in the background may not be focused but may still be heard, or even be visible if the focused application is see-through.
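
    Mechanically there isn't much to it once something (the compositor, the media framework) knows which PIDs to favour; it boils down to a setpriority(2) call per process. A rough sketch, assuming the PID is already known and the caller is allowed to lower nice values (the PID and helper name below are just placeholders):

    #include <stdio.h>
    #include <sys/resource.h>
    #include <sys/types.h>

    /* Bump the nice value of a process the user is currently experiencing
     * (focused window, media player, ...). Negative values need CAP_SYS_NICE
     * or a permissive RLIMIT_NICE. */
    static int favour_pid(pid_t pid, int nice_value)
    {
        if (setpriority(PRIO_PROCESS, pid, nice_value) != 0) {
            perror("setpriority");
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        return favour_pid(1234 /* hypothetical focused/audible PID */, -5);
    }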



  • Volta
    replied
    ATLief

    Exactly. I think distributions should do a better job of providing such setups out of the box.



  • Volta
    replied
    Originally posted by kiffmet View Post
    uid313 Use the BORE scheduler. It's fast and stable, while preserving responsiveness under load without hurting throughput too much. I can compile code in the background with thread oversubscription and use my computer like normal while the CPU is under full load.
    BORE scheduler is wonderful for desktops, but I think it's possible to configure the generic scheduler from user space to behave in a similar way. BORE was based on CFS and is now EEVDF with some modifications, btw.
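
    For example, simply running the background build under SCHED_BATCH already tells the stock scheduler that the work is latency-insensitive. A rough, untested sketch of a tiny wrapper (this is just a user-space approximation of the idea, not anything BORE itself does; the wrapper name is made up):

    #define _GNU_SOURCE           /* for SCHED_BATCH in <sched.h> */
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Usage: ./batchify make -j32
     * Marks the process as batch work and execs the build, so the interactive
     * foreground keeps its responsiveness while the CPU is under full load. */
    int main(int argc, char **argv)
    {
        struct sched_param sp = { .sched_priority = 0 }; /* must be 0 for SCHED_BATCH */

        if (argc < 2) {
            fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
            return 1;
        }
        if (sched_setscheduler(0, SCHED_BATCH, &sp) != 0)
            perror("sched_setscheduler"); /* non-fatal: just run with the default policy */

        execvp(argv[1], &argv[1]);
        perror("execvp");
        return 127;
    }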



  • ptr1337
    replied
    Originally posted by waxhead View Post

    Thanks, I was not aware that NEST used the sched_ext framework.
    Come to think of it, I wonder whether much of what NEST is trying to achieve could be done by simply temporarily offlining CPUs that are not needed, thus avoiding spreading tasks over CPUs that may be in a low-power state.
    That way the online CPUs would get all the load and stay relatively busy all the time, and if CPU load goes down you simply offline more CPUs again.
    I have actually written a (crude) C implementation (look ma, no Rust!) that does exactly that, just for fun, but I have never tested it scientifically.
    Currently the scx_nest implementation is not "that good", since it sometimes falls short in benchmarks.
    If you keep the CPUs "warm" with a little load, it can be roughly on par with the others, though.

    scx_rusty, scx_lavd and scx_rustland are the most maintained and interesting ones right now.



  • waxhead
    replied
    Originally posted by ptr1337 View Post

    NEST is provided by scx-scheds (the sched_ext framework), but sadly it doesn't deliver particularly good results.

    There are other interesting schedulers in scx-scheds though, mainly scx_lavd (for latency-sensitive tasks; the work is funded by Valve) and scx_rusty / scx_rustland.
    Be aware that scx_lavd currently doesn't properly handle CPUs with multiple CCXs.

    Michael I think it would be worth providing some news about sched_ext development; they are doing really great work on it, and example schedulers like the ones mentioned above are in good shape nowadays.
    Thanks, I was not aware that NEST used the sched_ext framework.
    Come to think of it, I wonder whether much of what NEST is trying to achieve could be done by simply temporarily offlining CPUs that are not needed, thus avoiding spreading tasks over CPUs that may be in a low-power state.
    That way the online CPUs would get all the load and stay relatively busy all the time, and if CPU load goes down you simply offline more CPUs again.
    I have actually written a (crude) C implementation (look ma, no Rust!) that does exactly that, just for fun, but I have never tested it scientifically.
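
    In essence it boils down to writing 0 or 1 to the sysfs hotplug files; a heavily simplified sketch of that part (the logic that measures load and decides when to park or wake cores is left out, it needs root, and CPU0 usually cannot be offlined):

    #include <stdio.h>

    /* Offline (0) or online (1) a CPU via the standard sysfs hotplug interface.
     * A real tool would watch /proc/loadavg or PSI and decide dynamically. */
    static int set_cpu_online(int cpu, int online)
    {
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path), "/sys/devices/system/cpu/cpu%d/online", cpu);
        f = fopen(path, "w");
        if (!f) {
            perror(path);
            return -1;
        }
        fputs(online ? "1" : "0", f);
        fclose(f);
        return 0;
    }

    int main(void)
    {
        /* Example: park the upper half of a 16-thread CPU while load is low. */
        for (int cpu = 8; cpu < 16; cpu++)
            set_cpu_online(cpu, 0);
        return 0;
    }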



  • EphemeralEft
    replied
    "This very much includes the new interface that exposes the extra parameter that EEVDF has. I've chosen to use sched_attr::sched_runtime for this"
    "using too short a request size will increase job preemption overhead, using too long a request size will decrease timeliness"
    Is he talking about the smallest amount of time that a process can be scheduled for? If so, didn't CFS already have that with quota periods?
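
    Whatever the exact semantics, the quoted text says the knob is exposed through sched_attr::sched_runtime, which would mean setting it per task via sched_setattr(2) rather than via cgroup quota periods. A rough, untested sketch of what that might look like (glibc has no wrapper for the syscall, the SCHED_OTHER semantics here are the patchset's rather than current mainline's, and the 3 ms value is purely illustrative):

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* glibc has no sched_setattr() wrapper, so declare the struct from the
     * sched_setattr(2) man page and invoke the raw syscall. */
    struct sched_attr {
        uint32_t size;
        uint32_t sched_policy;
        uint64_t sched_flags;
        int32_t  sched_nice;
        uint32_t sched_priority;
        uint64_t sched_runtime;   /* per the patch series: the requested slice, in ns */
        uint64_t sched_deadline;
        uint64_t sched_period;
    };

    int main(void)
    {
        struct sched_attr attr = {
            .size          = sizeof(attr),
            .sched_policy  = 0,                  /* SCHED_OTHER */
            .sched_runtime = 3ULL * 1000 * 1000, /* ask for roughly a 3 ms slice */
        };

        /* pid 0 = calling thread, flags = 0 */
        if (syscall(SYS_sched_setattr, 0, &attr, 0) != 0)
            perror("sched_setattr");
        return 0;
    }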



  • Quackdoc
    replied
    Originally posted by freerunner View Post
    Currently running some quick kbuild tests (-j32 on a 5950X) on 6.9-rc2 with the patchset applied. Sysload is all over the place. Ksystemstats gives weird readings of CPU usage: it goes up to 100% per core and doesn't settle down on re-idle. Other than that, desktop performance seems to be absolutely unaffected by the kbuild. Going to slam some games at it for further testing after sleep.
    We wait patiently.



  • freerunner
    replied
    Currently running some quick kbuild tests (-j32 on a 5950X) on 6.9-rc2 with the patchset applied. Sysload is all over the place. Ksystemstats gives weird readings of CPU usage: it goes up to 100% per core and doesn't settle down on re-idle. Other than that, desktop performance seems to be absolutely unaffected by the kbuild. Going to slam some games at it for further testing after sleep.



  • ATLief
    replied
    I think a lot of people here misunderstand how the scheduler and cgroups work. I started writing an explanation correcting the misunderstandings, but it was way too long. Instead I'll just give a TL;DR here and make a much longer post somewhere else in the future.

    TL;DR: Changing the scheduler is much less effective than properly configuring priority values, and if you care enough to change the scheduler you should really just configure the priority values instead. Also, ionice doesn't do anything by default.
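
    On the ionice point: the underlying ioprio_set(2) call only has an effect when the block device is using an I/O scheduler that honours I/O priorities (BFQ); the default mq-deadline/none simply ignore them. For illustration, the raw call looks roughly like this (the constants mirror linux/ioprio.h; the best-effort class and level 7 are arbitrary choices):

    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* ioprio_set(2) has no glibc wrapper; these values mirror linux/ioprio.h. */
    #define IOPRIO_WHO_PROCESS  1
    #define IOPRIO_CLASS_BE     2
    #define IOPRIO_CLASS_SHIFT  13
    #define IOPRIO_PRIO_VALUE(cls, data) (((cls) << IOPRIO_CLASS_SHIFT) | (data))

    int main(void)
    {
        /* Drop this process to the lowest best-effort I/O priority (level 7).
         * Only matters if the device's I/O scheduler honours priorities (BFQ);
         * with mq-deadline or none this is effectively a no-op. */
        if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
                    IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 7)) != 0)
            perror("ioprio_set");
        return 0;
    }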

