ULatencyD Enters The Linux World

  • ULatencyD Enters The Linux World

    Phoronix: ULatencyD Enters The Linux World

    Daniel Poelzleithner has announced to the Linux kernel world his new project named ulatencyd. The focus of ulatencyd is to provide a script-able daemon to dynamically adjust Linux scheduling parameters and other aspects of the Linux kernel...


  • #2
    I'm keen to see this adopted into the kernel.



    • #3
      damn, something like this should have been made long ago. maybe not as part of the kernel (though that wouldn't hurt), but still. i hope it gets tied more closely to systemd, though, or the systemd author will reimplement it in an even cleaner way (continuing his userspace alternative to the famous ~200-line "cgroup latency" patch).



      • #4
        After so many years of nobody caring about latency, suddenly we're surrounded by projects aiming to improve it. What the hell is going on?



        • #5
          so.... what exactly is different about this as compared to changing the nice level of programs? i don't understand what the benefit of this is



          • #6
            Originally posted by devius View Post
            After so many years of nobody caring about latency, suddenly we're surrounded by projects aiming to improve it. What the hell is going on?
            They got a taste of BFS and liked it



            • #7
              Why ulatencyd?

              Hi,

              the first reason nothing like this happened before is that there was simply nothing you could do. Cgroups is the first kernel interface that gives userspace enough power to control kernel behavior in a way that gives good results. In the good old days there was a renice daemon, but that can't protect you enough in rough cases.
              Heuristic analysis of the system is something that should never be in the kernel. In fact, everything that you can put in userspace without too much runtime cost should be put there.
              The reason I don't want it in init (systemd, upstart, etc.) is that init is the most important program on the system. In my opinion it should be as lean as possible; heuristics especially are something that really doesn't belong there.
              But of course I agree that a good interface between the init daemon and ulatencyd would be beneficial. I just haven't implemented a D-Bus interface yet, and have no good ideas yet for how it should look.

              About systemd: I'm a little unsure about their use of cgroups; the main purpose seems to be making sure they can kill a daemon completely, which seems a little awkward to me.

              BTW: I was able to write a rule in one evening that protects the computer from the swap of death, at least when a single process is eating all your memory. For the case where a group of small processes is dragging you down it does not work yet, but rules for that are in the pipeline :-)
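A rule like that presumably comes down to two steps: spot the runaway process, then wall it off in a memory-limited cgroup so the kernel reclaims from it instead of swapping everything else out. A rough Python sketch of the detection half (the function names and the /proc-style parsing are illustrative assumptions, not ulatencyd's actual rule API, which is script-based):

```python
# Sketch: find the process eating the most memory -- the first step of a
# "swap of death" rule. Works against /proc/<pid>/status-style text.

def rss_kb(status_text):
    """Extract VmRSS (resident set size, in kB) from status-file text."""
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1])
    return 0  # kernel threads and zombies have no VmRSS field

def biggest_consumer(procs):
    """procs: dict mapping pid -> status text. Return (pid, rss_kb) of the
    largest resident-memory consumer."""
    return max(((pid, rss_kb(txt)) for pid, txt in procs.items()),
               key=lambda entry: entry[1])
```

The enforcement half would then write the offending PID into a dedicated cgroup and cap that group's memory limit, which requires root and a mounted memory controller.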



              • #8
                Originally posted by schmidtbag View Post
                so.... what exactly is different about this as compared to changing the nice level of programs? i don't understand what the benefit of this is
                The benefit is this has the potential to actually work.

                Nice level probably doesn't do what you're thinking of, if you're talking about it in the context of user-visible latency. Adjusting timeslice lengths to be longer for preferred programs creates a situation where other threads get a shorter slice. As timeslices approach zero, cache thrash and scheduling overhead approach infinity.

                Also, Linus chimes in:

                "There really isn't anything to fix. 'nice' is what it is. It's a
                simple legacy interface to scheduler priority. The fact that it's also
                almost totally useless is irrelevant. It's like male nipples. We
                wouldn't be better off lactating, and they look like some odd wart
                that doesn't do much good. But it would be worse to remove it."
                -http://article.gmane.org/gmane.linux.kernel/1071951

                "But the fundamental issue is that 'nice' is broken. It's very much broken at a conceptual and technical design angle (absolute priority levels, no fairness), but it's broken also from a psychological and practical angle (ie expecting people to manually do extra work is ridiculous and totally unrealistic)."
                -http://lwn.net/Articles/418739/
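To make the fairness point concrete: CFS maps nice levels to load weights (roughly a factor of 1.25 per step; the kernel uses a precomputed table) and hands each runnable task CPU time in proportion to its weight. A back-of-the-envelope sketch, where the 1.25 factor is an approximation of the kernel's actual table:

```python
# CFS turns nice levels into weights; a task's CPU share is its weight
# divided by the total weight of all runnable tasks. Every runnable task
# still gets scheduled each period, which is why nice redistributes
# throughput but doesn't directly bound latency.

def cfs_weight(nice):
    # nice 0 maps to 1024; each step changes the weight by roughly 1.25x
    # (approximation of the kernel's nice-to-weight table)
    return 1024 / (1.25 ** nice)

def shares(nice_levels):
    """Relative CPU share of each task, given its nice level."""
    weights = [cfs_weight(n) for n in nice_levels]
    total = sum(weights)
    return [w / total for w in weights]
```

For a nice-0 task running against a nice-19 task, `shares([0, 19])` gives the nice-0 task roughly 98-99% of the CPU, yet both tasks still run every scheduling period.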



                • #9
                  Originally posted by Wyatt View Post
                  The benefit is this has the potential to actually work. [...]
                  lmao, that metaphor linus used was genius. but really, if nice is THAT useless, why is it there? personally, i've used nice before and it works GREAT for me. for example, there was a year where i used screensavers as my background. screensavers can be somewhat cpu intensive, so i set the nice level to 19 (or maybe it was -19? i forget at this point). then, whenever another program demanded cpu power, the screensaver would get really choppy and unresponsive while the program had little to no slowdown at all.

                  based on my experience, nice isn't broken at all; it works great. that's why this new thing is confusing to me: if i were to use it in my example, i don't see how anything would change at all.
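For what it's worth, one screensaver against one foreground program is exactly the case where nice does behave. The trouble starts with groups of processes, because nice weights are per-process while cgroup shares apply to a whole group. A back-of-the-envelope sketch, using the same ~1.25-per-step approximation of CFS's nice-to-weight mapping:

```python
def cfs_weight(nice):
    # approximation of CFS's nice-to-weight mapping (1024 at nice 0)
    return 1024 / (1.25 ** nice)

# One interactive task at nice 0 vs. 32 background tasks at nice 19
# (think "make -j32"): nice weights add up per process, so the pile of
# background tasks erodes the foreground task's share.
fg = cfs_weight(0)
bg_total = 32 * cfs_weight(19)
share_nice = fg / (fg + bg_total)

# With cgroups, the 32 background tasks share ONE group weight, so the
# foreground group keeps about half the CPU no matter how many processes
# the other side spawns.
share_cgroup = 1024 / (1024 + 1024)
```

With those numbers the niced background pile still claws back roughly a third of the CPU, while the cgroup split stays at 50% regardless of process count; that kind of automatic grouping is what a daemon like ulatencyd can provide and plain nice cannot.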



                  • #10
                    Originally posted by poelzi View Post
                    BTW: I was able to write a rule in one evening that protects the computer from the swap of death, at least when a single process is eating all your memory. For the case where a group of small processes is dragging you down it does not work yet, but rules for that are in the pipeline :-)
                    Wow... sounds great! Can you tell us how long until we could try this?
