ULatencyD Enters The Linux World

  • devius
    replied
    Originally posted by poelzi View Post
    Windows does not have better scheduling at all. The problem with the Linux scheduler is that it is just too fair when not adjusted. The Windows scheduler is just unfair in general, at least up to Vista. Fair in general is better, if it can be adjusted to what the user expects :-)
    I have to agree with Yfrwlf on this one. Even Windows 2000 felt "snappier" (that seems to be a popular word these days) than recent Linux (at least up to 2.6.34). Just the fact that, when something heavy is going on in the background (high disk I/O, high CPU load), it takes a few seconds from the moment you click on something until it actually responds speaks volumes about the responsiveness problem. This is something that Apple got right: it must feel fast (even if it isn't).



  • poelzi
    replied
    Originally posted by Yfrwlf View Post
    All I know is that certain I/O tasks in particular get at least equal treatment, if not (it sometimes seems) top priority, and bring the UI to a screeching halt in certain scenarios. At first I thought the same: that if I simply adjusted the niceness level of, say, all X.org operations above that of other operations like compiling or copying or whatever, it would resolve that problem.
    I/O scheduling is a different matter. One problem I noticed is too little file cache in memory. You can even get the feeling of the swap of death without any swap :-)

    For really good I/O scheduling, Linux 2.6.36 got a cgroup blkio subsystem.
    Unfortunately it does not support deep hierarchies, so I need to run my own flat mapping there.
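
    A minimal sketch of what such a flat mapping could look like, assuming a cgroup-v1 blkio hierarchy mounted at /sys/fs/cgroup/blkio (the group names, weights and pids are invented for illustration):

        import os

        BLKIO_ROOT = "/sys/fs/cgroup/blkio"  # assumed cgroup-v1 mount point

        def put_in_blkio_group(pid, group, weight):
            """Create a flat (non-nested) blkio group, set its proportional
            I/O weight (roughly 100-1000 under CFQ) and move the pid into it."""
            path = os.path.join(BLKIO_ROOT, group)
            os.makedirs(path, exist_ok=True)
            with open(os.path.join(path, "blkio.weight"), "w") as f:
                f.write(str(weight))
            with open(os.path.join(path, "tasks"), "w") as f:
                f.write(str(pid))

        # e.g. favour the interactive session over a background copy job
        put_in_blkio_group(1234, "ui", 900)          # pids are examples
        put_in_blkio_group(5678, "background", 100)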


    Originally posted by Yfrwlf View Post
    However, Linus' comment was that niceness levels are absolute, and you may want things to be more dynamic and fine-grained than that. For instance, if Xorg started being mean, you wouldn't want all your other processes to be destroyed. That's my understanding of it, anyway.
    If X dies, it dies and you are screwed; nothing can stop that. What can be done is to make sure X does not get killed. I haven't added adjustment of the oom_adj value yet, but the API is already there. I plan to adjust the OOM flags of important tasks so they are unlikely to be killed when everything else is lost :-)
    But I hope the OOM killer will never fire at all.
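
    The kernel side of that API is just a file under /proc; a minimal sketch of adjusting it directly (the pid and value here are examples):

        def protect_from_oom(pid, adj=-17):
            """Make a task an unlikely OOM-killer victim via the old /proc
            interface: oom_adj ranges from -17 (never kill) to +15 (kill
            first). Newer kernels use oom_score_adj (-1000..1000) instead."""
            with open("/proc/%d/oom_adj" % pid, "w") as f:
                f.write(str(adj))

        protect_from_oom(1234)  # e.g. shield the X server; pid is an example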


    Originally posted by Yfrwlf View Post

    Also, I think this issue is more complex than just the scheduling of jobs; there seem to have been a lot of improvements in actually making things run in parallel. In other words, I think the multitasking ability of Linux just got a whole lot better with the recent work that has gone into it. Being able to give GUI responsiveness and user interaction in general a higher priority is just one of the improvements.

    Seriously, before all this, Linux had much worse multitasking capabilities than Windows 2000 did. Very happy to see this getting fixed.
    Windows does not have better scheduling at all. The problem with the Linux scheduler is that it is just too fair when not adjusted. The Windows scheduler is just unfair in general, at least up to Vista. Fair in general is better, if it can be adjusted to what the user expects :-)



  • poelzi
    replied
    Originally posted by Larven View Post
    Wow... sounds great! Can you tell us how long until we can try this?
    You can already run it. Feedback is very much welcome :-)
    Especially use cases that slow your computer down to the point of not being nice to use ;-)



  • Yfrwlf
    replied
    Originally posted by schmidtbag View Post
    lmao, that metaphor linus used was genius. but really, if nice is THAT useless, why is it there? personally, i've used nice before and it works GREAT for me. for example, there was a year where i used screensavers for my background. screensavers can be somewhat cpu intensive, so i set the nice level to 19 (or maybe it was -19? i forget at this point). then, whenever another program demanded cpu power, the screensaver would get really choppy and unresponsive while the program had little to no slowdown at all.

    based on my experience, nice isn't broken at all, it works great. that's why this new thing is confusing to me, because if i were to use it in my example, i don't see how anything would change at all.
    All I know is that certain I/O tasks in particular get at least equal treatment, if not (it sometimes seems) top priority, and bring the UI to a screeching halt in certain scenarios. At first I thought the same: that if I simply adjusted the niceness level of, say, all X.org operations above that of other operations like compiling or copying or whatever, it would resolve that problem.

    However, Linus' comment was that niceness levels are absolute, and you may want things to be more dynamic and fine-grained than that. For instance, if Xorg started being mean, you wouldn't want all your other processes to be destroyed. That's my understanding of it, anyway.

    Also, I think this issue is more complex than just the scheduling of jobs; there seem to have been a lot of improvements in actually making things run in parallel. In other words, I think the multitasking ability of Linux just got a whole lot better with the recent work that has gone into it. Being able to give GUI responsiveness and user interaction in general a higher priority is just one of the improvements.

    Seriously, before all this, Linux had much worse multitasking capabilities than Windows 2000 did. Very happy to see this getting fixed.



  • Larven
    replied
    Originally posted by poelzi View Post
    BTW: I was able to write a rule in one evening that protects the computer from the swap of death, at least when a single process is eating all your memory. For the case where a group of small processes is tearing you down it does not work yet, but rules for that are in the pipeline :-)
    Wow... sounds great! Can you tell us how long until we can try this?



  • schmidtbag
    replied
    Originally posted by Wyatt View Post
    The benefit is this has the potential to actually work.

    Nice level probably doesn't do what you're thinking of, if you're talking about it in the context of user-visible latency. Adjusting timeslice lengths to be longer for preferred programs creates a situation where other threads get a shorter slice. As timeslices approach zero, cache thrash and scheduling overhead approach infinity.

    Also, Linus chimes in:

    "There really isn't anything to fix. 'nice' is what it is. It's a
    simple legacy interface to scheduler priority. The fact that it's also
    almost totally useless is irrelevant. It's like male nipples. We
    wouldn't be better off lactating, and they look like some odd wart
    that doesn't do much good. But it would be worse to remove it."
    -http://article.gmane.org/gmane.linux.kernel/1071951

    "But the fundamental issue is that 'nice' is broken. It's very much broken at a conceptual and technical design angle (absolute priority levels, no fairness), but it's broken also from a psychological and practical angle (ie expecting people to manually do extra work is ridiculous and totally unrealistic)."
    - http://lwn.net/Articles/418739/
    lmao, that metaphor linus used was genius. but really, if nice is THAT useless, why is it there? personally, i've used nice before and it works GREAT for me. for example, there was a year where i used screensavers for my background. screensavers can be somewhat cpu intensive, so i set the nice level to 19 (or maybe it was -19? i forget at this point). then, whenever another program demanded cpu power, the screensaver would get really choppy and unresponsive while the program had little to no slowdown at all.

    based on my experience, nice isn't broken at all, it works great. that's why this new thing is confusing to me, because if i were to use it in my example, i don't see how anything would change at all.
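
    For what it's worth, that experiment is easy to reproduce; a rough sketch with a plain CPU hog standing in for the screensaver (nice 19 is the lowest priority; -19 would be near the highest and needs root):

        import os
        import subprocess

        # Start a CPU-bound child at nice 19; it yields the CPU as soon as
        # any normal-priority program wants it, just as described above.
        hog = subprocess.Popen(
            ["python3", "-c", "while True: pass"],
            preexec_fn=lambda: os.nice(19),  # runs in the child before exec
        )
        print("CPU hog running at nice 19, pid", hog.pid)
        # Note that nice only arbitrates CPU time, which is why it does not
        # help with the I/O-induced stalls discussed elsewhere in the thread.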



  • Wyatt
    replied
    Originally posted by schmidtbag View Post
    so.... what exactly is different about this as compared to changing the nice level of programs? i don't understand what the benefit of this is
    The benefit is this has the potential to actually work.

    Nice level probably doesn't do what you're thinking of, if you're talking about it in the context of user-visible latency. Adjusting timeslice lengths to be longer for preferred programs creates a situation where other threads get a shorter slice. As timeslices approach zero, cache thrash and scheduling overhead approach infinity.

    Also, Linus chimes in:

    "There really isn't anything to fix. 'nice' is what it is. It's a
    simple legacy interface to scheduler priority. The fact that it's also
    almost totally useless is irrelevant. It's like male nipples. We
    wouldn't be better off lactating, and they look like some odd wart
    that doesn't do much good. But it would be worse to remove it."
    -http://article.gmane.org/gmane.linux.kernel/1071951

    "But the fundamental issue is that 'nice' is broken. It's very much broken at a conceptual and technical design angle (absolute priority levels, no fairness), but it's broken also from a psychological and practical angle (ie expecting people to manually do extra work is ridiculous and totally unrealistic)."
    - http://lwn.net/Articles/418739/



  • poelzi
    replied
    why ulatencyd

    Hi,

    The first reason nothing like this happened before is that there was simply nothing you could do. Cgroups is the first kernel interface that gives userspace enough power to control kernel behavior in a way that gives good results. In the good old days there was a renice daemon, but that can't protect you enough in rough cases.
    Heuristic analysis of the system is something that should never live in the kernel. In fact, everything that you can put in userspace without too much runtime cost should be put there.
    The reason I don't want it in init (systemd, upstart, etc.) is that init is the most important program on the system. In my opinion it should be as lean as possible, and heuristics especially are something that really doesn't belong there.
    But of course, I agree that a good interface between the init daemon and ulatencyd would be beneficial. I just haven't implemented a D-Bus interface yet, and I have no good ideas about how it should look.

    About systemd: I'm a little unsure about their use of cgroups; the main purpose seems to be making sure they can kill a daemon completely, which seems a little awkward to me.

    BTW: I was able to write a rule in one evening that protects the computer from the swap of death, at least when a single process is eating all your memory. For the case where a group of small processes is tearing you down it does not work yet, but rules for that are in the pipeline :-)
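
    The actual rules in ulatencyd are Lua scripts; purely to illustrate the idea, here is a rough Python sketch of such a heuristic on cgroup v1, with the paths, group name and limit all invented: find the process with the largest resident set and confine it to a memory-capped group, so it swaps alone instead of dragging the whole machine down.

        import os

        MEMCG_ROOT = "/sys/fs/cgroup/memory"  # assumed cgroup-v1 mount point

        def rss_kb(pid):
            """Resident set size in kB, read from /proc/<pid>/status."""
            try:
                with open("/proc/%d/status" % pid) as f:
                    for line in f:
                        if line.startswith("VmRSS:"):
                            return int(line.split()[1])
            except OSError:  # the process may have exited meanwhile
                pass
            return 0

        def confine_biggest_process(limit_bytes):
            """Move the largest process into a memory-capped cgroup."""
            pids = [int(d) for d in os.listdir("/proc") if d.isdigit()]
            victim = max(pids, key=rss_kb)
            group = os.path.join(MEMCG_ROOT, "runaway")
            os.makedirs(group, exist_ok=True)
            with open(os.path.join(group, "memory.limit_in_bytes"), "w") as f:
                f.write(str(limit_bytes))
            with open(os.path.join(group, "tasks"), "w") as f:
                f.write(str(victim))

        confine_biggest_process(1 << 30)  # cap the worst offender at 1 GiB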



  • RealNC
    replied
    Originally posted by devius View Post
    After so many years of nobody caring about latency, suddenly we're surrounded by projects aiming to improve it. What the hell is going on?
    They got a taste of BFS and liked it



  • schmidtbag
    replied
    so.... what exactly is different about this as compared to changing the nice level of programs? i don't understand what the benefit of this is

