ULatencyD Enters The Linux World
-
Originally posted by Yfrwlf:
All I know is that certain I/O tasks in particular get at least equal treatment, if not top priority it seems like sometimes, and bring the UI to a screeching halt in certain scenarios. At first I thought the same: that if I simply adjusted the niceness level of, say, all X.org operations over that of other operations like compiling or copying, it would resolve the problem.
For really good I/O scheduling, Linux 2.6.36 got a cgroup blkio subsystem.
Unfortunately it does not support deep hierarchies, so I need to run my own flat mapping there.
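A rough sketch of what such a flat mapping could look like (this is a hypothetical illustration, not ulatencyd's actual code): since the early blkio controller rejected nested groups, a deep cgroup path from another subsystem can be collapsed into a single flat group name.

```python
def flatten_cgroup_path(deep_path):
    """Collapse a deep cgroup hierarchy path (e.g. from the cpu
    subsystem) into a single flat group name usable under a
    controller that rejects nesting, such as the early blkio one."""
    # Strip surrounding slashes, drop empty components, and join
    # with underscores: "/user/poelzi/browser" -> "user_poelzi_browser".
    parts = [p for p in deep_path.strip("/").split("/") if p]
    return "_".join(parts) or "root"

print(flatten_cgroup_path("/user/poelzi/browser"))  # user_poelzi_browser
print(flatten_cgroup_path("/"))                     # root
```

The trade-off is visible in the name: the hierarchy's structure survives only as a naming convention, not as actual nesting the controller can enforce.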
Originally posted by Yfrwlf:
However, Linus' point is that niceness levels are absolute, and you may want things to be more dynamic and fine-grained than that. For instance, if Xorg started being mean, you wouldn't want all your other processes to be starved. That's my understanding of it, anyway.
But I hope the OOM killer never fires at all.
Originally posted by Yfrwlf:
Also, I think this issue is more complex than just the scheduling of jobs. There seem to have been a lot of improvements in actually making things run in parallel; in other words, I think the multitasking ability of Linux just got a whole lot better with the recent coding that has gone into it. Making GUI responsiveness and user interaction in general get higher priority is just one of those improvements.
Seriously, before all this, Linux had much worse multitasking than Windows 2000 did. Very happy to see this getting fixed.
-
Originally posted by schmidtbag:
lmao, that metaphor Linus used was genius. But really, if nice is THAT useless, why is it there? Personally, I've used nice before and it works GREAT for me. For example, there was a year where I used screensavers for my background. Screensavers can be somewhat CPU intensive, so I set the nice level to 19 (or maybe it was -19? I forget at this point). Then, whenever another program demanded CPU power, the screensaver would get really choppy and unresponsive while the program had little to no slowdown at all.
Based on my experience, nice isn't broken at all; it works great. That's why this new thing is confusing to me, because if I were to use it in my example, I don't see how anything would change at all.
However, Linus' point is that niceness levels are absolute, and you may want things to be more dynamic and fine-grained than that. For instance, if Xorg started being mean, you wouldn't want all your other processes to be starved. That's my understanding of it, anyway.
Also, I think this issue is more complex than just the scheduling of jobs. There seem to have been a lot of improvements in actually making things run in parallel; in other words, I think the multitasking ability of Linux just got a whole lot better with the recent coding that has gone into it. Making GUI responsiveness and user interaction in general get higher priority is just one of those improvements.
Seriously, before all this, Linux had much worse multitasking than Windows 2000 did. Very happy to see this getting fixed.
-
Originally posted by poelzi:
BTW: I was able to write a rule in one evening that protects the computer from the swap of death, at least when a single process is eating all your memory. For the case where a group of small processes tears you down it does not work yet, but rules for that are in the pipeline :-)
-
Originally posted by Wyatt:
The benefit is that this has the potential to actually work.
Nice level probably doesn't do what you're thinking of, if you're talking about it in the context of user-visible latency. Adjusting timeslice lengths to be longer for preferred programs creates a situation where other threads get a shorter slice, and as timeslices approach zero, cache thrash and scheduling overhead approach infinity.
Also, Linus chimes in:
"There really isn't anything to fix. 'nice' is what it is. It's a simple legacy interface to scheduler priority. The fact that it's also almost totally useless is irrelevant. It's like male nipples. We wouldn't be better off lactating, and they look like some odd wart that doesn't do much good. But it would be worse to remove it."
- http://article.gmane.org/gmane.linux.kernel/1071951
"But the fundamental issue is that 'nice' is broken. It's very much broken at a conceptual and technical design angle (absolute priority levels, no fairness), but it's broken also from a psychological and practical angle (ie expecting people to manually do extra work is ridiculous and totally unrealistic)."
- http://lwn.net/Articles/418739/

Based on my experience, nice isn't broken at all; it works great. That's why this new thing is confusing to me, because if I were to use it in my example, I don't see how anything would change at all.
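For what it's worth, the behaviour described in the screensaver anecdote is easy to reproduce. A minimal Python sketch (not from either poster): `os.nice(n)` adds `n` to the calling process's niceness, and an unprivileged process may always raise its own niceness, though lowering it back requires `CAP_SYS_NICE`.

```python
import os

# os.nice(0) is a no-op that reports the current niceness.
before = os.nice(0)

# Raising niceness needs no privileges; this asks the scheduler to
# deprioritize us, just like running the screensaver under `nice -n 19`.
# Lowering it back afterwards would require CAP_SYS_NICE (root).
after = os.nice(5)

print(before, after)
```

This is exactly why positive nice "works great" in the one-process case: the niced process cleanly loses every contest for CPU time against a default-priority competitor.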
-
Originally posted by schmidtbag:
So... what exactly is different about this compared to changing the nice level of programs? I don't understand what the benefit of this is.

Nice level probably doesn't do what you're thinking of, if you're talking about it in the context of user-visible latency. Adjusting timeslice lengths to be longer for preferred programs creates a situation where other threads get a shorter slice, and as timeslices approach zero, cache thrash and scheduling overhead approach infinity.
Also, Linus chimes in:
"There really isn't anything to fix. 'nice' is what it is. It's a simple legacy interface to scheduler priority. The fact that it's also almost totally useless is irrelevant. It's like male nipples. We wouldn't be better off lactating, and they look like some odd wart that doesn't do much good. But it would be worse to remove it."
- http://article.gmane.org/gmane.linux.kernel/1071951
"But the fundamental issue is that 'nice' is broken. It's very much broken at a conceptual and technical design angle (absolute priority levels, no fairness), but it's broken also from a psychological and practical angle (ie expecting people to manually do extra work is ridiculous and totally unrealistic)."
- http://lwn.net/Articles/418739/
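Wyatt's overhead argument can be made concrete with a toy model (the numbers are illustrative, not measured): if every context switch costs a fixed amount of CPU time, the fraction of time lost to switching grows without bound relative to useful work as the timeslice shrinks.

```python
def overhead_fraction(timeslice_us, switch_cost_us=5.0):
    """Toy model: fraction of CPU time spent on context switches
    when each timeslice of useful work pays for one fixed-cost
    switch. The 5 us switch cost is an assumed round number."""
    return switch_cost_us / (timeslice_us + switch_cost_us)

# As slices shrink toward the switch cost, overhead dominates.
for ts in (100_000.0, 10_000.0, 1_000.0, 100.0, 10.0):
    print(f"{ts:>9.0f} us slice -> {overhead_fraction(ts):.1%} overhead")
```

The model ignores cache thrash entirely, which only makes the real picture worse: refilling caches after each switch adds a cost that this fixed per-switch price doesn't capture.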
-
Why ulatencyd
Hi,
The first reason nothing like this happened before is that there was simply nothing you could do. Cgroups is the first kernel interface that gives userspace enough power to control kernel behavior in a way that gives good results. In the good old days there was a renice daemon, but that can't protect you enough in rough cases.
Heuristic analysis of the system is something that should never be in the kernel. In fact, everything that can be moved to userspace without too much runtime cost should be put there.
The reason I don't want this in init (systemd, upstart, etc.) is that init is the most important program on the system. In my opinion it should be as slim as possible, and heuristics especially are something that really doesn't belong there.
But of course, I agree that a good interface between the init daemon and ulatencyd would be beneficial. I just haven't implemented a D-Bus interface yet, and I have no good ideas about how it should look.
About systemd: I'm a little unsure about their use of cgroups; the main purpose seems to be making sure they can kill a daemon completely, which seems a little awkward to me.
BTW: I was able to write a rule in one evening that protects the computer from the swap of death, at least when a single process is eating all your memory. For the case where a group of small processes tears you down it does not work yet, but rules for that are in the pipeline :-)
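The core of such a swap-of-death rule can be sketched as a simple heuristic (a hypothetical illustration only; ulatencyd's real rules are scripted against its own engine, and the function and threshold here are made up): once free memory drops below a threshold, single out the largest consumer as the candidate to isolate.

```python
def pick_memory_hog(processes, free_ratio, threshold=0.1):
    """Given (pid, rss_bytes) pairs and the fraction of memory still
    free, return the pid of the single biggest consumer once free
    memory falls below the threshold, else None.

    This mirrors the 'one process eats everything' case poelzi
    describes; a swarm of small processes defeats it, which is
    exactly the gap he says rules are still in the pipeline for.
    """
    if free_ratio >= threshold:
        return None  # plenty of memory left, do nothing
    return max(processes, key=lambda p: p[1])[0]

# Example: pid 1234 holds almost all resident memory.
procs = [(100, 50_000_000), (1234, 7_500_000_000), (200, 120_000_000)]
print(pick_memory_hog(procs, free_ratio=0.03))  # 1234
```

In a real daemon the chosen pid would then be moved into a tightly limited cgroup rather than killed, which is what makes this gentler than waiting for the OOM killer.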
-
So... what exactly is different about this compared to changing the nice level of programs? I don't understand what the benefit of this is.