I rolled my own 2.6.35 kernel with this and the bfq patches
2.6.35-amd64-iowait-bfq #1 SMP PREEMPT
And the general responsiveness of the system under I/O load is night and day compared to how it was before. Copying a 3.7 GB file from one of my NTFS drives (I know, I know, a leftover from my switch from Windows) used to bring the system to a halt; programs would freeze up until the copy had finished, and so on.
Now everything keeps going, and copying speed is just as good (I haven't timed it to compare).
(this is on a Debian squeeze/testing base btw)
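For anyone wanting to verify that bfq actually took over after booting a patched kernel: the active elevator shows up in sysfs, and a plain cat /sys/block/sda/queue/scheduler is enough to see it. Since this thread already has C in it, here is a tiny sketch doing the same (assuming your disk is sda; the name in [brackets] is the active scheduler):

/* Print the I/O schedulers available for sda. */
#include <stdio.h>

int main(void) {
    char line[256];
    FILE *f = fopen("/sys/block/sda/queue/scheduler", "r");
    if (!f) {
        perror("/sys/block/sda/queue/scheduler");
        return 1;
    }
    if (fgets(line, sizeof(line), f))
        fputs(line, stdout);   /* e.g. "noop deadline cfq [bfq]" */
    fclose(f);
    return 0;
}

Writing one of the listed names back into that file as root switches the scheduler at runtime, no reboot needed.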
The Linux Desktop Responsiveness Patches Are Feeling Good
-
Try opening a text file bigger than your RAM (2-4 GB) with gedit and that'll be enough. It has already happened to me (:
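If you want to reproduce that without hunting for a multi-gigabyte text file first, a throwaway generator will do; a quick sketch (file name and size are arbitrary, adjust target to taste):

/* Write an ~4 GB plain-text file to feed to gedit. */
#include <stdio.h>

int main(void) {
    const long long target = 4LL * 1024 * 1024 * 1024;  /* ~4 GB */
    long long written = 0;
    FILE *f = fopen("bigfile.txt", "w");
    if (!f) {
        perror("bigfile.txt");
        return 1;
    }
    while (written < target) {
        int n = fprintf(f, "line %lld: filler text to pad the file out\n",
                        written);
        if (n < 0)
            break;  /* write error, e.g. disk full */
        written += n;
    }
    fclose(f);
    return 0;
}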
-
If someone has the patches running, it would be interesting to see how they fare with this little test (adjust MEM_SIZE to match your physical RAM).
For a long time this has managed to bring almost all my Linux systems to a crawl, which is not surprising, since it basically forces excessive swapping. My worry though, and the reason I investigated it, is that occasionally some application is bound to go haywire and do something like this.
On some systems the result is so bad that I can't even regain control of the system in reasonable time. IMHO this is a plausible denial-of-service attack on any multi-user system, or on any service that can be triggered into excessive RAM consumption.
/* Stupid program aiming to eat the swap alive */
#include <stdlib.h>
#include <string.h>

size_t MEM_SIZE = 1024*1024*1024;   /* physical RAM size in bytes - adjust! */
float  MEM_USE  = 2;                /* allocate this multiple of MEM_SIZE */
size_t ALLOC_SIZE = 4*1024*1024;    /* size of each individual chunk */

int main(void) {
    char **lists;
    size_t i;
    size_t lists_len = (size_t)(MEM_SIZE * MEM_USE) / ALLOC_SIZE;

    lists = malloc(lists_len * sizeof(*lists));
    if (!lists)
        return 1;
    for (i = 0; i < lists_len; i++)
        if (!(lists[i] = malloc(ALLOC_SIZE)))
            return 1;

    /* Keep touching every chunk so the kernel can never keep the
     * working set resident - this forces constant swap traffic. */
    while (1) {
        for (i = 0; i < lists_len; i++)
            memcpy(lists[i], lists[(i+1) % lists_len], ALLOC_SIZE);
    }
    return 0;
}
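To try it: it builds with a plain gcc -O2 eatswap.c -o eatswap (the file name is just my choice). Adjust MEM_SIZE first, start it from a terminal so you can still Ctrl+C it, and if you only want to gauge responsiveness rather than actually drown the box, a ulimit -v cap in that shell keeps the allocation contained.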
-
Originally posted by kernelOfTruth View Post
indeed, but they are more tricky to backport
At least starting with the 3rd patch; I tried it, and the relevant code is spread all over the file.
Since I'm not that experienced, I'm waiting for someone else to backport them (preferably the zen-kernel devs).
-
Originally posted by kebabbert View Post
I thought this was FUD, but it was true? Linux could not handle things without lagging sometimes? And now, is the situation better or does it still occur?
For example, yesterday I put a 2.8 GB file on my local lighttpd and downloaded it with aria2. That has to be about the most I/O one would normally produce, and my desktop was perfectly responsive...
-
2.6.22 (?) with Con's SD or RSDL CPU scheduler was another masterpiece in terms of performance & responsiveness
-
Originally posted by kebabbert View Post
I thought this was FUD, but it was true? Linux could not handle things without lagging sometimes? And now, is the situation better or does it still occur?
From my observation it most probably got worse / was introduced after 2.6.34, since that kernel was working excellently for me back then, even under heavy traffic.
Luckily, with all these changes and the other improvements coming (e.g. reduced barrier writing, a unified slab allocator (v3), and more), the future looks bright.
-
I thought this was FUD, but it was true? Linux could not handle things without lagging sometimes? And now, is the situation better or does it still occur?
-
Since I'm pretty busy, I unfortunately can't do the further backporting.
Anyway: the results are already pretty impressive considering that it's only a small amount of code that's been added / removed (those 2 patches).
-
Originally posted by DeepDayze View Post
Oh there's 7 patches that all together fix the issue?
At least starting with the 3rd patch; I tried it, and the relevant code is spread all over the file.
Since I'm not that experienced, I'm waiting for someone else to backport them (preferably the zen-kernel devs).