Now let's say you want to use an app that needs most of your memory, like GTA IV if it were ported: you will run out, and it's ridiculous and wrong for a DE to waste that much memory. And no, my memory is not cheap; I'm still using old Winbond BH-5 modules that run at 2-2-2-0 @ 3.45 V, and you can't upgrade them or add more sticks, since the chipset won't handle it and becomes unstable. In my book, the maximum for a DE should be 100 MB on i386 and 150 MB on x86_64. Maybe that's the reason why I still use XP. Win32 FTW!
I would see the compromise in using that much memory if those shitty DEs gave the responsiveness Windows gives. Not even Nvidia drivers help, or a tweaked SSD with unregressed file systems. What sucks even more is that KDE is dead slow. Starting a text editor or a calculator takes a 1.5 s delay? To do what? WTF?
But let me ask you a question: Have you ever looked into the source code of some Linux executable or library? I have, and from what I have seen I can tell you that the amount of inefficiency in common Linux programs is breathtaking. Seriously, if you think about the algorithms used there and compare them to "ideal" implementations, the result is shocking. The number of totally pointlessly burned CPU cycles is massive.
Got any links, or any example programs to look at?
Quite often what may seem to be inefficient is actually efficient at runtime, or is done a certain way for a reason, or contains an awful lot of error handling.
Certainly this isn't always the case, and a good many tools have been rewritten, but just bear it in mind when looking through things.
Funny, Phoronix released this bench just after I posted this query for a bench..
OK, KDE uses more memory than other DEs; I don't know what to make of that. Even if these tests were run on *buntu systems, and Kubuntu is not known to be the best KDE distribution, I think an explanation is owed for the gaps shown in battery consumption and temperature, which are very important. Memory usage... well, that also depends on how the DE itself works and how it was designed (shared memory, etc.).
But at least it may lead to my initial query.
I haven't found a way to contact Phoronix to submit this new subject for a bench article.
On my system with KDE installed, it is using about 190MB of RAM when I load to the desktop. On a virtual machine, which also has KDE installed, I measured it using 150MB of RAM at the desktop. This was with Konsole running. I run a lightweight KDE desktop on my Gentoo Linux machines with things like background indexing off. I suspect that the Ubuntu test machine did not have this sort of option. It would be nice to see these same tests done with such features off.
Got any links, or any example programs to look at?
Speaking of Qt 4.6.2, you can try this test case: create a trivial GUI application that does nothing but paint a rectangle the size of the whole window. Print to the console the time it takes to paint the rectangle. Maximize the window to full screen, then use Alt+Tab to force a repaint.
Now, execute it like this to use the raster engine:
I have a notebook with an Intel CPU that supports MMX/SSE2, so Qt will use an MMX/SSE2-based routine in case 1.
Now take a guess: Which is faster? 1 or 2?
Well, on my notebook, disabling MMX/SSE makes the rectangle draw 30% faster. You might think that because Qt supports those fancy MMX/SSE2 instructions, you are safe, since Qt will automatically pick the implementation most appropriate for your CPU. Well, guess what: you are not, because the supposedly better version is actually slower!
(That was on my notebook's CPU; other CPUs may behave differently.)
Originally Posted by mirv
Quite often what may seem to be inefficient is actually efficient at runtime, or is done a certain way for a reason, or contains an awful lot of error handling. Certainly this isn't always the case, and a good many tools have been rewritten, but just bear it in mind when looking through things.
I was talking about actual usage. It wasn't "theoretical".
Patches welcome, no? If you have the time to test how an ideal ls performs, you've already done the work.
1. It is not as simple as it sounds. The time required to actually implement a patch may be much longer than the time needed to pinpoint the bottleneck.
2. Last week, I submitted a patch for BASH 4.1. Maybe the maintainer will include it in BASH, maybe not; I don't know. When BASH is parsing a lot of scripts, the patch makes it about 25% faster. I think further performance improvements to BASH are certainly possible, but I find it unlikely that I will submit more patches: I don't like BASH in general, I don't like its implementation, and I don't like programming in C. (Heh, that was a nice sentence.)
I don't know what you think about "the state of Linux applications", but I am somewhat disappointed. But on the other hand, I am not disputing that many of them are working "just fine".
Well, a great many people come here to complain about vanilla sources, etc. They just can't understand that these tests are not for them.
Point is: it doesn't matter if you can tweak an app to use half the RAM of the default one, or to run 3x faster, if you have to compile it from source and spend long hours reading about the optimizations. I dare say that >90% of Linux users won't care about that; they'll just pick whichever one is better with its defaults.
Originally Posted by energyman
And if that test program was gtk based, the results are skewed and crap anyway.
If I understood correctly, the tests were started from a terminal, so I assume no GUI was started. Consequently, I think no additional GTK libraries were loaded, so your statement is not true.
I might be wrong here, I'm not familiar with pts...
Great comparison, Michael. I think Lubuntu is going to be a great lightweight distro. It's also good to know that the GNOME devs actually understand that memory is a shared resource, unlike the fglrx and KDE devs, who either think they can take it all or that memory is an infinite resource.