The ~200 Line Linux Kernel Patch That Does Wonders
-
Originally posted by V!NCENT View Post
Haha, that reminds me of when I was with my dad while he bought a new computer:
PC_store_owner: "Hello. How can I help you?"
Dad: "I'm in the market for a new computer."
PC_store_owner: "Well we have bla bla bla"
Dad: "Just give me the best of the best computer you have"
PC_store_owner: "OK, the best computer we have here has x, y, z and a ten megabyte hard disk."
Dad: "Ten megabytes, is this the best of the best?"
PC_store_owner: "Sir, ten... megabytes. You'll never get to fill it up."
You might want to read this:
One of the mantras of BFS is that it has very little in the way of tunables and should require no input on the part of the user to get both ...
Trying to get low latency under ridiculous loads is not really smart:
"The mainline kernel seems intent on continually readdressing latency under load on a desktop as though that's some holy grail. Lately the make -j10 load on uniprocessor workload has been used as the benchmark. What they're finding, not surprisingly, is that the lower you aim your latencies, the smoother the desktop will continue to feel at the higher loads, and trying to find some "optimum" value where latency will still be good without sacrificing throughput too much. Why 10? Why not 100? How about 1000? Why choose some arbitrary upper figure to tune to? Why not just accept that overload is overload and that latency is going to suffer and not damage throughput to try and contain it?"
-
"This explains that you have absolutely no clue what "load" means. It's not an absolute quantity like RAM or hard disk space"
Actually it IS an absolute quantity. No matter what OS etc. you advocate, right down at the core you have a fixed number of micro ticks in which to perform a given task; you cannot perform more than this total load in a given time slice.
This is exactly where your scheduling for load comes in: you divide these limited ticks of time among different parts of the sequentially run code to give the appearance of multitasking and responsiveness, nothing more. So it is an absolute quantity.
-
Now, if you want to make the case that many coders today are lazy and don't try to optimise every part of the system and core libs (*glibc etc.) to get as close to this absolute quantity as possible, then fine. Clearly many don't, and so you get things like this patch...
*http://www.freevec.org/content/commentsconclusions
"....
Finally, with regard to glibc performance, even if we take into account that some common routines are optimised (like strlen(), memcpy(), memcmp() plus some more), most string functions are NOT optimised. Not only that, glibc only includes reference implementations that perform the operations one-byte-at-a-time! How's that for inefficient? We're not talking about dummy unused joke functions here like memfrob(), but really important string and memory functions that are used pretty much everywhere, like strcmp(), strncmp(), strncpy(), etc.
In times where power consumption has become so important, I would think that the first thing to do to save power is optimise the software, and what better place to start than the core parts of an operating system? I can't speak for the kernel -though I'm sure it's actually very optimised- but having looked at the glibc code extensively over the past years, I can say that it's grossly unoptimised, so much it hurts."
-
Originally posted by jukk View Post
But hey, Swedes are supposed to know this...
As a Finland-Swede, one is of course also a bit proud of Linus
Yes, even though I can't call Linus Torvalds Swedish, I can still go with 'Nordic pride'
-
Originally posted by XorEaxEax View Post
How many Swedes are we here on Phoronix, actually?
[And I'm bad at speaking Swedish but slightly less bad with Norwegian, though you all seem to know all the Scandinavian languages due to the similarities]
Michael Larabel
https://www.michaellarabel.com/
-
Originally posted by RealNC View Post
Why not just accept that overload is overload and that latency is going to suffer, and not damage throughput to try and contain it?
-
Originally posted by popper View Post
"This explains that you have absolutely no clue what "load" means. It's not an absolute quantity like RAM or hard disk space"
Actually it IS an absolute quantity. No matter what OS etc. you advocate, right down at the core you have a fixed number of micro ticks in which to perform a given task; you cannot perform more than this total load in a given time slice.
This is exactly where your scheduling for load comes in: you divide these limited ticks of time among different parts of the sequentially run code to give the appearance of multitasking and responsiveness, nothing more. So it is an absolute quantity.
Testing -j64 on a six-core doesn't mean you get the same results as -j64 on a Pentium 4. But you guys seem to think that this is indeed the case.
-
Originally posted by RealNC View Post
No, it is not. The instructions executed at 100% load on one CPU do not equal 100% load on another. But 100MB of HD space on one system is still 100MB on another.
Testing -j64 on a six-core doesn't mean you get the same results as -j64 on a Pentium 4. But you guys seem to think that this is indeed the case.
Of course 100MB of used HD space on one system is NOT 100MB on another: add a second disk and you would have 200MB, twice as many places used within their fixed absolute quantity, because you now have two instead of one, storing twice as much.
I say again: you can't get more than an absolute quantity out of a given CPU. Add another one and you get twice the work time for twice the micro ticks; you can add more, or design new ones that do more ticks per time slice, but it's still an absolute quantity, just a bigger number.