Compilation Times, Binary Sizes For GCC 4.2 To GCC 4.8

Originally posted by duby229:
That doesn't make any sense, though. The i5 Sandy Bridge is faster clock-for-clock than my chip, so even at a lower clock it should still be fast. Really, hours? I can't believe hours... Maybe half an hour at something like 1 GHz with only one core enabled. There's just no way it takes hours to compile a kernel on a modern chip like yours.
Originally posted by duby229:
I've got a 955BE, and compile time is next to nothing. At most, even on huge packages like the kernel, GTK, or Firefox, it only takes a few minutes, and most other packages take less than a minute or even just a few seconds. I can't imagine that compile time is still much of a problem. My emerge -e world is over 700 packages, and it only takes about an hour and a half.

It's the entire reason the -O0 (no optimization) level even exists: when you are debugging, a developer's time is more important than the resulting binary's speed. I'd agree it's a virtual non-issue for release-mode code, though.
Guys, don't talk about compile time not being important unless you're a developer and what you're working on is a multi-GB project (yes, gigabytes of binary executables, libraries and such, not data or source code). I've spent an entire day just recompiling a tiny portion of a project for internal release and usage: about 40 MB (in release mode), in 8 different versions. Now imagine compiling something about 100 times larger than that, and luckily only in two versions. Obviously, a large portion of this is rarely rebuilt at all but just copied from central storage, and you only rebuild as small a portion of the software as possible. Still, there's a reason for having nightly builds and the like: they save a huge amount of time for developers.

... so yeah, compile time matters a lot.

Oh, and I have a quad-core [email protected]. Stuff can be compiled fast, but still not at all fast enough.

Last edited by AHSauge; 18 March 2013, 12:20 PM.
Originally posted by Ericg:
You are obviously NOT a developer. If I'm coding and I make one change and re-compile to test it, I'd rather not have to think to myself, "Well... I'm going to be here for a while." For release builds code speed may be more important, but for test builds? A 10-second kernel build? I would be beyond happy. It'd be like a free hardware upgrade to me, haha.

Do you seriously stare at your screen while the compiler is doing its work? Or can't you write makefiles so that you don't have to rebuild everything every time you touch a file?

Compiling is background work for me. I NEVER wait for compilation to finish; I do actual WORK while waiting. Every build system I have built or used will skip rebuilding anything that doesn't need rebuilding, so even extremely large code projects usually take only a minute or so to generate new binary images when I want to try something out. Clean compilations I do overnight, over lunch, during breaks, and so on.

GCC's compilation speed is not a problem. I'd still give an arm and a leg for a compiler whose output runs 10% faster on average, even if the compiler itself were 10 times slower. Don't believe me? Ask anyone who is building or using anything computationally intensive.
Originally posted by Ericg:
make -j4 on an i5 dual-core at 1.6 GHz can take hours, yes. It sucks -.-
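As an aside (not from this thread), a common rule of thumb is to derive the -j value from the machine's CPU thread count instead of hard-coding it; nproc is part of GNU coreutils:

```shell
# Let the parallel job count follow the number of available CPU threads.
# On a dual-core i5 with Hyper-Threading, nproc reports 4.
JOBS="$(nproc)"
echo "make -j${JOBS}"
```

More jobs than threads mostly adds scheduling overhead, while fewer leaves cores idle, so -j$(nproc) is a reasonable default for a dedicated build machine.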