Compilation Times, Binary Sizes For GCC 4.2 To GCC 4.8

  • #11
    Originally posted by duby229
    I've got a 955BE and compile time is like nothing. At most even on huge packages like the kernel or gtk or firefox it only takes a few minutes. And most other packages take less than a minute or even just a few seconds. I can't imagine that compile time is still much of a problem.

    My emerge -e world is over 700 packages and it only takes about an hour and a half.
    Yeah, Duby, I've got an undervolted i5 Sandy Bridge in my ultrabook, and until the 3.9 kernel hits [stable] in Arch, I have to compile all my own kernel builds, and they take hours on this thing. So lucky you for having a great CPU, but not everyone's so lucky.
    All opinions are my own not those of my employer if you know who they are.

    Comment


    • #12
      That doesn't make any sense, though. The i5 Sandy Bridge is faster clock for clock than my chip, so even at a lower clock it should still be awesome. Really, hours? I can't believe hours... Maybe half an hour at 1GHz with only one core enabled. There's just no way it takes hours to compile a kernel on a modern chip like yours.

      Comment


      • #13
        Originally posted by duby229
        I've got a 955BE and compile time is like nothing. At most even on huge packages like the kernel or gtk or firefox it only takes a few minutes. And most other packages take less than a minute or even just a few seconds. I can't imagine that compile time is still much of a problem.

        My emerge -e world is over 700 packages and it only takes about an hour and a half.
        You're missing the point. If you spend 5 minutes recompiling a project every time you make a single change, you can easily spend over half your day recompiling, at least when you are debugging issues and constantly making small changes and then testing the results. Working on larger changes at a time reduces the problem.

        That's the entire reason the -O0 (no optimization) level even exists: while debugging, a developer's time matters more than the speed of the resulting binary. I'd agree it's a virtual non-issue for release-mode code, though.
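        The -O0 versus -O2 trade-off described above can be seen directly. This is a minimal sketch, with a made-up file name and a file far too small to show a dramatic gap; the compile-time difference grows with project size:

        ```shell
        # Create a trivial C file (name is illustrative).
        printf '#include <stdio.h>\nint main(void){puts("hello");return 0;}\n' > demo.c

        # -O0: fastest compile, unoptimized binary -- what you want while debugging.
        time gcc -O0 -o demo_debug demo.c

        # -O2: slower compile, optimized binary -- what you want for release.
        time gcc -O2 -o demo_release demo.c
        ```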

        Comment


        • #14
          Originally posted by GreatEmerald
          And Gentoo users.
          Let's not forget Gentoo developers.

          Comment


          • #15
            Originally posted by duby229
            That doesn't make any sense, though. The i5 Sandy Bridge is faster clock for clock than my chip, so even at a lower clock it should still be awesome. Really, hours? I can't believe hours... Maybe half an hour at 1GHz with only one core enabled. There's just no way it takes hours to compile a kernel on a modern chip like yours.
            make -j4 on an i5 dual-core at 1.6GHz can take hours, yes. It sucks -.-

            Comment


            • #16
              Guys, don't talk about compile time not being important unless you're a developer and what you're working on is a multi-GB project (that's binary executables, libraries and such, not data or source code). I've spent a whole day just recompiling a tiny portion of a project for internal release and usage: about 40 MB (in release) in 8 different versions. Now imagine compiling something about 100 times larger than that, luckily only in two versions. Obviously, a large portion of this is rarely rebuilt at all but just copied from central storage, and you rebuild as small a portion of the software as possible, but still, there's a reason for having nightly builds and the like: they save developers a huge amount of time.

              ... so yeah, compile time matters a lot.

              Oh, and I have a quad-core [email protected]. Stuff compiles fast, but still not nearly fast enough.
              Last edited by AHSauge; 18 March 2013, 12:20 PM.

              Comment


              • #17
                Originally posted by Ericg
                You are obviously NOT a developer. If I'm coding and I make one change and recompile to test it, I'd rather not have to think to myself, "Well... I'm going to be here for a while." For release builds code speed may be more important, but for test builds? A 10-second kernel build? I would be beyond happy. It'd be like a free hardware upgrade to me, haha
                It's quite obvious that we work differently.
                Do you seriously stare at your screen while the compiler is doing work?
                Or can't you write makefiles properly, so that you have to rebuild everything every time you touch a file?
                Compiling is background work for me. I NEVER wait for compilation to finish. I do actual WORK while waiting.
                Every build system I have built or used will not rebuild stuff that does not need rebuilding.
                Even extremely large code projects will usually take a minute or so to generate new binary images if I want to try something out.
                Clean compilations I usually do overnight, lunch, breaks etc.
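                The dependency-driven rebuilds described above can be sketched with a minimal two-file project (all file names here are made up for illustration):

                ```shell
                # A tiny Makefile: prog depends on two object files,
                # and each .o is rebuilt only when its .c source changes.
                printf 'prog: a.o b.o\n\tgcc -o prog a.o b.o\n%%.o: %%.c\n\tgcc -c $< -o $@\n' > Makefile
                printf 'int f(void){return 0;}\n' > a.c
                printf 'int f(void); int main(void){return f();}\n' > b.c

                make        # first run: compiles a.o and b.o, then links prog
                touch a.c
                make        # second run: recompiles only a.o and relinks; b.o is untouched
                ```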

                GCC's compilation speed is not a problem.
                I'd still give an arm and a leg for a compiler whose output runs 10% faster on average, even if it compiled 10 times slower.
                Don't believe me? Ask anyone who is building or using anything computationally intensive.

                Comment


                • #18
                  Originally posted by Ericg
                  make -j4 on a i5 dualcore at 1.6ghz can take hours, yes. It sucks -.-
                  You're doing it wrong; -j4 is overkill for a dual-core (even with Hyper-Threading). -j3 would probably yield better results. If you also have 1GB of RAM or less, -j2 might be even faster.
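                  As a sketch of the -j tuning being debated here: a common rule of thumb is one job per hardware thread, which nproc reports on Linux. The trivial Makefile below is a made-up stand-in for a real project:

                  ```shell
                  # Trivial Makefile standing in for a real project.
                  printf 'all:\n\t@echo built\n' > Makefile

                  JOBS=$(nproc)      # hardware threads, e.g. 4 on a dual-core with HT
                  make -j"$JOBS"     # one job per thread; drop to -j2 on memory-starved machines
                  ```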

                  Comment
