Parallelizing GCC's Internals Continues To Be Worked On & Showing Promising Potential


  • #11
    Originally posted by Venemo View Post
    How does this affect make's -j argument? If every GCC process can now use 4 threads, does that mean that it's better to use fewer jobs with make now?
    It is a work-in-progress project, so you can't use it yet.

    My guess is that it will be configurable.

    Comment


    • #12
      Originally posted by Venemo View Post
      How does this affect make's -j argument? If every GCC process can now use 4 threads, does that mean that it's better to use fewer jobs with make now?
      I was wondering the same thing with "Would that make -l better than -j?"

      Lately I've managed to get some projects to compile a hair faster on my 16 thread system by changing "make -j16" to "make -j32 -l16" -- doubled my thread count and set the load to try to max out 16 threads so each of my threads should always have a job queued up and ready to go. Like, a 22 minute kernel compile finishes in 20 or 21 minutes.
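For anyone wanting to try the same trick, here's a hypothetical sketch; the cores-times-two ratio is just this thread's example, not a universal rule:

```shell
# Sketch of the -j/-l combination described above.
# -jN caps the number of parallel jobs; -lN stops make from starting
# new jobs while the 1-minute load average is above N.
cores=$(nproc)         # logical CPUs, e.g. 16 on the poster's machine
jobs=$((cores * 2))    # oversubscribe 2x so a job is always queued up
echo "make -j${jobs} -l${cores}"
```

On a 16-thread machine this prints `make -j32 -l16`, the exact invocation from the post.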

      Comment


      • #13
        Originally posted by skeevy420 View Post

        I was wondering the same thing with "Would that make -l better than -j?"

        Lately I've managed to get some projects to compile a hair faster on my 16 thread system by changing "make -j16" to "make -j32 -l16" -- doubled my thread count and set the load to try to max out 16 threads so each of my threads should always have a job queued up and ready to go. Like, a 22 minute kernel compile finishes in 20 or 21 minutes.
        I wonder if a kernel compile on a RAM disk, using that trick, might get even faster on Phoronix's new EPYC machine.

        Comment


        • #14
          Originally posted by tchiwam View Post

          I wonder if a kernel compile on a RAM disk, using that trick, might get even faster on Phoronix's new EPYC machine.
          Already is, but mine's an 8-year-old Westmere box with ECC DDR3-1333, so, yeah. I could probably raise the jobs number because I have the RAM to spare (more jobs queued = more RAM needed), but I doubt I'd see more than minimal gains on this older system.

          Unrelated to this, but the RAM disk idea turned a gear: what I'd like to do is put a 12 GB lz4-compressed RAM disk on each of my 8 GB sticks of RAM to see if it's possible to get 50% more RAM for free.
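That compressed-RAM-disk idea is roughly what the kernel's zram module provides. A sketch under the assumption that zram is built for your kernel and you have root; the 12G-on-8G sizing follows the post's example, and the mount point is made up:

```shell
# Needs root. Creates a 12G logical zram device backed by compressed RAM.
modprobe zram num_devices=1
echo lz4 > /sys/block/zram0/comp_algorithm   # lz4 compression, as in the post
echo 12G > /sys/block/zram0/disksize         # logical size above the physical stick
mkfs.ext4 /dev/zram0
mkdir -p /mnt/zramdisk
mount /dev/zram0 /mnt/zramdisk
```

Whether you actually net ~50% extra capacity depends entirely on how compressible the stored data is; incompressible data can make it a loss.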

          Comment


          • #15
            Originally posted by carewolf View Post

            It has been years since clang was faster than gcc. They ended up being equally fast around the time the optimizations started being comparable.
            Yeah, these days it's mostly the linker where the LLVM toolchain runs circles around the GNU stack.

            Comment


            • #16
              Originally posted by tchiwam View Post
              I wonder if the kernel compile, on a ram disk using that trick might not get even faster on the new EPYC machine of Phoronix
              On my 32 GB RAM Ryzen 7 (yes, just 16 threads), make -j deb-pkg to build the Linux kernel pushes my poor system into consuming 2-4 GB of swap in the middle of the process.

              More threads do mean more RAM usage. I question whether this parallelization will help.

              Comment


              • #17
                Michael, you write:
                "One of the most interesting Google Summer of Code projects this year was the student effort to work on better parallelizing GCC's internals to deal with better performance particularly when dealing with very large source files. Fortunately -- given today's desktop CPUs even ramping up their core counts -- this parallel GCC effort is being continued."

                The use of very large source files (anything above 64kB) is a clear indication of poor software design and lack of modularity. It should not be encouraged by speeding up the compiler. The linker needs to be fast but the front end is already capable of using all the cores one has available. Parallelizing something that can already be run in parallel is a waste of effort.

                Clive.

                Comment


                • #18
                  Originally posted by Clive McCarthy View Post
                  The use of very large source files (anything above 64kB) is a clear indication of poor software design and lack of modularity. It should not be encouraged by speeding up the compiler. The linker needs to be fast but the front end is already capable of using all the cores one has available. Parallelizing something that can already be run in parallel is a waste of effort.

                  Clive.
                  A certain string.cc is currently at 38 KiB, and it might grow larger as time passes. I wouldn't call that poor software design.

                  The natural size of the C/C++ implementation of some ideas/concepts is larger than 100 KiB.

                  A lot of Linux software is very poor at handling incremental file changes in general, because the concept of incremental changes isn't supported at the operating-system level. Editing C/C++ files is an example of incremental change.

                  The large size of individual C/C++ files isn't poor design; C/C++ #include directives, and in particular the way current C/C++ compilers process them, are poor software-and-system design.

                  Comment


                  • #19
                    Originally posted by nanonyme View Post
                    Yeah, these days it's mostly in linker where LLVM compilation stack runs circles around GNU stack.
                    ld.gold is several times faster than ld.bfd, but unfortunately ld.gold makes some packages fail to build.

                    Comment


                    • #20
                      Originally posted by jacob View Post
                      compilation speed is one area where GCC has always been lagging behind Clang.
                      That's because Clang was always producing slow programs quickly.

                      Comment
