Parallelizing GCC's Internals Continues To Be Worked On & Showing Promising Potential


  • #11
    Originally posted by Venemo View Post
    How does this affect make's -j argument? If every GCC process can now use 4 threads, does that mean that it's better to use fewer jobs with make now?
    It's a work-in-progress project, so you can't use it yet.

    My guess is that it will be configurable.

    Comment


    • #12
      Originally posted by Venemo View Post
      How does this affect make's -j argument? If every GCC process can now use 4 threads, does that mean that it's better to use fewer jobs with make now?
      I was wondering the same thing: would that make -l better than -j?

      Lately I've managed to get some projects to compile a hair faster on my 16-thread system by changing "make -j16" to "make -j32 -l16" -- doubling my job count and setting a load limit that tries to max out 16 threads, so each of my hardware threads should always have a job queued up and ready to go. For example, a 22-minute kernel compile finishes in 20 or 21 minutes.
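
      The trick described above can be sketched as follows (a minimal example; the exact thread counts are this poster's, and the `nproc`-based variant is an assumption about how to generalize it):

      ```shell
      # Oversubscribe the job count (-j32) but cap scheduling by load
      # average (-l16), so make keeps a job queued for each of the 16
      # hardware threads without piling up unbounded work.
      make -j32 -l16

      # Portable variant: derive both numbers from the machine itself.
      THREADS=$(nproc)
      make -j"$((THREADS * 2))" -l"$THREADS"
      ```

      The `-l` flag tells make not to start new jobs while the system load average exceeds the given value, which is what keeps the oversubscribed `-j` from overwhelming the machine.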

      Comment


      • #13
        Originally posted by skeevy420 View Post

        I was wondering the same thing: would that make -l better than -j?

        Lately I've managed to get some projects to compile a hair faster on my 16-thread system by changing "make -j16" to "make -j32 -l16" -- doubling my job count and setting a load limit that tries to max out 16 threads, so each of my hardware threads should always have a job queued up and ready to go. For example, a 22-minute kernel compile finishes in 20 or 21 minutes.
        I wonder if a kernel compile on a RAM disk, using that trick, might get even faster on Phoronix's new EPYC machine.

        Comment


        • #14
          Originally posted by tchiwam View Post

          I wonder if a kernel compile on a RAM disk, using that trick, might get even faster on Phoronix's new EPYC machine.
          Already is, but mine's an 8-year-old Westmere box with ECC DDR3-1333, so, yeah. I could probably raise the jobs number because I have the RAM to spare (more queued jobs = more RAM needed), but I doubt I'd see anything more than minimal gains on this older system.

          Unrelated to this, but the RAM disk idea got the gears turning: what I'd like to do is put a 12GB lz4-compressed RAM disk on each of my 8GB sticks of RAM, to see if it's possible to get 50% more RAM for free.
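
          Something along these lines is possible on Linux with zram, the kernel's compressed RAM block device. A rough sketch, assuming the zram module is available; the 12G figure mirrors the post and is purely illustrative:

          ```shell
          # Load the zram module and create one device (zram0).
          modprobe zram num_devices=1

          # Pick lz4 compression, then advertise a 12G device that is
          # backed by (hopefully much less) physical RAM.
          echo lz4 > /sys/block/zram0/comp_algorithm
          echo 12G > /sys/block/zram0/disksize

          # Use it as high-priority swap so compressible pages
          # effectively stretch the installed RAM...
          mkswap /dev/zram0
          swapon -p 100 /dev/zram0

          # ...or format and mount it as a compressed RAM disk instead:
          # mkfs.ext4 /dev/zram0 && mount /dev/zram0 /mnt/zram
          ```

          How much extra "free" RAM this yields depends entirely on how compressible the data is; incompressible pages gain nothing.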

          Comment


          • #15
            Originally posted by carewolf View Post

            It has been years since clang was faster than gcc. They ended up being equally fast around the time the optimizations started being comparable.
            Yeah, these days it's mostly in linking where the LLVM toolchain runs circles around the GNU stack.

            Comment


            • #16
              Originally posted by tchiwam View Post
              I wonder if a kernel compile on a RAM disk, using that trick, might get even faster on Phoronix's new EPYC machine.
              On my 32GB RAM Ryzen 7 (yes, just 16 threads), make -j deb-pkg to build the Linux kernel pushes my poor system into consuming 2-4GB of swap in the middle of the process.

              More threads do mean more RAM usage. I do question whether this parallelizing will help.

              Comment


              • #17
                Michael, You write:
                "One of the most interesting Google Summer of Code projects this year was the student effort to work on better parallelizing GCC's internals to deal with better performance particularly when dealing with very large source files. Fortunately -- given today's desktop CPUs even ramping up their core counts -- this parallel GCC effort is being continued."

                The use of very large source files (anything above 64kB) is a clear indication of poor software design and a lack of modularity, and it should not be encouraged by speeding up the compiler. The linker needs to be fast, but the front end is already capable of using all the cores one has available; parallelizing something that can already be run in parallel is a waste of effort.

                Clive.

                Comment


                • #18
                  Originally posted by jacob View Post
                  compilation speed is one area where GCC has always been lagging behind Clang.
                  that's because clang was always producing slow programs quickly

                  Comment


                  • #19
                    Originally posted by discordian View Post
                    Outside of linking, if you compile 4 files on 4 cores you get a near 4 times speedup.
                    but you need 4 files for that. most often you compile after editing one non-header file, so the compiler has only one file to recompile.

                    Comment


                    • #20
                      Originally posted by Venemo View Post
                      How does this affect make's -j argument? If every GCC process can now use 4 threads, does that mean that it's better to use fewer jobs with make now?
                      i hope it will plug into the jobserver like -flto=jobserver
                      they have "Communicate with Make for automatic threading" as a line in their todo
                      Last edited by pal666; 26 September 2019, 07:38 PM.
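
                      For context, GCC's existing jobserver integration works like the sketch below (a minimal, hypothetical Makefile; the target and file names are made up). The `+` prefix on the link recipe tells make to pass its jobserver file descriptors to the command, so the LTO link phase can borrow job slots from `make -jN` instead of spawning its own uncoordinated threads:

                      ```make
                      CC      := gcc
                      CFLAGS  := -O2 -flto
                      LDFLAGS := -flto=jobserver

                      # '+' marks the recipe as jobserver-aware: GCC's LTO
                      # link-time parallelism then shares make's -j budget.
                      prog: a.o b.o
                      	+$(CC) $(LDFLAGS) -o $@ $^

                      %.o: %.c
                      	$(CC) $(CFLAGS) -c -o $@ $<
                      ```

                      Presumably the parallel-GCC work would hook into the same mechanism, which is what the TODO item suggests.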

                      Comment
