Parallelizing GCC's Internals Continues To Be Worked On & Showing Promising Potential


  • #21
    Originally posted by ms178 View Post
    Welcome to the multi-core world, GCC
    GCC has been in the multi-core world for a long time via parallel invocations, one compiler process per translation unit. Multithreading the compiler itself isn't easy and isn't as efficient (note that they never got a 4x speedup on 4 threads). It's nice that it's being done, but you're acting like GCC is the last compiler to do it.
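    As a concrete illustration of those parallel invocations, here is a minimal sketch in Python (file names are hypothetical placeholders, and it assumes gcc is on PATH); it's the same per-translation-unit parallelism that make -jN already provides:

```python
# One gcc process per translation unit: the "parallel invocations" GCC builds
# have relied on for years. File names here are hypothetical placeholders.
import subprocess
from concurrent.futures import ThreadPoolExecutor

sources = ["a.c", "b.c", "c.c", "d.c"]  # hypothetical translation units

def compile_one(src):
    # Each invocation is an independent process, so the work spreads across
    # cores as long as there are enough translation units to keep them busy.
    return subprocess.run(["gcc", "-O2", "-c", src, "-o", src + ".o"]).returncode

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(compile_one, sources))

print("all compiled" if all(code == 0 for code in results) else "some failed")
```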

    Comment


    • #22
      Originally posted by Clive McCarthy View Post
      The use of very large source files (anything above 64kB) is a clear indication of poor software design
      or an attempt at doing more interprocedural optimization.
      And by the way, the compiler compiles not just the source file but the source plus all of its includes, which is often measured in megabytes.
      From the slides: "gimple-match.c: 100358 lines of C++ (GCC 10.0.0)"
      So they are basically trying to make GCC compile itself faster.

      (You won't find that file in the repo, by the way; it's autogenerated. Is that poor or rich software design?)
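      If you want to see how big a translation unit really gets once the includes are pulled in, a minimal sketch (hypothetical file name, assumes gcc is on PATH) is to run just the preprocessor and measure the output:

```python
# Run only the preprocessor (gcc -E) and measure what the compiler proper
# actually has to chew through. "some_file.c" is a hypothetical placeholder.
import subprocess

pre = subprocess.run(
    ["gcc", "-E", "some_file.c"],
    capture_output=True, text=True, check=True,
)
lines = pre.stdout.count("\n")
print(f"preprocessed size: {len(pre.stdout)} bytes, {lines} lines")
```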
      Last edited by pal666; 26 September 2019, 07:23 PM.

      Comment


      • #23
        Originally posted by atomsymbol
        C/C++ #include directives, in particular the current way of processing #include directives by C/C++ compilers, is poor software&system design.
        That's irrelevant to the subject here. Includes are processed by the front end; this work is about parallelizing optimizations in the middle end. Even with C++20 modules the compiler will skip re-parsing the sources, but it will still have to run the optimizations, and (surprise) the optimizations take more time than the parsing.
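        One way to check that claim on your own code is GCC's -ftime-report, which prints a per-pass timing breakdown (parsing versus the various optimization passes). A minimal sketch, assuming g++ is on PATH and using a hypothetical file name; the report goes to stderr:

```python
# Ask GCC for its per-pass timing breakdown. "some_file.cc" is a hypothetical
# placeholder; -ftime-report writes the table to stderr at the end of compilation.
import subprocess

run = subprocess.run(
    ["g++", "-O2", "-ftime-report", "-c", "some_file.cc", "-o", "/dev/null"],
    capture_output=True, text=True,
)
print(run.stderr)  # compare the parser rows against the GIMPLE/RTL optimizer rows
```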
        Last edited by pal666; 26 September 2019, 07:23 PM.

        Comment


        • #24
          Originally posted by pal666 View Post
          that's because clang was always producing slow programs quickly
          Sometimes that's what you need.

          Comment


          • #25
            Originally posted by jacob View Post
            Sometimes that's what you need.
            Then don't enable optimizations.
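            A quick way to see how much of the build time optimization actually costs is to time the same translation unit at -O0 and -O2; a minimal sketch with a hypothetical file name, assuming g++ is on PATH:

```python
# Time one (hypothetical) translation unit at -O0 and -O2; most of the
# difference is time spent in the optimizers rather than in parsing.
import subprocess, time

def time_compile(opt):
    start = time.perf_counter()
    subprocess.run(["g++", opt, "-c", "big_file.cc", "-o", "/dev/null"], check=True)
    return time.perf_counter() - start

for opt in ("-O0", "-O2"):
    print(opt, f"{time_compile(opt):.2f}s")
```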
            Last edited by pal666; 26 September 2019, 07:23 PM.

            Comment


            • #26
              Originally posted by jacob View Post
              1.6x speedup is something that users will certainly notice. This is great news
              The article mistakenly states they got a 1.6x speedup. They measured a 1.09x speedup; 1.6x was an extrapolation for when/if they parallelize a larger part of the compiler with the same success.
              And even the part behind that 1.09x isn't passing the testsuite yet.
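              A back-of-the-envelope Amdahl's law calculation shows why the two numbers differ so much. The fractions below are purely illustrative (they are not taken from the GSoC report); they just show how a modest parallelized fraction yields roughly 1.09x while covering more of the compiler could yield roughly 1.6x:

```python
# Amdahl's law: overall = 1 / ((1 - p) + p / s), where p is the fraction of the
# total compile time that was parallelized and s is the speedup of that fraction.
# The numbers below are illustrative only, not taken from the GSoC report.
def overall_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

print(round(overall_speedup(0.11, 4.0), 2))  # ~1.09: only ~11% of the work parallelized
print(round(overall_speedup(0.50, 4.0), 2))  # ~1.60: same technique covering ~50% of it
```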
              Last edited by pal666; 26 September 2019, 07:40 PM.

              Comment


              • #27
                Originally posted by atomsymbol

                ld.gold is several times faster than ld.bfd, but unfortunately ld.gold is making some packages fail to build.
                And lld is faster still.
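                If you want to compare them on your own project, here is a minimal sketch (hypothetical object files, and it assumes the toolchain accepts -fuse-ld= for each linker you try):

```python
# Time the same link step with different linkers via -fuse-ld=. Object file
# names are hypothetical placeholders; not every toolchain supports every linker.
import subprocess, time

objects = ["main.o", "gui.o", "net.o"]  # hypothetical object files

for linker in ("bfd", "gold", "lld"):
    start = time.perf_counter()
    run = subprocess.run(["g++", f"-fuse-ld={linker}", *objects, "-o", "app"])
    elapsed = time.perf_counter() - start
    print(f"ld.{linker}: {elapsed:.2f}s ({'ok' if run.returncode == 0 else 'failed'})")
```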

                Comment


                • #28
                  Originally posted by Clive McCarthy View Post
                  Michael, You write:
                  "One of the most interesting Google Summer of Code projects this year was the student effort to work on better parallelizing GCC's internals to deal with better performance particularly when dealing with very large source files. Fortunately -- given today's desktop CPUs even ramping up their core counts -- this parallel GCC effort is being continued."

                  The use of very large source files (anything above 64kB) is a clear indication of poor software design and lack of modularity. It should not be encouraged by speeding up the compiler. The linker needs to be fast but the front end is already capable of using all the cores one has available. Parallelizing something that can already be run in parallel is a waste of effort.

                  Clive.
                  There are many files several MiB in size in, say, the LLVM codebase, which contain all the tablegen-generated boilerplate machinery from the CPU definitions. Big source files are only a problem if a developer is expected to read and maintain them.

                  Comment


                  • #29
                    I agree. I should have excluded auto-generated source code from my comment.

                    Comment


                    • #30
                      I very much like the idea of a faster linker. Building with GTK is fine, but linking takes ages (I joke; I was once familiar with clean builds that ran overnight with the crew I managed).

                      Comment
