Parallelizing GCC's Internals Continues To Be Worked On & Showing Promising Potential


  • pal666
    replied
    Originally posted by jacob:
    1.6x speedup is something that users will certainly notice. This is great news
    The article mistakenly states they got a 1.6x speedup. They got a 1.09x speedup; 1.6x was an extrapolation for when/if they parallelize a larger part of the compiler with the same success (see the Amdahl's-law sketch below).
    And even that 1.09x part isn't passing the testsuite yet.
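    For intuition, the gap between the measured and the extrapolated figure is just Amdahl's law: the overall speedup depends on how much of the total compile time the parallelized part covers. The fractions below are purely illustrative assumptions, not numbers from the GSoC report.

        #include <cstdio>

        // Amdahl's law: overall speedup when a fraction p of the work
        // runs on n threads and the rest stays serial.
        static double amdahl(double p, int n)
        {
            return 1.0 / ((1.0 - p) + p / n);
        }

        int main()
        {
            // Illustrative only: with 4 threads, parallelizing ~11% of total
            // compile time gives ~1.09x overall, while ~50% would give ~1.6x.
            std::printf("p=0.11, n=4 -> %.2fx\n", amdahl(0.11, 4));
            std::printf("p=0.50, n=4 -> %.2fx\n", amdahl(0.50, 4));
        }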



  • pal666
    replied
    Originally posted by jacob:
    Sometimes that's what you need.
    Then don't enable optimizations.



  • jacob
    replied
    Originally posted by pal666:
    That's because Clang was always producing slow programs quickly.
    Sometimes that's what you need.



  • pal666
    replied
    Originally posted by atomsymbol:
    C/C++ #include directives, in particular the current way of processing #include directives by C/C++ compilers, is poor software & system design.
    That's irrelevant to the subject here: includes are processed by the frontend, while the subject is optimizations in the middle end. Even with C++20 modules the compiler will skip parsing sources, but it will still have to do the optimizations. And (surprise) the optimizations take more time than the parsing (see the -ftime-report sketch below).
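    One quick way to check that split on any translation unit is GCC's -ftime-report option, which prints per-phase timings; on an optimized build the middle-end passes usually dwarf the parser. A minimal sketch (the file and its contents are just an illustration; exact pass names and timings vary by GCC version):

        // build: g++ -O2 -c -ftime-report timing_demo.cpp
        // -ftime-report prints how long each compiler phase took, so you can
        // compare front-end parsing time against the optimization passes.
        #include <vector>
        #include <algorithm>
        #include <numeric>

        // A template-heavy function gives the optimizers something to chew on
        // once parsing is done.
        int sum_of_squares(std::vector<int> v)
        {
            std::transform(v.begin(), v.end(), v.begin(),
                           [](int x) { return x * x; });
            return std::accumulate(v.begin(), v.end(), 0);
        }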



  • pal666
    replied
    Originally posted by Clive McCarthy:
    The use of very large source files (anything above 64kB) is a clear indication of poor software design
    Or an attempt at doing more interprocedural optimization.
    And by the way, the compiler compiles not just the source file but the source plus all its includes, which is often measured in megabytes.
    From the slides: "gimple-match.c: 100358 lines of C++ (GCC 10.0.0)",
    so they are basically trying to make GCC compile itself faster.

    (You won't find that file in the repo, it's autogenerated - is that poor or rich software design?)



  • pal666
    replied
    Originally posted by ms178:
    Welcome in the multi-core world, GCC
    GCC was already in the multi-core world via parallel invocations. Multithreading the compiler itself isn't easy and isn't as efficient (as you can see, they never got a 4x speedup on 4 threads). It's nice that it's being done, but you are acting as if it were the last compiler to do it.



  • pal666
    replied
    Originally posted by Venemo:
    How does this affect make's -j argument? If every GCC process can now use 4 threads, does that mean that it's better to use fewer jobs with make now?
    I hope it will plug into the make jobserver the way -flto=jobserver does (a sketch of how that works today is below).
    They have a TODO line: "Communicate with Make for automatic threading".
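    For reference, this is roughly how the existing -flto=jobserver integration is wired into a GNU Makefile today; whether the new threaded middle end will reuse the same jobserver protocol is exactly what the "Communicate with Make" TODO covers, so treat this as a sketch of the current LTO case only (file and target names are illustrative):

        # Build objects with LTO bytecode, then let the link-time optimizer
        # pull parallel job slots from make's jobserver instead of a fixed count.
        # (Recipe lines must start with a tab in a real Makefile.)
        CXX      := g++
        CXXFLAGS := -O2 -flto

        app: a.o b.o
        	+$(CXX) -flto=jobserver -o $@ $^   # '+' exposes the jobserver to the recipe

        %.o: %.cpp
        	$(CXX) $(CXXFLAGS) -c $< -o $@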



  • pal666
    replied
    Originally posted by discordian:
    Outside of linking, if you compile 4 files on 4 cores you get a near 4 times speedup.
    But you need 4 files for that. Most often you compile after editing one non-header file, so the compiler has only one file to recompile.



  • pal666
    replied
    Originally posted by jacob:
    compilation speed is one area where GCC has always been lagging behind Clang.
    That's because Clang was always producing slow programs quickly.



  • Clive McCarthy
    replied
    Michael, you write:
    "One of the most interesting Google Summer of Code projects this year was the student effort to work on better parallelizing GCC's internals to deal with better performance particularly when dealing with very large source files. Fortunately -- given today's desktop CPUs even ramping up their core counts -- this parallel GCC effort is being continued."

    The use of very large source files (anything above 64kB) is a clear indication of poor software design and lack of modularity. It should not be encouraged by speeding up the compiler. The linker needs to be fast but the front end is already capable of using all the cores one has available. Parallelizing something that can already be run in parallel is a waste of effort.

    Clive.

