GCC 9.1-RC1 Is Being Assembled, GCC 10.0 Development Opens

  • #1

    Phoronix: GCC 9.1-RC1 Is Being Assembled, GCC 10.0 Development Opens

    GCC 9 has reached zero "P1" regressions, the class of issues with the highest priority. With that list cleared, GCC 9.1 is moving toward release as the first stable version of GCC 9. GCC 9.1-RC1 will be out soon while GCC 10.0 is open on master...


  • #2
    I guess, ..., the new version will be even slower at compiling sources, e.g. during development or Linux distribution builds: https://www.youtube.com/watch?v=BmKUJRa08DE

    Comment


    • #3
      Originally posted by rene View Post
      I guess, ..., the new version will be even slower at compiling sources, e.g. during development or Linux distribution builds: https://www.youtube.com/watch?v=BmKUJRa08DE
      This is a standard complaint by almost everyone who is unfamiliar with compiler development. The short answer is that all young compilers are fast, because they don't do much.

      And while it's true that GCC is slow, Clang is also getting slower, not faster. I recommend not compiling with the highest optimisation levels or with LTO during development, and installing ccache as well. GCC 9 enables more optimisations than before at the higher levels, and it shows in the compile times. Keep those for release builds.
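
The advice above can be sketched as a build setup. This is a hypothetical Makefile fragment, not from any specific project; the exact flag split between development and release builds is an assumption:

```make
# Development builds: modest optimisation, no LTO, and route the
# compiler through ccache so unchanged files are never recompiled.
CC      := ccache gcc
CFLAGS  := -O1 -g
LDFLAGS :=

# Release builds: full optimisation plus LTO, kept out of the
# edit-compile-test loop. Select with: make BUILD=release
ifeq ($(BUILD),release)
CC      := gcc
CFLAGS  := -O3 -flto
LDFLAGS := -flto
endif

app: main.o util.o
	$(CC) $(LDFLAGS) -o $@ $^
```

With ccache in front of the compiler, a full rebuild after a `make clean` mostly hits the cache, which is exactly the case that matters when headers force wide recompiles.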

      Comment


      • #4
        Originally posted by sdack View Post
        This is a standard complaint by almost everyone who is unfamiliar with compiler development. The short answer is that all young compilers are fast, because they don't do much.

        And while it's true that GCC is slow, Clang is also getting slower, not faster. I recommend not compiling with the highest optimisation levels or with LTO during development, and installing ccache as well. GCC 9 enables more optimisations than before at the higher levels, and it shows in the compile times. Keep those for release builds.
        But wait, what? Actually, I'm familiar with compiler development. The short answer: even as a developer, I find GCC and, as recently tested, Clang have become way too slow for my taste. Just as I said in this video. And more optimisations? To make regular binaries how much faster? 1% per major version?

        Comment


        • #5
          It's not just speed of execution; it is now also power usage ...

          Comment


          • #6
            Originally posted by rene View Post
            But wait, what? Actually, I'm familiar with compiler development. The short answer: even as a developer, I find GCC and, as recently tested, Clang have become way too slow for my taste.
            Just get a faster CPU lol.

            Originally posted by rene View Post
            And more optimisations? To make regular binaries how much faster? 1% per major version?
            People who use your software don't give a shit how long it takes you to compile; they want it as fast (or small) and as power-efficient as possible.

            Neither should you.

            Comment


            • #7
              Originally posted by rene View Post

              But wait, what? Actually, I'm familiar with compiler development. The short answer: even as a developer, I find GCC and, as recently tested, Clang have become way too slow for my taste. Just as I said in this video. And more optimisations? To make regular binaries how much faster? 1% per major version?
              zapcc is a caching C++ compiler based on clang, designed to perform faster compilations - yrnkrn/zapcc

              Comment


              • #8
                Originally posted by Weasel View Post
                Just get a faster CPU lol.

                People who use your software don't give a shit how long it takes you to compile; they want it as fast (or small) and as power-efficient as possible.

                Neither should you.
                If building a whole Linux distribution takes a whole week instead of 2 days, I should not care? On the latest and greatest AMD Ryzen, no less. Also, what most people ship is not even that efficient, as most software (Windows, Mac, Linux distributions) targets the lowest common ISA, e.g. amd64 without the latest and greatest SIMD extensions. Here, I made a vlog for you: https://www.youtube.com/watch?v=-VZmXO381HQ It would be way more amazing if we JIT-compiled (e.g. from LLVM bitcode) for the actual target machine, for really impressive performance gains (because AVX and whatnot) ;-)

                Comment


                • #9
                  Originally posted by caligula View Post
                  Yeah, we have also been using ccache for over a decade, or that KDE thing icecc (icecream), but it helps little if you test-build a whole distribution for regressions, musl libc, etc., where basically everything has to be rebuilt due to subtle changes in headers and such.

                  Comment


                  • #10
                    Originally posted by rene View Post

                    If building a whole Linux distribution takes a whole week instead of 2 days, I should not care? On the latest and greatest AMD Ryzen, no less. Also, what most people ship is not even that efficient, as most software (Windows, Mac, Linux distributions) targets the lowest common ISA, e.g. amd64 without the latest and greatest SIMD extensions. Here, I made a vlog for you: https://www.youtube.com/watch?v=-VZmXO381HQ It would be way more amazing if we JIT-compiled (e.g. from LLVM bitcode) for the actual target machine, for really impressive performance gains (because AVX and whatnot) ;-)
                    You wouldn't even need to do a full JIT. You could compile most things at a base level, and then recompile them more aggressively when they are used heavily, and possibly specialize them with the arguments / types that are most commonly used.

                    This is a really old idea from the '60s; it is called supercompilation. So far only JavaScript engines really operate that way, though.

                    I believe LLVM even has the form it has specifically to make low-level recompilation like that possible.
                    Last edited by carewolf; 29 April 2019, 11:23 AM.

                    Comment
