GCC 8 vs. LLVM Clang 6 Performance At End Of Year 2017


  • #21
    Originally posted by eltomito View Post
    I wanna go to a stadium, paint GCC on my face, get wasted as I watch GCC beat the shit out of CLANG and then beat up random people on my way home. Yep, that's the kind of avid fan I am!
    Compiler hooligans unite!

    Though if it were a sport, I would always root for the underdog... Yuck, I would have to be a fan of the Mars or Intel compilers.



    • #22
      Originally posted by anarki2 View Post
      Yeah, you have MinGW. Now *that* is slow compared to truly native code. OTOH Clang is pretty decent on all platforms.
      This is incorrect; I think you are confusing Cygwin or the MSYS2 POSIX runtime with MinGW (or rather, mingw-w64). mingw-w64 code is entirely native: there is no translation layer in play. MSYS2 ships mingw-w64 GCC 7.2, and the quality and speed of its generated code are just as good as GCC's on any other platform.
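
      (To make "native" concrete, here is a minimal sketch; the file name and build command are illustrative, not from the thread. A plain Win32 program built with mingw-w64's GCC calls the Windows API directly, with no compatibility DLL in between, unlike a Cygwin build.)

      ```c
      /* hello_win32.c -- built with something like:
       *   x86_64-w64-mingw32-gcc hello_win32.c -o hello_win32.exe
       * The result is an ordinary native PE executable: its Win32 imports
       * are resolved by the normal Windows loader, with no POSIX-emulation
       * layer in between (contrast Cygwin, which routes POSIX calls
       * through a compatibility DLL). */
      #include <windows.h>

      int main(void)
      {
          /* MessageBoxA is a plain Win32 import from user32.dll. */
          MessageBoxA(NULL, "Native code from GCC via mingw-w64",
                      "mingw-w64", MB_OK);
          return 0;
      }
      ```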

      I like Clang, but I also like GCC; I do not see *any* benefit to a compiler monoculture, quite the opposite in fact: competition is a good thing in the compiler space, as is license variety. They both work really well these days and I am thankful for the hard work of both teams. My job involves building a cross-platform software distribution using all the compilers under the sun, and I don't find it particularly problematic to switch between them, using the right one for each platform and job.

      Also, Clang on Windows is not really like Clang on Unix. It emulates MSVC's frontend, not GCC's, so in terms of build-system support work, dumping MSVC for Clang doesn't buy you anything at all. Now, if someone were to combine Clang's GCC-compatible frontend with its MSVC-compatible backend, that would be useful: we could use Autotools with Clang on Windows (and remove most of the custom code from CMake and all the other build systems that exists only to drive MSVC) to generate UCRT-based software, instead of being forced to target older C runtimes (which is the only downside of mingw-w64 on Windows).
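
      (For concreteness, a sketch of the two driver personalities; the flags shown are real Clang and clang-cl options, the file name is illustrative.)

      ```c
      /* driver_demo.c -- the same source, two Clang driver personalities.
       *
       * GCC-compatible driver (what Autotools and most Unix build systems
       * know how to drive):
       *   clang -O2 -c driver_demo.c -o driver_demo.o
       *
       * MSVC-compatible driver (clang-cl, what Clang emulates on Windows):
       *   clang-cl /O2 /c driver_demo.c /Fodriver_demo.obj
       *
       * Same compiler underneath, but a build system scripted for one set
       * of flags cannot drive the other without extra support code. */
      #include <stdio.h>

      int main(void)
      {
      #ifdef _MSC_VER /* clang-cl defines MSVC-style macros */
          printf("compiled by an MSVC-flavoured driver\n");
      #else
          printf("compiled by a GCC-flavoured driver\n");
      #endif
          return 0;
      }
      ```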
      Last edited by RayDonnelly; 30 December 2017, 10:04 AM.



      • #23
        Originally posted by coder View Post
        Have you ever actually built an RPM? I can only speak of SuSE's toolchain, but it doesn't matter where or on what you build it. It uses a chroot cleanroom to avoid any unintentional dependencies.
        Heh. I have probably built more Linux distro packages than you can count. But this just in... most software doesn't even run on a Linux desktop. You might be very unpleasantly surprised at how some of the software that you depend on day to day gets built.
        Over 10 years of shipping appliances built with gcc -O3, and never had a single compiler bug caused by it... that wouldn't also happen with -O2, at least.
        In my over 25 years of shipping software built with GCC I have seen many bizarre problems due to invalid assembly generated at -O3 that disappear at -O2. As a toolchain engineer I regularly encounter customers who complain about invalid assembly generated at -O3 that does not occur at -O2. Our certification for safety-critical software systems does not apply if a customer uses -O3.

        Perhaps our sample bases result in different experiences.
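
        (Whichever sample base you trust, the first triage step for an "-O3 broke it" report is the same: rule out latent undefined behavior, which -O3's extra passes exploit more aggressively than -O2's. A minimal sketch of the usual shape of such a bug; illustrative, not from either poster.)

        ```c
        /* ub_demo.c -- latent undefined behavior that often "works" at -O2
         * and misbehaves at -O3.  The loop reads a[4], one past the end.
         * Scalar code may happen to read harmless adjacent memory, while
         * -O3's auto-vectorizer may load several elements at once and
         * produce a different result or a crash.  The program was always
         * wrong; the optimization level only changes how that shows up. */
        #include <stdio.h>

        int main(void)
        {
            int a[4] = {1, 2, 3, 4};
            int sum = 0;
            for (int i = 0; i <= 4; i++)  /* bug: should be i < 4 */
                sum += a[i];
            printf("%d\n", sum);
            return 0;
        }
        ```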

        In my experience, the only time I even think about linking performance is when I'm doing an incremental build that updated a few .o files. Otherwise, it's vastly overshadowed by compilation time. I just link via gcc and each shared library explicitly links its dependencies.
        Well, nice for you. Incremental builds make link time overshadow compile time by orders of magnitude for any reasonably large software project, since every single symbol in every single GOT in every single DSO has to be passed through the linker for every partial build. It's only when you're doing a full build that compile time can be longer than link time, and that's only because you're compiling hundreds or thousands of source files.

        Try building Firefox or Gnome Shell, for example.

        No need for Michael to test this - the compiler & stlport teams already have tests and maintain report cards on conformance vs. various standards.
        Yes, I don't recommend he test compliance: there are a few test suites out there that already do that. That doesn't mean compliance should not be taken into account, though, since otherwise you would be comparing apples and oranges (does an implementation that omits a required function run faster or slower than one that provides it?).

        No. I'm not saying nobody uses OpenMP, but not for AI. All of the popular frameworks have optimized back-ends for the various hardware platforms. That's the only thing that makes sense.

        OpenMP is good for getting some low-hanging fruit with minimal effort, but not great for most serious purposes.
        That's interesting. OpenMP is how optimized back-ends for various hardware platforms (other than CUDA) get used from C, C++ or Fortran. I get plenty of queries from customers looking to use it.
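
        (For readers who have not touched it, a minimal sketch of that low-hanging fruit; the file name and sizes are illustrative, the pragma is standard OpenMP.)

        ```c
        /* omp_demo.c -- low-effort host parallelism with OpenMP.
         * Build with GCC:   gcc -fopenmp omp_demo.c -o omp_demo
         * Build with Clang: clang -fopenmp omp_demo.c -o omp_demo
         * The same directive style extends to accelerators via OpenMP
         * "target" offload, given a compiler and runtime that support it. */
        #include <stdio.h>

        #define N 10000000

        static double x[N], y[N];

        int main(void)
        {
            for (int i = 0; i < N; i++) {
                x[i] = i;
                y[i] = 2.0 * i;
            }

            double a = 3.0;
            /* One pragma spreads the loop across all available cores. */
            #pragma omp parallel for
            for (int i = 0; i < N; i++)
                y[i] = a * x[i] + y[i];   /* SAXPY-style kernel */

            printf("y[N-1] = %f\n", y[N - 1]);
            return 0;
        }
        ```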

        And as for vectorization, I don't even care about vectorization with any compiler that doesn't support the loop unrolling/pipelining hints I mentioned above.
        That's nice. You seem to be very important. Remember how Apple got into so much trouble at the start of the century for adding their own vectorization code to GCC without passing it upstream? You should have just told them you weren't going to need it and saved them the effort.
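
        (The specific hints coder refers to are not quoted on this page, but per-loop hints of this general sort exist in both compilers; a sketch, with the caveat that these pragmas are requests to the optimizer, not guarantees.)

        ```c
        /* hint_demo.c -- per-loop unrolling/vectorization hints. */
        #include <stddef.h>

        void scale(float *restrict dst, const float *restrict src,
                   float k, size_t n)
        {
        #if defined(__clang__)
            /* Clang's per-loop hints: request vectorization and 4x unroll. */
            #pragma clang loop vectorize(enable) unroll_count(4)
        #elif defined(__GNUC__)
            /* GCC gained a per-loop unroll hint in GCC 8; "#pragma GCC
             * ivdep" (since 4.9) similarly asserts there are no
             * loop-carried dependencies, to help the vectorizer. */
            #pragma GCC unroll 4
        #endif
            for (size_t i = 0; i < n; i++)
                dst[i] = k * src[i];
        }
        ```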



        • #24
          Originally posted by bregma View Post
          In my over 25 years of shipping software built with GCC I have seen many bizarre problems due to invalid assembly generated at -O3 that disappear at -O2. As a toolchain engineer I regularly encounter customers who complain about invalid assembly generated at -O3 that does not occur at -O2. Our certification for safety-critical software systems does not apply if a customer uses -O3.

          Perhaps our sample bases result in different experiences.
          In about 12 years, I've only used about a half dozen versions of GCC for production, mostly on x86/x86_64, and they've tended to be quite mature.

          Originally posted by bregma View Post
          Incremental builds make link time overshadow compile time by orders of magnitude for any reasonably large software project, since every single symbol in every single GOT in every single DSO has to be passed through the linker for every partial build. It's only when you're doing a full build that compile time can be longer than link time, and that's only because you're compiling hundreds or thousands of source files.
          I cannot comment on your experience; for me, a single invocation of the linker is typically comparable to compiling a single .o file. I use shared libraries and link each against its immediate dependencies.

          To the extent that incremental build times are a problem, I'd look to your build system and software architecture. I don't even bother to disable any optimizations unless I need to do so for debugging purposes.

          Originally posted by bregma View Post
          That's interesting. OpenMP is how optimized back-ends for various hardware platforms (other than CUDA) get used from C, C++ or Fortran. I get plenty of queries from customers looking to use it.
          Again, I'm not saying nobody uses it - just none of the deep learning frameworks I've used. Everyone already has optimized libraries for this: AMD has MIOpen and Nvidia has cuDNN. I'm sure Intel has something comparable. Frameworks like Caffe and TensorFlow can be built to use them (though AMD's stuff might still be on their own fork).

          Originally posted by bregma View Post
          You seem to be very important.
          Okay, let's not go there. Nobody wants this to get dirty.



          • #25
            Originally posted by anarki2 View Post
            Right now, I don't see any reason for GCC to survive the next decade though. Clang is becoming the common platform among Unix, Windows and macOS. Even on Unix, Linux is the only remnant that refuses to switch to Clang _by default_; the longterm 4.5 and 4.9 kernels are compatible, though. Android deprecated GCC back in 2016. Other than a few diehard GPL fans, no one's really dependent on GCC. Tell you what, more and more Visual Studio devs are replacing MSVC with Clang. They use the VS IDE, but not the MS compiler. That's how convenient Clang is. It's only a matter of time before Microsoft also deprecates MSVC - they're already contributors to Clang. If I had to predict, the last nail in the coffin will be Red Hat: once they move their kernels to Clang, it'll be the end of it all.

            It's not about performance, really. It's about effort vs benefit. When you have your code and you can choose to support 2 compilers or just 1, it only makes sense to dump the platform-specific GCC. Yeah, you have MinGW. Now *that* is slow compared to truly native code. OTOH Clang is pretty decent on all platforms.

            The same thing happened in the browser market. Chrome dominates. Webdevs nowadays only support WebKit/Blink, other engines either add quirks to work the same way or can go screw themselves (read: their users). As an avid Firefox user since around 0.7, I definitely can't say that I'm happy about this, but that's just the way it is. Unification on all levels.

            I'm totally aware that a lot of GCC fans will resist till the day of their death, much like we, to this day, have to endure the folks talking cr@p about systemd even in the most irrelevant discussions, but that won't change the big picture. One swallow does not make a summer. Just look at how Devuan is "thriving". You can't build a successful platform with the premise of refusing to evolve. Cementing the status quo is not what makes the World go round, and it's definitely not how software development works. At all.
            No matter how awesome Clang, Chrome, and systemd are - and I really like all of them - a monoculture is bad.

            So yes, I want GCC, Firefox, and non-systemd init systems to thrive alongside the big muscle in their respective segments.

