Intel Ivy Bridge: GCC 4.8 vs. LLVM/Clang 3.2 SVN


  • #11
    Clang's better error diagnostics...

    5) Clang offers significantly better error diagnostics. Yes, this matters. Calling it "form over function" is absolutely ridiculous -- the diagnostics _are the freaking function_ in a developer tool, for ****'s sake. I compile a release mode version of an application a few times per release. I develop, test, and debug code thousands of times per release. If I were forced to choose only one compiler, I'd choose the one that saves me massive amounts of time during development rather than the one that has a 5% speed boost, and I'd take all the money saved in my time and just buy a faster CPU. (Of course, in time, it won't be a trade-off anymore, as per point 1.) To say that "performance matters most" is just flat out preposterous and wrong: even as an AAA game developer, the difference really just isn't big enough to care about. (Granted, we can do dev in the intelligent compiler and then compile in release mode in the optimizing compiler. Having both is good!) We care about shaving off engineer time, cutting hundreds of thousands of dollars from costs. We do not care about the minimum specs being "6 year old computer" instead of "6.3 year old computer."
    Umm, GCC people are disputing your claims.



    • #12
      Originally posted by elanthis View Post
      Sigh.

      1) A year ago Clang/LLVM was nowhere close to competing with GCC. Now it has not only caught up on some benchmarks, but is actually ahead on others. And yet some doofuses want to make claims like "it will never catch up." Currently behind, yes, but it's improving at a much faster rate than GCC is.
      It's a lot easier to improve when you start out with nothing. There is absolutely no reason to suspect that improving from completely unoptimized, barely-working codegen (which is, necessarily, how LLVM/Clang or any compiler starts its life) to matching 75-80% of gcc's performance is an indicator that they'll close the last 20-25% delta and even pull ahead.

      In all of the benchmarks I've seen so far, all of LLVM's so-called "wins" are statistically insignificant, i.e. within 5%. Furthermore, not all of gcc's wins are due solely to the absence of OpenMP on LLVM; some of the benchmarked programs don't use OpenMP at all, yet still exhibit a 20% delta favoring gcc. Clearly, LLVM has a long way to go. The problem is that the last mile is 10 times harder than the previous hundred miles; the last 10 feet are 10 times harder than the last mile; the last inch is 10 times harder than the last foot; and so on (the analogy works better with the metric system, but I'm too lazy to edit what I typed).

      Unless LLVM literally copies and pastes much of the micro-optimization stuff from gcc, there is no reason to think that they will implement those expensive optimizations in any sort of reasonable timeframe. Look how long it took gcc to develop them. Moreover, there wasn't really any other good open source competition to gcc at the time the gcc devs were developing those optimizations, so they pretty much had to do them from scratch. Now, you might say that LLVM developers could just study the overall algorithmic approach that gcc takes to optimizing so well, and base their own optimizations on that. But by your own admission, the internal architectures of gcc and LLVM are wildly different. So LLVM will not be able to easily copy and paste from gcc, even if they were permitted to do so by the license, because of the difference in architecture. In short, LLVM will have to do things "mostly" from scratch, whereas gcc had to invent optimizations "entirely" from scratch.

      Originally posted by elanthis View Post
      2) GCC's internals suck. Even many of the people who work on GCC agree that it sucks. Companies like Facebook and Google are moving to Clang because GCC costs them far more money to develop with, even if they do perhaps use GCC for release mode compiles.
      I can perhaps understand Google working with GCC's internals. But why in God's name would an application development company like Facebook have to even look at the GCC source code? I mean, come on -- as the author of C, C++, Vala, C# and Java codebases spanning upwards of 50k SLOC, I've never once encountered some programming problem or compiler error within my program and thought "gee, I'd better try and hack on the compiler". You have to be writing some pretty edge-case, non-portable code to even reach that point. Stick with well-supported constructs and paradigms (design patterns) and you can type out hundreds of lines of code at a time with an extremely small warning/error rate, without even using an IDE. It's not rocket science.

      Anyway, the whole point of a compiler is that it is a tool; it's not the end product in itself. Having complex internals vs. well-documented and elegant internals does not add any sort of value to the end product. Having a compiler that produces fast code (or small code, depending on your needs) is a value-added attribute. Having a compiler that is simply well-designed is not, by itself, a value-added attribute. If forced to choose between a compiler that has more/better value-added attributes versus one that does not, it should be a no-brainer for anybody who has ever taken a business class, or even anyone whose goal is to deliver high quality products to whomever their customers are.

      Originally posted by elanthis View Post
      3) Clang has tool support. GCC does not, and probably never will, _by design_. This is what the doofuses never seem to understand. Nobody really gives a crap if LLVM compiles code that runs faster. Really, it doesn't matter. What does matter is that you can write high-quality analysis tools, refactoring tools, IDEs, etc. with Clang. You cannot write them with GCC, because GCC goes out of its way to stop people from doing it. You can completely remove the actual ability to output machine code from Clang and it would still be one of the most important projects around for C++ developers.
      I'm a big fan of developer tools and anything that enhances productivity, and I've seen some pretty incredible stuff with Eclipse CDT's gcc integration, and even more incredible stuff with Visual Studio's integration with the Microsoft C++ compiler. Yet neither gcc nor the Microsoft C++ compiler has "tool support" in the same way that Clang/LLVM does. So how is it possible that both the open source community and a proprietary company have used inferior toolchains to produce superior IDEs? Maybe I'm missing something, but I can refactor C++ code pretty damn well with Eclipse CDT or Visual Studio.

      I recall Michael posting an article/video a few months ago about some developer working on advanced tooling using LLVM, and I remember being pretty impressed. I mean, if you take this kind of thing to its logical extreme, C++ could almost start to approach the maintainability and productivity of Java, which is a huge feat for such a terrible language. So why don't we invest all those man-hours into the slow LLVM/Clang to make it as easy to develop with as Java, just so we can say C++ is the best? Meanwhile our "C++" will be almost as slow as Java because we aren't using a compiler that has fully explored runtime performance optimization.

      Originally posted by elanthis View Post
      4) The only areas where you see very large differences in performance between Clang and GCC are OpenMP-bound tests. That will be fixed, and when the feature lands, most of the gaps will shrink. There will no longer be massive crazy differences in performance on any benchmark for people to point out and sensationalize. Apps that don't use OpenMP (i.e., most of them) likewise don't see huge gaps in performance between GCC and Clang. Again, GCC may lead on most (but apparently not all) tests, but only by small margins that really just don't matter.
      For certain classes of program, sure, 10% performance doesn't matter. Common desktop software is to the point that it barely scratches the surface of what a modern CPU is capable of. But anything computationally-intensive is going to care a lot about even a 1% performance delta, to say nothing of 10%. You may not be able to notice the difference between Clang and gcc for Firefox or GNOME, but everything from video processing to scientific applications is extremely performance-sensitive, and slow-performing code can impact schedules by hours, days or even weeks, or require hardware upgrades that wouldn't be necessary otherwise.

      Originally posted by elanthis View Post
      5) Clang offers significantly better error diagnostics. Yes, this matters. Calling it "form over function" is absolutely ridiculous -- the diagnostics _are the freaking function_ in a developer tool, for ****'s sake. I compile a release mode version of an application a few times per release. I develop, test, and debug code thousands of times per release. If I were forced to choose only one compiler, I'd choose the one that saves me massive amounts of time during development rather than the one that has a 5% speed boost, and I'd take all the money saved in my time and just buy a faster CPU. (Of course, in time, it won't be a trade-off anymore, as per point 1.) To say that "performance matters most" is just flat out preposterous and wrong: even as an AAA game developer, the difference really just isn't big enough to care about. (Granted, we can do dev in the intelligent compiler and then compile in release mode in the optimizing compiler. Having both is good!) We care about shaving off engineer time, cutting hundreds of thousands of dollars from costs. We do not care about the minimum specs being "6 year old computer" instead of "6.3 year old computer."
      "Having both is good" is the first sane thing you've said. And of course good error messages matter for large scale applications where triaging a problem is prohibitively difficult. But you're overlooking one thing.

      Now, more than ever, gcc is poised to be able to produce better diagnostics than it has in the past. With the introduction of C++ into gcc's source code, internal APIs are being rewritten in an object-oriented manner, replacing old spaghetti code with a layered architecture that at least belongs in the same discussion as LLVM's architecture, even if LLVM is "even more layered" or "even more well-designed".

      The point is, while LLVM is trying to catch up to gcc's performance, gcc is trying to catch up to LLVM's usefulness to developers. People are working on both sides to shore up each compiler's weaknesses. Claiming that gcc's complex/unmaintainable internals make it unable to match LLVM in the long run is plain wrong, if for no other reason than that gcc's internals are actively being rewritten as we speak. But (hopefully) they'll keep all of the optimizations they have today, just layering them on top of cleaner abstractions, separating the code out into more shared libraries, etc. to make it more maintainable.

      I disagree, however, that the point of a compiler is to provide good diagnostics. I would rather that the compiler focus on what it does best -- compiling -- and run a different, separate tool that tells me why my code is bad.

      Actually, I'm a huge fan of clang analyzer. That is exactly the kind of application that LLVM is best at. All hail the open source equivalent to Coverity, which I hope will in time produce even better diagnostics than Coverity itself, and then some.

      In my perfect world, clang would -- as you said somewhere above -- be unable to actually codegen a built and linked binary. Its sole focus would be on helping developers improve their code by eliminating incorrect code (compiler errors, inadvisable practices, slow code, non-standard-compliant code, and so on). Clang could very easily fit into the open source ecosystem this way.

      If Clang DOES fully catch up with gcc on performance of compiled code, that's great -- but I think it unlikely because clang is much more valuable if efforts are concentrated on its diagnostics, which you yourself admitted are the most important aspect of a developer tool (not to be confused with a compiler). Clang seems more focused on being a developer tool from the get-go, so why not just push that angle and leave the release builds to gcc?

      Originally posted by elanthis View Post
      6) Clang and LLVM both are essential to the current Linux graphics stack. Anyone posting on a site like Phoronix and insulting Clang or LLVM is a very sad, confused individual. Unless you don't want optimized shaders, efficient software rendering, and OpenCL support, I guess.
      This is an honest question, because I don't know the answer for certain; but what in the world does Clang have to do with the Linux graphics stack? AFAIK only the core LLVM libraries are used. Sure, you can compile Mesa with clang; but it'll be needlessly slower than compiling it with gcc. I swear I've built and linked many a build of r600g and nouveau with the LLVM libraries installed but without the clang binary installed on my system. Maybe you're alluding to the fact that developers are using clang analyzer to help them diagnose their code? That's fine, for a developer tool, but that still doesn't make it a good compiler for the purpose of fast codegen.

      Originally posted by elanthis View Post
      Clang/LLVM is developed at a ridiculously rapid pace by developers from a wide range of companies (not just Apple), and a very significant portion of the professionals who actually use compilers in advanced scenarios (rather than cluelessly debating them on forums and occasionally compiling something) are migrating from GCC to Clang. Probably a reason for that. Probably at least 6 of them. Of course GCC is not irrelevant today, and is not going to be irrelevant any time soon, but to doubt that Clang is becoming more and more relevant is just silly. It will be some time before it supplants GCC on Linux, if it ever does, but frankly who cares? "Is default compiler on Linux distros" is about as important as "is default desktop background."
      Okay, smartass. Insulting everyone who uses gcc is just as stupid as insulting everyone who uses LLVM. As it stands right now, they serve vastly different purposes, which doesn't make them incompatible, and there's no reason not to use both of them as the situation demands. The truly intelligent engineer won't just blindly migrate to the latest fad project because it's new and cool; they will use whatever best fits the situation at hand. So let's TL;DR this whole discussion (a concrete sketch of the two-compiler workflow follows the list):

      First, as a premise: the year is 2012. Let's not get ahead of ourselves here.
      • Are you compiling a release build? Then use GCC!
      • Are you already familiar with GCC's error messages and know what to do when you see one? Then use GCC!
      • Are you stumped by an error, or looking to improve the quality of your code? Then use clang or clang-analyzer!
      • Are you writing a graphics driver and don't have the manpower / time to develop your own optimizing shader compiler? Then use LLVM!
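
      In practice the two-compiler workflow above is a one-line switch in most build systems. A minimal sketch, assuming a plain Makefile project that honors the conventional CC/CXX/CFLAGS variables (names and flags illustrative):

          # day-to-day development: fast builds, best diagnostics
          make CC=clang CXX=clang++ CFLAGS="-O0 -g"

          # release build: squeeze out the last few percent
          make CC=gcc CXX=g++ CFLAGS="-O2"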


      Now, the year is 2014. Oh, wait -- I was about to make a similar list for 2014, but then I realized I've misplaced my crystal ball. Maybe you have it over there and can find it for me, elanthis?



      • #13
        It would have made sense from my perspective to include results from the latest released versions of Clang and GCC, i.e. to see how their performance has evolved, rather than starting another pointless GCC vs. Clang bashing match.



        • #14
          Originally posted by allquixotic View Post
          I mean, if you take this kind of thing to its logical extreme, C++ could almost start to approach the maintainability and productivity of Java, which is a huge feat for such a terrible language.
           Writing ten-page essays and then hiding gems like this in the middle. 8/10 trolling.



          • #15
            Originally posted by allquixotic View Post
            This is an honest question, because I don't know the answer for certain; but what in the world does Clang have to do with the Linux graphics stack? AFAIK only the core LLVM libraries are used. Sure, you can compile Mesa with clang; but it'll be needlessly slower than compiling it with gcc. I swear I've built and linked many a build of r600g and nouveau with the LLVM libraries installed but without the clang binary installed on my system.
            Clang can parse OpenCL source code and produce LLVM bitcode from it, you dickhead.
            You need clang in the Linux compute stack.
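
            To spell it out: Mesa's OpenCL state tracker feeds kernel source through clang to get LLVM IR, which the backend then lowers for the GPU. A rough command-line sketch of the same front half (the kernel is illustrative, and the exact flags vary by clang version; some versions need extra flags to expose the OpenCL builtins):

                // add.cl -- trivial OpenCL kernel
                __kernel void add(__global const float *a,
                                  __global const float *b,
                                  __global float *out) {
                    int i = get_global_id(0);
                    out[i] = a[i] + b[i];
                }

                // clang -x cl -S -emit-llvm add.cl -o add.ll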



            • #16
              If I'm not mistaken, GCC's focus was never primarily about performance. They've always said that a fully cross-platform, full-featured compiler was their primary goal.



              • #17
                Originally posted by allquixotic View Post
                Unless LLVM literally copies and pastes much of the micro-optimization stuff from gcc, there is no reason to think that they will implement those expensive optimizations in any sort of reasonable timeframe.
                4 years ago, MSVC's compiler was generally considered to produce _much_ better optimized code than GCC, to the point that folks on proprietary OSes wondered what the hell people on Linux were thinking in using such a crappy compiler. Yet here GCC is today, producing _much_ better code in general than cl.exe does. Shock and surprise! Compilers can add optimizations! Stop the presses!

                Optimizations aren't waiting for some super secret magic voodoo that only one guy working on one compiler knows about. They're just waiting for someone to spend the time to implement them, after chewing through the long list of other things to do.

                I can perhaps understand Google working with GCC's internals. But why in God's name would an application development company like Facebook have to even look at the GCC source code?
                Facebook employs an entire compiler division, who currently all work on Clang. Quite simply, Facebook writes craploads of C++ code, has to maintain codebases and performance levels similar to what Google does, and -- again -- developer tools are just about the most important thing there is to invest in if you're a company that pays tons of engineers to develop code. Investing in compilers saves Facebook money. Hence, they invest in compilers. And after researching, they invested in Clang, not GCC.

                Facebook also invests in tons of other things that you apparently don't think they have any right investing in. I suppose you think all they do is run PHP, MySQL, and some RHEL Linux machines to run their big ol' website, just like your personal Wordpress site runs on a LAMP machine in your bedroom closet. Same thing, right?

                Stick with well-supported constructs and paradigms (design patterns) and you can type out hundreds of lines of code at a time with an extremely small warning/error rate, without even using an IDE. It's not rocket science.
                The difference in efficiency is massive. Yes, you _can_ write code without good tools or diagnostics. However, you will write more and better code with better diagnostics and tools.

                God knows I thought like you do for years, having let myself become convinced that Vim and GDB were 10000x better than any IDE could ever be. Heck, I _hated_ Visual Studio the first time I was forced to use it; it didn't have any cool text manipulation keybindings at all, and its regular expression search-and-replace was borderline useless! Of course, the difference in efficiency I find in myself now when I switch between Vim on Linux servers and VS on the Windows desktop is ridiculously huge, which is why I pay attention to Clang and the integration it's gaining in tools like Vim. Sure, I still want VS to improve more than it ever will... which is why it's amazing that Clang is letting some new and improved IDEs start taking over where VS currently reigns.

                If you don't believe me, fine. It's not my time or money you're wasting, so no skin off my back.

                Anyway, the whole point of a compiler is that it is a tool; it's not the end product in itself. Having complex internals vs. well-documented and elegant internals does not add any sort of value to the end product. Having a compiler that produces fast code (or small code, depending on your needs) is a value-added attribute. Having a compiler that is simply well-designed is not, by itself, a value-added attribute. If forced to choose between a compiler that has more/better value-added attributes versus one that does not, it should be a no-brainer for anybody who has ever taken a business class, or even anyone whose goal is to deliver high quality products to whomever their customers are.
                The point of the internals is that it _enables_ value-add. It enables analysis tools, IDE integration, refactoring tools, experimentation and research, project-specific plugins, C++ transformation tools, etc. etc. etc.

                I'm a big fan of developer tools and anything that enhances productivity, and I've seen some pretty incredible stuff with Eclipse CDT's gcc integration, and even more incredible stuff with Visual Studio's integration with the Microsoft C++ compiler.
                Neither of them does what it does by integrating with the compiler. CDT has its own C++ "parser," as does Visual Studio. Neither of them actually parses C++ correctly or completely, either, which is why they both do only very basic things, get stuff wrong a lot, get confused and give up a lot, etc. The tools you're talking about are also very basic: simple code-completion popups and a small handful of refactoring tools, nothing close to what the Java/C# tools offer.

                You'd think that the CDT folks wouldn't be interested in Clang as much as they are if your opinion of their tech was shared by the people who actually develop it.

                In fact, just off the top of my head I know that CodeBlocks, Eclipse CDT, and Qt Creator are all already working on Clang integration. Must be a conspiracy!

                Meanwhile our "C++" will be almost as slow as Java because we aren't using a compiler that has fully explored runtime performance optimization.
                That is sensationalist nonsense. The performance difference is not that large, and again -- the performance will improve. LLVM is very obviously not at its peak of performance. There is still a lot of low-hanging fruit that simply hasn't been picked yet because -- as you mentioned -- the project was basically at 0% just a few short years ago.


                You may not be able to notice the difference between Clang and gcc for Firefox or GNOME, but everything from video processing to scientific applications is extremely performance-sensitive, and slow-performing code can impact schedules by hours, days or even weeks, or require hardware upgrades that wouldn't be necessary otherwise.
                All of which are niche applications. GCC of course still exists, and you retain the option to do primary development with Clang and then do release-mode optimized compiles with GCC. I said that repeatedly.

                If you're working on iTunes or a kernel or one of the few scientific apps that the hundreds of millions of regular computer users consume (lawlz), by all means GCC is your primary choice in compiler. For everyone else... either use both, or if you're lazy and only use one, use the one that saves you time and headaches (Clang).

                Now, more than ever, gcc is poised to be able to produce better diagnostics than it has in the past. With the introduction of C++ into gcc's source code, internal APIs are being rewritten in an object-oriented manner, replacing old spaghetti code with a layered architecture that at least belongs in the same discussion as LLVM's architecture, even if LLVM is "even more layered" or "even more well-designed".
                So your reasoning is that it will be easier to rewrite massive chunks of GCC's horribly bad, aging architecture than it is to add some optimization passes to LLVM's clean codebase? Brilliant.

                The point is, while LLVM is trying to catch up to gcc's performance, gcc is trying to catch up to LLVM's usefulness to developers.
                No, it isn't. They're working on diagnostics. They are still outright hostile to ideas like embedding GCC in an IDE, or allowing GCC to output intermediate code, or using GCC inside of Mesa for CL support, or allowing the internals to support source-to-source transformations, etc. etc. etc. Literally outright opposed to making it possible, for fear that if GCC did, some evil proprietary company might use GCC's backend for a proprietary frontend or vice versa.

                I disagree, however, that the point of a compiler is to provide good diagnostics. I would rather that the compiler focus on what it does best -- compiling -- and run a different, separate tool that tells me why my code is bad.
                That's fine. That tool must have a C++ parser. That parser is Clang. Tada.
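
                And that parser is trivially reusable. A minimal sketch of exactly such a separate tool -- a standalone diagnostic dumper built on libclang's stable C API (error handling omitted; build line illustrative):

                    // diag.cpp -- print every diagnostic clang produces for a file
                    // build: clang++ diag.cpp -lclang -o diag
                    #include <clang-c/Index.h>
                    #include <cstdio>

                    int main(int argc, char **argv) {
                        CXIndex index = clang_createIndex(0, 0);
                        // Parse argv[1] with no extra compiler flags and no unsaved files.
                        CXTranslationUnit tu = clang_parseTranslationUnit(
                            index, argv[1], 0, 0, 0, 0, CXTranslationUnit_None);
                        unsigned n = clang_getNumDiagnostics(tu);
                        for (unsigned i = 0; i < n; ++i) {
                            CXDiagnostic d = clang_getDiagnostic(tu, i);
                            CXString s = clang_formatDiagnostic(
                                d, clang_defaultDiagnosticDisplayOptions());
                            std::printf("%s\n", clang_getCString(s));
                            clang_disposeString(s);
                            clang_disposeDiagnostic(d);
                        }
                        clang_disposeTranslationUnit(tu);
                        clang_disposeIndex(index);
                    }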

                If Clang DOES fully catch up with gcc on performance of compiled code, that's great -- but I think it unlikely because clang is much more valuable if efforts are concentrated on its diagnostics,
                You're under the impression that the people working on the frontend are the same people working on the backend, and that they're somehow splitting their efforts.

                This is not the case. They are different areas for people to be interested in. There are people whose sole interest is in test case writing, in writing standard libraries, in writing tools, or in writing the support code infrastructure that the frontend and backend both use.

                Ripping out the backend is not going to magically make all the people who are interested or skilled in low-level optimization passes suddenly become interested in writing frontend code, nor is the presence of the backend causing a bunch of folks with an interest in the frontend to grudgingly spend time writing backend codegen.

                This is an honest question, because I don't know the answer for certain; but what in the world does Clang have to do with the Linux graphics stack?
                Clang is being used for the OpenCL compilers in Mesa.

                Insulting everyone who uses gcc is just as stupid as insulting everyone who uses LLVM.
                I did no such thing. I insulted people who irrationally deride Clang for no reason. The set of people who use GCC and the set of people who hate Clang are not one and the same.

                Maybe you have it over there and can find it for me, elanthis?
                It turns out that making predictions based on technology trends and following what projects are doing is actually pretty easy. You obviously disagree, but then you're also obviously not following the projects anywhere besides the little articles on Phoronix (and even then not that closely, apparently, if you missed Clang's presence in Mesa).

                I bet you're also stumped as to how stock brokers manage to make money rather than having an even 50/50 split of returns vs. losses, huh?

                In any event, _tons_ of companies are investing heavily in Clang. Only a handful are investing in GCC. If nothing else, the manpower is going to slowly shift from GCC's aging developers to Clang's younger, energized developers.

                The future is not too hard to figure out if you pay attention instead of being defensive and emotionally attached to a freaking piece of software. There are surprisingly few surprises for those who stay informed and on top of the state of things. The world is not going to flip upside down one day, with one software package suddenly better than the other, without months and years of visible effort leading up to that moment.

                Not that I should be surprised; emotional attachment to software just because "it's what Linux already uses" seems to be the rule rather than the exception on Phoronix. Maybe it has something to do with nerds not being sports fans? If you're not rooting for a sports team, I guess human psychology needs to pick _something_ to irrationally favor and cheer on, because that's what we do, so nerds do it with software. Patriotism, team spirit, school spirit, Bloods or Crips, Bud Light or Coors, software fanboyism... yay humanity.
                Last edited by elanthis; 18 August 2012, 01:59 PM.



                • #18
                  elanthis, what is your opinion on compiling the Linux kernel with clang/llvm? Is it worth the effort to make a clean clang/llvm-compiled Linux distro?



                  • #19
                    Sigh.

                    Originally posted by elanthis View Post
                    1) A year ago Clang/LLVM was nowhere close to competing with GCC. Now it has not only caught up on some benchmarks, but is actually ahead on others.
                    Assuming you are speaking of the 7-zip compression test, it's totally pointless, as it doesn't set any optimization level, meaning GCC defaults to -O0 -- no optimization whatsoever, which is intended for debugging. I have posted tests (with script) here before which cover both 7-zip's built-in synthetic benchmark and a real-life test using -O1 through -O3, and Clang/LLVM hasn't gotten closer to GCC at all. Again, this Phoronix test shows nothing but Michael's total cluelessness when it comes to compiler testing -- almost as bad as testing encoders with assembly optimizations enabled when comparing compiler efficiency. I mean, wtf?
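
                    For anyone wondering why that invalidates the test, a minimal sketch (file name hypothetical; timings deliberately omitted):

                        // bench.cpp -- toy hot loop, the kind of code -O0 cripples
                        #include <cstdio>

                        int main() {
                            long sum = 0;
                            // At -O0, 'sum' and 'i' live in memory and every iteration
                            // loads and stores them; at -O2 they stay in registers and
                            // the division by a constant is strength-reduced.
                            for (long i = 0; i < 100000000L; ++i)
                                sum += i % 7;
                            std::printf("%ld\n", sum);
                        }

                        // g++ -O0 bench.cpp -o bench   # debug build: what an unflagged test measures
                        // g++ -O2 bench.cpp -o bench   # what any real package build uses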

                    Originally posted by elanthis View Post
                    And yet some doofuses want to make claims like "it will never catch up." Currently behind, yes, but it's improving at a much faster rate than GCC does.
                    Based upon what statistics?

                    Originally posted by elanthis View Post
                    2) GCC's internals suck. Even many of the people who work on GCC agree that it sucks.
                    Show me the posts where the GCC developers say the 'internals suck'. That they could be better, or that something else has better internals, does not equal 'suck' -- unless you are a troll with an agenda, like you.

                    Originally posted by elanthis View Post
                    Companies like Facebook and Google are moving to Clang because GCC costs them far more money to develop with, even if they do perhaps use GCC for release mode compiles.
                    Enough with the bullshit. I have no idea or interest in what Facebook does, but Google is not 'moving' to Clang; they are using and developing both projects. They have full-time developers working only on GCC, like Diego Novillo; they added and are maintaining Go support in GCC (Ian Lance Taylor); they have other people working on getting AddressSanitizer into GCC trunk (90% done); and there are other GCC-based projects from Google.

                    Originally posted by elanthis View Post
                    Clang has tool support. GCC does not, and probably never will, _by design_.
                    Clang only has tool support on OSX last I checked, and it's proprietary and ONLY able to run on OSX. And afaik the GCC plugin architecture exposes the internal GCC API you need to integrate your <insert tool here> with GCC. An example would be the GCC Python Plugin.
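
                    The plugin entry point really is tiny, too. A minimal sketch of a do-nothing GCC plugin (assuming a C++-built GCC; header locations, linkage details, and the build line vary by GCC version):

                        // noop_plugin.cpp -- skeleton GCC plugin that only announces itself
                        #include "gcc-plugin.h"
                        #include "plugin-version.h"
                        #include <cstdio>

                        // GCC refuses to load a plugin that does not export this symbol.
                        int plugin_is_GPL_compatible;

                        // Called by GCC when the plugin is loaded via -fplugin=.
                        extern "C" int plugin_init(struct plugin_name_args *plugin_info,
                                                   struct plugin_gcc_version *version) {
                            // Bail out if the plugin was built against a different GCC.
                            if (!plugin_default_version_check(version, &gcc_version))
                                return 1;
                            std::fprintf(stderr, "loaded plugin: %s\n",
                                         plugin_info->base_name);
                            return 0;  // nonzero signals failure
                        }

                        // build + load (illustrative):
                        //   g++ -shared -fPIC -I`gcc -print-file-name=plugin`/include \
                        //       noop_plugin.cpp -o noop_plugin.so
                        //   gcc -fplugin=./noop_plugin.so hello.c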

                    Originally posted by elanthis View Post
                    Nobody really gives a crap if LLVM compiles code that runs faster.
                    Like f*ck they don't

                    Originally posted by elanthis View Post
                    but only by small margins that really just don't matter.
                    To you, perhaps (although you'd likely say anything to defend your notion that LLVM is a gift from god), but for tons of others a difference of 10-20% performance is NOT negligible and DOES matter, because we are doing more taxing things with our computers than running an IDE or notepad.

                    Originally posted by elanthis View Post
                    Clang offers significantly better error diagnostics. Yes, this matters.
                    Yes, this is where Clang really shines from a user standpoint, and it is the main reason I have Clang/LLVM in my arsenal together with GCC. It's probably best in class in this respect (I have very little experience with commercial compilers outside of ICC), and deservedly so.
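
                    A minimal illustration of the kind of thing Clang gets right (the messages below are paraphrased from memory, not verbatim compiler output):

                        // typo.cpp
                        int main() {
                            int counter = 0;
                            conter += 1;   // misspelled identifier
                            return counter;
                        }

                        // clang (paraphrased): error: use of undeclared identifier
                        //   'conter'; did you mean 'counter'? -- plus a caret and a fix-it
                        // gcc 4.7 (paraphrased): error: 'conter' was not declared
                        //   in this scope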

                    GCC devs are targeting this area for improvement while fully acknowledging Clang/LLVM:


                    Originally posted by elanthis View Post
                    and a very significant portion of the professionals who actually use compilers in advanced scenarios (rather than cluelessly debating them on forums and occasionally compiling something) are migrating from GCC to Clang.
                    Please stop trying so hard to paint this picture of developers abandoning GCC with your anecdotal 'evidence', which consists of your claims about what is happening 'on forums'. The reality is that, despite your claims to the contrary, GCC is going as strong as ever, which is good for everyone except crazy fanboys/zealots like you and others in this thread, because choice is a GOOD THING and competition is a GOOD THING.

                    Originally posted by elanthis View Post
                    It will be some time before it supplants GCC on Linux, if it ever does, but frankly who cares? "Is default compiler on Linux distros" is about as important as "is default desktop background."
                    Or just about as important as the 'default compiler on OSX'. The problem with Clang/LLVM in this respect is that it currently doesn't even compile the Linux kernel, and it's unlikely it ever will, as that demands a concentrated effort (just as getting Clang/LLVM to compile FreeBSD required changes in both FreeBSD and Clang/LLVM), and it seems no real effort is being made. So no, I don't think it will ever supplant GCC on Linux; but, like you said, that really has no large impact, as Clang/LLVM can be very useful outside of the actual kernel.



                    • #20
                      Sigh. This isn't worth arguing ad nauseam with people who have zero investment in any of it. I give up. GCC was the bestest compiler ever 3 days before the first line of code was written for it, and it will be the bestest compiler ever until the very end of the human race. RMS is a God among men. Linux as it exists right now today will never ever change because it is the most perfectest OS ever, and change is evil and all change is a plan by Microsoft to subvert and destroy Freedom. God bless the GPL.

