LLVM 2.6 Released, Clang Is Now Production Ready

• #1
  Phoronix: LLVM 2.6 Released, Clang Is Now Production Ready

  Version 2.6 of LLVM, the Low-Level Virtual Machine, has been released. This modular compiler infrastructure can replace many parts of the GNU Compiler Collection, and it goes well beyond the conventional role of a code compiler: it already provides run-time optimization within Apple's Mac OS X OpenGL implementation, and it is similarly going to be used within Gallium3D. The 2.6 release is a major leap forward. LLVM 2.6 includes better x86_64 code generation, new code generators for multiple architectures, support for SSE 4.2, and improved optimizations. Perhaps most notably, it is the first release to include Clang, which is now at "production quality" status for C and Objective-C on x86...

  http://www.phoronix.com/vr.php?view=NzYzNw

• #2
  To what extent can LLVM be used on Linux? For example, can it compile the kernel? Can it be used alongside GCC? What are the benefits of using LLVM?
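
  On the "alongside GCC" part: Clang is designed as a drop-in replacement for common GCC invocations, so the easiest experiment is to build the same file with both toolchains and compare. A minimal sketch (file name and flags are illustrative):

  Code:
  // hello.cpp -- build with each toolchain and compare:
  //   g++     -O2 hello.cpp -o hello-gcc
  //   clang++ -O2 hello.cpp -o hello-clang
  // Note: at 2.6, Clang is "production quality" for C, while its C++
  // support is still a work in progress, so a fair test would use a
  // plain-C file with `clang` -- the invocation style is identical.
  #include <cstdio>

  int main() {
      std::printf("hello from either compiler\n");
      return 0;
  }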

• #3
  My compilers class is building a Lua frontend for LLVM :-). It's definitely good to start over once in a while, as GCC is old and crusty. I think LLVM is a couple of years away from being able to replace GCC within Linux, but it already enables so much more: there's the JIT, the optimizations. In fact, Apple switched to it as the compiler for the iPhone.

  FreeBSD is trying to switch to it too, and the Gentoo folks as well. It compiles the FreeBSD kernel.

  To sum it up, LLVM is the Low-Level Virtual Machine, not GCC take 2. The fact that it can compile to native code is necessary for its mission and nice to have, but it is much more than GCC. And supposedly it provides a very clean and modern C++ API for devs to mess with.
  Last edited by garytr24; 10-25-2009, 12:54 PM. Reason: added some stuff
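
  To give a feel for that "clean and modern C++ API", here is a minimal sketch that builds a one-instruction function in LLVM IR and dumps it. Header paths and signatures are from the 2.x era and shifted around in later releases, so treat this as illustrative:

  Code:
  // Build IR for: int add1(int x) { return x + 1; }
  #include "llvm/LLVMContext.h"
  #include "llvm/Module.h"
  #include "llvm/Constants.h"
  #include "llvm/DerivedTypes.h"
  #include "llvm/Support/IRBuilder.h"
  using namespace llvm;

  int main() {
      LLVMContext &Ctx = getGlobalContext();
      Module *M = new Module("demo", Ctx);

      // Declare the function type: i32 (i32)
      std::vector<const Type*> Args(1, Type::getInt32Ty(Ctx));
      FunctionType *FT = FunctionType::get(Type::getInt32Ty(Ctx), Args, false);
      Function *F = Function::Create(FT, Function::ExternalLinkage, "add1", M);

      // One basic block: return x + 1
      BasicBlock *BB = BasicBlock::Create(Ctx, "entry", F);
      IRBuilder<> B(BB);
      Value *X = F->arg_begin();
      B.CreateRet(B.CreateAdd(X, ConstantInt::get(Type::getInt32Ty(Ctx), 1)));

      M->dump();  // print the textual IR to stderr
      delete M;
      return 0;
  }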

• #4
  If such large performance improvements over GCC are possible, why has GCC itself not been changed? It's not short of developers, money or expertise.

• #5
  Last time I checked, the higher optimization levels were yet to be implemented (i.e. no profile-guided optimizations), but maybe this has changed?

  GCC is a behemoth, not easily changed. GCC can create very good code, but sometimes that requires you to know the magic set of flags. In simple benchmarks I've seen GCC fail miserably, even though everything the code was supposed to be doing was dead simple.

  I like the other features (beyond faster compile times and, hopefully, faster programs) even more, but I'm not sure which languages can take full advantage of them (things like cross-platform LLVM bitcode with run-time optimization, yummy!). It sure won't work for C/C++, thanks to #define (and probably other reasons too).

• #6
  One thing GCC can't do at all that modern compilers do, according to my compilers prof, is profiling and optimizations that make use of it. And GCC is huge and a pain to work with; it's been around for a really long time. LLVM brings a whole new, clean infrastructure for compiler development. C and Objective-C are pretty much done, and C++ is a work in progress.

• #7
  Originally posted by Micket View Post
  Last time I checked, the higher optimization levels were yet to be implemented (i.e. no profile-guided optimizations), but maybe this has changed?

  GCC is a behemoth, not easily changed. GCC can create very good code, but sometimes that requires you to know the magic set of flags. In simple benchmarks I've seen GCC fail miserably, even though everything the code was supposed to be doing was dead simple.

  I like the other features (beyond faster compile times and, hopefully, faster programs) even more, but I'm not sure which languages can take full advantage of them (things like cross-platform LLVM bitcode with run-time optimization, yummy!). It sure won't work for C/C++, thanks to #define (and probably other reasons too).

  Why would #defines break it? Do you mean like #define WIN32 or LINUX? I think the value would be, say, for the Flash plugin to run on multiple architectures, or a USB thumbdrive that can boot a Linux OS on your ARM smartbook or netbook or whatever. Apple's using it right now for OpenCL and graphics shaders: if your video card can't support the shaders, the JIT compiles them for the CPU.

• #8
  They're only claiming that 3x compilation speedup under certain, specific conditions - check out http://clang.llvm.org/performance.html.

  Basically, the Clang front-end seems to be really fast, while the back-end optimizers and code generators seem to run at a similar speed to GCC's.

• #9
  Originally posted by garytr24 View Post
  One thing GCC can't do at all that modern compilers do, according to my compilers prof, is profiling and optimizations that make use of it.

  What do the -fprofile-generate and -fprofile-use options do, then?
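
  For reference, those flags drive GCC's profile-guided optimization as a two-pass build. A minimal sketch (the file name and loop are made up for illustration):

  Code:
  // hot.cpp -- two-pass PGO build:
  //   g++ -O2 -fprofile-generate hot.cpp -o hot   # instrumented build
  //   ./hot                                       # run -> writes profile data (.gcda)
  //   g++ -O2 -fprofile-use hot.cpp -o hot        # rebuild using the profile
  #include <cstdio>

  int main() {
      long long sum = 0;
      for (long i = 0; i < 100000000; ++i) {
          if (i % 97 == 0)   // rare path
              sum -= i;
          else               // hot path; the profile tells GCC to favor it
              sum += i;
      }
      std::printf("%lld\n", sum);
      return 0;
  }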

• #10
  Originally posted by garytr24 View Post
  Why would #defines break it? Do you mean like #define WIN32 or LINUX? I think the value would be, say, for the Flash plugin to run on multiple architectures, or a USB thumbdrive that can boot a Linux OS on your ARM smartbook or netbook or whatever. Apple's using it right now for OpenCL and graphics shaders: if your video card can't support the shaders, the JIT compiles them for the CPU.

  Almost any platform- or architecture-related macro will ruin it. Given that #ifdef WIN32 stuff will probably alter a large part of the code, it's a given that it won't ever work (if it did, we wouldn't need the #ifdef to start with), but architecture-related defines break it too: from low-level things like endianness or the limits of basic types to more exotic changes, they can make any part of the code become something different depending on the available libraries or the architecture.
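
  A minimal sketch of the problem: the preprocessor resolves these conditionals before LLVM ever sees the code, so the resulting bitcode is already specialized for one platform (the macro names below are the usual predefined ones):

  Code:
  #include <cstdio>

  int main() {
  #ifdef _WIN32
      std::printf("win32 path compiled in\n");   // only this branch survives...
  #else
      std::printf("unix path compiled in\n");    // ...or only this one
  #endif
      // Architecture facts get baked in the same way: this is a
      // compile-time constant in the bitcode, not a run-time query.
      std::printf("sizeof(long) = %u\n", (unsigned)sizeof(long));  // 4 on Win64, 8 on LP64 Linux
      return 0;
  }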

  Ex-Cyber, I thought he meant more like "actually put it to good use"? Although I don't know how effective the profile-guided optimizations in GCC are.

• #11
  Originally posted by Game_boy View Post
  If such large performance improvements over GCC are possible, why has GCC itself not been changed? It's not short of developers, money or expertise.

  First, the GCC codebase is disgustingly hard to work with. It's a giant pile of hacky C code, whereas LLVM and Clang are (relatively) cleanly written C++ with intentional effort put into making them easy to hack on.

  Second, RMS and the FSF are tools that only just recently stopped trying to actively make GCC near impossible to reuse. Clang was designed to be easily used by third-party tools, like static analyzers or IDE integration, which RMS did not want to make possible with GCC out of an insane fear that proprietary developers would abuse any kind of parser API in a non-Free project. It took years of developers and users complaining, plus the looming threat of LLVM+Clang, to get him to change his tune.

  Third, LLVM has long had capable interprocedural analysis and optimization, while GCC has only just started to get support for such things via a rather hacky and kludgy approach. IPO allows for greater optimization of a program or library, and has been supported in various commercial compilers for some time.
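
  A minimal sketch of what IPO buys you: with classic separate compilation the call below is opaque at the call site, while interprocedural (link-time) optimization can inline and fold it away. File names and the build line are illustrative; in the llvm-gcc/clang of this era, LTO over bitcode was reached via -O4:

  Code:
  // helper.cpp
  int answer() { return 42; }

  // main.cpp -- e.g. clang -O4 helper.cpp main.cpp -o prog
  int answer();                      // defined in the other translation unit
  int main() { return answer(); }    // without IPO: an opaque call
                                     // with IPO: folds to `return 42;`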

  Finally, LLVM has that "VM" part to it, while GCC is just a compiler. This opens up a lot of possibilities for Clang, including interpreting C/C++ code, which in turn opens up a lot of room for future compiler experimentation (say, a REAL meta-programming facility in C++, instead of the shitacular abuse of templates that people pass off as meta-programming today).

  GCC already died once (when it was replaced by the EGCS fork), and there's no reason to fear letting it die again. ALL projects die eventually. Linux itself will die someday - maybe not anytime soon, but it'll happen. Heck, Windows has already died twice, too. Clang could replace GCC, and it can only be an improvement (because if it isn't, it won't ever replace GCC in the first place).

• #12
  I think I get what you're saying, but LLVM has its own type system that is, for the most part, independent of machine code. Not sure about endianness. I don't think it was ever meant for compiling Win32/Linux binaries that run under different OSes - that would require the runtime libraries to be coherent under both, plus a proper interface. I think the problem goes away if LLVM gets pragmas or something for that; the programmer would have to know enough about what he's doing to guarantee that the code is JITable.
  Last edited by garytr24; 10-25-2009, 06:46 PM.

• #13
  Originally posted by Micket View Post
  Ex-Cyber, I thought he meant more like "actually put it to good use"? Although I don't know how effective the profile-guided optimizations in GCC are.

  I don't know how good they are, but they're definitely used. If you bootstrap the GCC compilation, you can see it generating and using the profile when building itself. So I'm guessing it has at least some benefit, or they wouldn't bother enabling it for the compiler.

  Maybe he meant LLVM doesn't have profile-based optimizations? From what I could find, it does have a framework set up for them, but not a lot of optimizations built on top of it.

• #14
  Originally posted by Ex-Cyber View Post
  What do the -fprofile-generate and -fprofile-use options do, then?

  There are two kinds of GCC optimizations:
  1. those enabled by -O2
  2. those that backfire, crash GCC, generate bad code, or are otherwise not reliable

  The flags you mention are of the latter sort, as the Mozilla devs discovered with Firefox. Gentoo ricers learned this the hard way too.

  Thus I think it's fair to disregard GCC's alleged support for profiling - at least until Firefox can be built with profile-guided optimization on Linux.
  Last edited by StringCheesian; 10-25-2009, 07:14 PM.

• #15
  Originally posted by garytr24 View Post
  One thing GCC can't do at all that modern compilers do, according to my compilers prof, is profiling and optimizations that make use of it. And GCC is huge and a pain to work with; it's been around for a really long time. LLVM brings a whole new, clean infrastructure for compiler development. C and Objective-C are pretty much done, and C++ is a work in progress.

  You should change professors: GCC has had profile-guided optimization since at least v3.4.
