LLVM 2.6 Released, Clang Is Now Production Ready

  • LLVM 2.6 Released, Clang Is Now Production Ready

    Phoronix: LLVM 2.6 Released, Clang Is Now Production Ready

    Version 2.6 of LLVM, the Low-Level Virtual Machine, has been released. This modular compiler infrastructure can replace many parts of the GNU Compiler Collection and goes far beyond the conventional role of a code compiler: it is already used within Apple's Mac OS X OpenGL implementation to provide optimizations and will similarly be used within Gallium3D. The project has taken a major leap forward with the 2.6 release. LLVM 2.6 includes better x86_64 code generation, new code generators for multiple architectures, support for SSE 4.2, improved optimizations, and, perhaps most notably, it is the first release to include Clang, which is now at "production quality" status for C and Objective-C on x86...


  • #2
    To what extent can LLVM be used on Linux? For example, can it compile the kernel? Can it be used alongside GCC? What are the benefits of using LLVM?



    • #3
      My compilers class is writing a Lua compiler frontend for LLVM :-). It's definitely good to start over once in a while, as GCC is old and crusty. I think LLVM is a couple of years away from being able to replace GCC within Linux, but it already enables so much more: there's the JIT, the optimizations. Apple has actually switched to it as the compiler for the iPhone.

      FreeBSD is trying to switch to it too, and the Gentoo folks as well. It already compiles the FreeBSD kernel.

      To sum it up, LLVM is the Low-Level Virtual Machine, not GCC take 2. The fact that it can compile to native code is necessary for its mission and nice to have, but it is much more than GCC. And supposedly it provides a very clean and modern C++ API for devs to mess with (see the sketch below).
      Last edited by garytr24; 25 October 2009, 12:54 PM. Reason: added some stuff
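
      For a taste of what that C++ API looks like, here is a minimal sketch (my own, not from the thread, and using header paths and signatures from recent LLVM releases rather than 2.6) that builds an integer add function in memory with LLVM's IRBuilder and prints the resulting IR:

      Code:
      // Minimal IRBuilder sketch: construct "i32 @add(i32, i32)" and dump the IR.
      // Header locations and signatures follow recent LLVM releases, not LLVM 2.6.
      #include "llvm/IR/IRBuilder.h"
      #include "llvm/IR/LLVMContext.h"
      #include "llvm/IR/Module.h"
      #include "llvm/IR/Verifier.h"
      #include "llvm/Support/raw_ostream.h"

      int main() {
        llvm::LLVMContext ctx;
        llvm::Module mod("demo", ctx);

        // Declare: i32 @add(i32 %a, i32 %b)
        llvm::Type *i32 = llvm::Type::getInt32Ty(ctx);
        llvm::FunctionType *fnTy =
            llvm::FunctionType::get(i32, {i32, i32}, /*isVarArg=*/false);
        llvm::Function *fn = llvm::Function::Create(
            fnTy, llvm::Function::ExternalLinkage, "add", &mod);

        // Build the body: return a + b.
        llvm::IRBuilder<> builder(llvm::BasicBlock::Create(ctx, "entry", fn));
        llvm::Value *sum = builder.CreateAdd(fn->getArg(0), fn->getArg(1), "sum");
        builder.CreateRet(sum);

        llvm::verifyFunction(*fn);        // sanity-check the generated IR
        mod.print(llvm::outs(), nullptr); // dump textual IR to stdout
        return 0;
      }

      Every frontend in the LLVM world (that Lua frontend included) essentially boils down to emitting IR through an interface like this and handing it to the shared optimizers and code generators.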



      • #4
        If such large performance improvements over GCC are possible, why has GCC itself not been changed? It's not short of developers, money or expertise.



        • #5
          Last time I checked, the higher optimization levels were yet to be implemented (i.e., no profile-guided optimizations), but maybe this has changed?

          GCC is a behemoth, not easily changed. GCC can create very good code, but sometimes that requires you to know the magic set of flags. In simple benchmarks I've seen GCC fail miserably, even though everything the code was supposed to do was dead simple.

          I like the other features (beyond faster compile times and, hopefully, faster programs) even more, but I'm not sure which languages can take full advantage of them (things like cross-platform LLVM bitcode with run-time optimization, yummy!). It sure won't work for C/C++, thanks to #define (and probably other reasons too).



          • #6
            One thing GCC can't do at all that modern compilers do, according to my compilers prof, is profiling and optimizations that make use of it. And GCC is huge and a pain to work with; it's been around for a really long time. LLVM brings a whole new, clean infrastructure for compiler development. C and Objective-C are pretty much done, and C++ is a work in progress.



            • #7
              Originally posted by Micket
              Last time I checked, the higher optimization levels were yet to be implemented (i.e., no profile-guided optimizations), but maybe this has changed?

              GCC is a behemoth, not easily changed. GCC can create very good code, but sometimes that requires you to know the magic set of flags. In simple benchmarks I've seen GCC fail miserably, even though everything the code was supposed to do was dead simple.

              I like the other features (beyond faster compile times and, hopefully, faster programs) even more, but I'm not sure which languages can take full advantage of them (things like cross-platform LLVM bitcode with run-time optimization, yummy!). It sure won't work for C/C++, thanks to #define (and probably other reasons too).
              Why would #defines break it? Do you mean like #define WIN32 or LINUX? I think the value would be, say, for the Flash plugin to run on multiple architectures, or a USB thumb drive that can boot a Linux OS on your ARM smartbook and netbook or whatever. Apple is using it right now for OpenCL and graphics shaders: if your video card can't support the shaders, the JIT compiles them to the CPU.
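
              As a rough illustration of that JIT path, here is a minimal sketch (my own, not from the thread) using the ORC LLJIT API of recent LLVM releases; the 2.6-era ExecutionEngine interface was different, and the lookup/cast step varies between versions. A bit of textual IR stands in for a shader or kernel, gets compiled to native code at run time, and is called like an ordinary function:

              Code:
              // Sketch: JIT-compile a small IR function at run time with ORC LLJIT.
              // API from recent LLVM releases; details differ in older versions.
              #include "llvm/ExecutionEngine/Orc/LLJIT.h"
              #include "llvm/IR/LLVMContext.h"
              #include "llvm/IR/Module.h"
              #include "llvm/IRReader/IRReader.h"
              #include "llvm/Support/SourceMgr.h"
              #include "llvm/Support/TargetSelect.h"
              #include <memory>

              int main() {
                llvm::InitializeNativeTarget();
                llvm::InitializeNativeTargetAsmPrinter();

                // Textual IR standing in for a "shader" we want to run on the CPU instead.
                const char *ir = "define i32 @square(i32 %x) {\n"
                                 "  %r = mul i32 %x, %x\n"
                                 "  ret i32 %r\n"
                                 "}\n";

                auto ctx = std::make_unique<llvm::LLVMContext>();
                llvm::SMDiagnostic err;
                std::unique_ptr<llvm::Module> mod =
                    llvm::parseIR(llvm::MemoryBufferRef(ir, "shader"), err, *ctx);

                auto jit = llvm::cantFail(llvm::orc::LLJITBuilder().create());
                llvm::cantFail(jit->addIRModule(
                    llvm::orc::ThreadSafeModule(std::move(mod), std::move(ctx))));

                // Look up the freshly generated native code and call it.
                auto sym = llvm::cantFail(jit->lookup("square"));
                auto *square = sym.toPtr<int (*)(int)>(); // older LLVM: cast getAddress()
                return square(7) == 49 ? 0 : 1;
              }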



              • #8
                They're only claiming 3x compilation performance under certain specific conditions - check out http://clang.llvm.org/performance.html.

                Basically, the Clang front-end seems to be really fast, while the backend optimizers and code generators seem to run at a similar speed to GCC's.



                • #9
                  Originally posted by garytr24
                  One thing GCC can't do at all that modern compilers do, according to my compilers prof, is profiling and optimizations that make use of it.
                  What do the -fprofile-generate and -fprofile-use options do, then?
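
                  Those flags are GCC's profile-guided optimization support. A sketch of the usual three-step workflow follows; the flags and the .gcda profile files are real GCC behaviour, while the file name and toy loop are made up for illustration:

                  Code:
                  // hot.cpp -- toy workload illustrating GCC's PGO flags (hypothetical file).
                  //
                  // Typical three-step workflow:
                  //   g++ -O2 -fprofile-generate hot.cpp -o hot   # build an instrumented binary
                  //   ./hot                                       # run it to collect .gcda profile data
                  //   g++ -O2 -fprofile-use hot.cpp -o hot        # rebuild, optimizing with that profile
                  #include <cstdio>

                  int main() {
                    // A branchy loop whose taken/not-taken statistics end up in the profile,
                    // letting the second compile lay out the hot path and inline accordingly.
                    long sum = 0;
                    for (long i = 0; i < 100000000; ++i)
                      sum += (i % 7 == 0) ? i : 1;
                    std::printf("%ld\n", sum);
                    return 0;
                  }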



                  • #10
                    Originally posted by garytr24
                    Why would #defines break it? Do you mean like #define WIN32 or LINUX? I think the value would be, say, for the Flash plugin to run on multiple architectures, or a USB thumb drive that can boot a Linux OS on your ARM smartbook and netbook or whatever. Apple is using it right now for OpenCL and graphics shaders: if your video card can't support the shaders, the JIT compiles them to the CPU.
                    Almost any platform- or architecture-related macro will ruin it. Given that #ifdef WIN32 stuff will probably alter a large part of the code, it's a given that it won't ever work (if it did, we wouldn't need the ifdef to start with), but architecture-related defines will break it as well: from low-level things such as endianness or the limits of basic types, to more exotic changes, which can basically alter any part of the code depending on the available libraries or architecture.

                    Ex-Cyber, I thought he meant more like "actually put it to good use"? Although I don't know how effective the profile-guided optimizations in GCC are.
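
                    To make the point concrete, here is a small made-up C++ snippet of the kind described above: by the time it reaches LLVM bitcode, the preprocessor has already baked in one platform's choices, so the same bitcode cannot simply be retargeted later.

                    Code:
                    // By the time this becomes LLVM bitcode, the #if decisions are gone:
                    // the bitcode contains only the branches chosen for the build target.
                    #include <cstdio>

                    #if defined(_WIN32)
                    static const char *config_dir() { return "C:\\ProgramData\\myapp"; }
                    #else
                    static const char *config_dir() { return "/etc/myapp"; }
                    #endif

                    int main() {
                      // sizeof(long) differs between targets (e.g. 4 on 64-bit Windows,
                      // 8 on x86_64 Linux) and is fixed at compile time, not run time.
                      std::printf("config: %s, sizeof(long) = %zu\n",
                                  config_dir(), sizeof(long));

                      // Endianness-dependent code is likewise frozen into whichever
                      // byte order the original compilation target had.
                    #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
                      std::puts("built for big-endian");
                    #else
                      std::puts("built for little-endian");
                    #endif
                      return 0;
                    }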

