
Thread: Building Gentoo Linux With LLVM/Clang

  1. #11
    Join Date
    Oct 2011
    Posts
    224


    Quote Originally Posted by artivision View Post
    If, in the future, we are able to compile to a VM like LLVM does, we will not need to target only one processor with one binary. For example, Android should use LLVM for Java and C++ and stop being a Java toy under patent attack. Another example: if closed-source programs and games are compiled with LLVM, they will be compatible with all processors. Someone has to develop the way.
    LLVM does not compile to a VM or interpreter. It is a compiler that produces machine code, just like GCC.
    People do not target one processor with one binary; with GCC you can download a binary that works on every processor. I think you mean architecture, such as x86 or ARM. There are many x86 processors, some from Intel, some from AMD, but one x86 binary works on all of them (unless you use processor-specific CFLAGS, which nobody does in distributed binaries).

    As for patents against Android, that is not related to Java. There was a lawsuit from Oracle against Google about Java, but Google won.

    Another example: if closed-source programs and games are compiled with LLVM, they will be compatible with all processors. Someone has to develop the way.
    Again, every compiler does this already. When you download software from the internet, it works on your machine, right? It is not compiled for your specific processor, just for x86.

    As for one binary for every architecture (x86, ARM, PPC, MIPS), that can be done with GCC. GCC supports ARM and x86, so a port is just a recompile, right? Wrong: many programs use bits of assembly for performance-critical parts (many games and graphics engines do), and it isn't always as easy as just a recompile.

    What you are describing is an interpreted language like Python. The interpreter has been ported to many architectures, and bytecode will run on the interpreter the same way whether the host is x86 or ARM. Compiling C++ like this defeats the whole purpose; interpreters and VMs kill performance.
    Actually, what you describe just produces bytecode for a VM. There are no binaries.

  2. #12
    Join Date
    Jun 2011
    Posts
    809


    Quote Originally Posted by uid313 View Post
    Cool!
    Linux on x86-64 now apparently compiles with LLVM, according to that website.

    <...snip...>

    Are many patches needed to compile the Linux kernel under LLVM/Clang?
    Why even ask, when you have already visited the website, which provides details, including where to grab the required tools and patches? If you are really that curious, wouldn't it make sense to have a look?

    git clone http://git.linuxfoundation.org/llvm-setup.git

    That git branch takes two seconds to download and shows you what patches are required (not only for the kernel, but also for Clang and LLVM). It also provides a README that explains how to use llvm-setup to build a Clang/LLVM-compiled kernel. As for the patches required:

    51 patches for x86_64 (itself)
    7 patches for LLVM
    2 patches for Clang

    plus other scripts and tools, such as wrappers that allow Clang/LLVM to substitute for GCC (not unlike the initial patches from LinuxDNA for Intel's C compiler, ICC, a few years ago). Now, I would imagine that the second you tried to compile a more customized kernel, i.e. one patched for other things, or started enabling features that have been disabled in their test configs, you would probably run into a lot of other problems.

  3. #13
    Join Date
    Jun 2008
    Location
    Perth, Scotland
    Posts
    433


    Fixing hardcoded CFLAGS is something Gentoo puts effort into anyway, since it can cause other problems.
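    A minimal sketch of the fix in question, as a hypothetical package Makefile (the target name is made up). A hardcoded line like `CFLAGS = -O9 -funroll-loops` overrides whatever the user or the ebuild set; taking the flags from the environment instead is what lets an alternative compiler or flag set be dropped in:

```make
# Respect externally supplied flags; only provide a default when unset.
CFLAGS ?= -O2

frobnicate: frobnicate.c
	$(CC) $(CFLAGS) -o $@ $<
```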

  4. #14
    Join Date
    Jun 2009
    Posts
    1,020


    Quote Originally Posted by n3wu53r View Post
    LLVM does not compile to a VM or interpreter. It is a compiler that produces machine code, just like GCC.
    People do not target one processor with one binary; with GCC you can download a binary that works on every processor. I think you mean architecture, such as x86 or ARM. There are many x86 processors, some from Intel, some from AMD, but one x86 binary works on all of them (unless you use processor-specific CFLAGS, which nobody does in distributed binaries).

    As for patents against Android, that is not related to Java. There was a lawsuit from Oracle against Google about Java, but Google won.


    Again, every compiler does this already. When you download software from the internet, it works on your machine, right? It is not compiled for your specific processor, just for x86.

    As for one binary for every architecture (x86, ARM, PPC, MIPS), that can be done with GCC. GCC supports ARM and x86, so a port is just a recompile, right? Wrong: many programs use bits of assembly for performance-critical parts (many games and graphics engines do), and it isn't always as easy as just a recompile.

    What you are describing is an interpreted language like Python. The interpreter has been ported to many architectures, and bytecode will run on the interpreter the same way whether the host is x86 or ARM. Compiling C++ like this defeats the whole purpose; interpreters and VMs kill performance.
    Actually, what you describe just produces bytecode for a VM. There are no binaries.
    Exactly right. I still had "what the F" looping in my head when I read the OP's message, but I think you explained this well enough.

  5. #15
    Join Date
    Jan 2009
    Posts
    451


    Quote Originally Posted by n3wu53r View Post
    As for one binary for every architecture (x86, ARM, PPC, MIPS), that can be done with GCC. GCC supports ARM and x86, so a port is just a recompile, right? Wrong: many programs use bits of assembly for performance-critical parts (many games and graphics engines do), and it isn't always as easy as just a recompile.
    I have found that most software is written entirely in C/C++/etc., and platform-specific assembly is written during the optimization phase. This does not typically prevent recompilation for a different target arch, but the resulting binary is usually much, much slower, since arch-specific optimizations are no longer used. I believe ffmpeg is a good example of this.

    The spirit of your post is entirely correct, though. It is often not a simple matter of specifying a new arch.



  6. #16
    Join Date
    Dec 2007
    Posts
    103


    Quote Originally Posted by n3wu53r View Post
    LLVM does not compile to a VM or interpreter. It is a compiler that produces machine code, just like GCC.
    People do not target one processor with one binary; with GCC you can download a binary that works on every processor. I think you mean architecture, such as x86 or ARM. There are many x86 processors, some from Intel, some from AMD, but one x86 binary works on all of them (unless you use processor-specific CFLAGS, which nobody does in distributed binaries).

    As for patents against Android, that is not related to Java. There was a lawsuit from Oracle against Google about Java, but Google won.


    Again, every compiler does this already. When you download software from the internet, it works on your machine, right? It is not compiled for your specific processor, just for x86.

    As for one binary for every architecture (x86, ARM, PPC, MIPS), that can be done with GCC. GCC supports ARM and x86, so a port is just a recompile, right? Wrong: many programs use bits of assembly for performance-critical parts (many games and graphics engines do), and it isn't always as easy as just a recompile.

    What you are describing is an interpreted language like Python. The interpreter has been ported to many architectures, and bytecode will run on the interpreter the same way whether the host is x86 or ARM. Compiling C++ like this defeats the whole purpose; interpreters and VMs kill performance.
    Actually, what you describe just produces bytecode for a VM. There are no binaries.
    In all fairness, LLVM can be used as a sort of VM by compiling programs to its IR and then having the compiler JIT-compile the resulting bitcode at runtime. In fact, I believe this is what Google is trying to achieve with PNaCl for Chrome.

  7. #17
    Join Date
    Aug 2012
    Posts
    5


    FreeBSD has not switched to Clang by default just yet. dim@ added (and MFC'd) the WITH_CLANG_IS_CC build option on 2012-03-01 (disabled by default), but that's about it. There is still a possibility that the official 10.0-RELEASE (around fall 2013?) will be built with GCC.
    Last edited by hisbug; 08-20-2012 at 02:44 PM.

  8. #18
    Join Date
    Nov 2007
    Posts
    1,024


    Quote Originally Posted by n3wu53r View Post
    LLVM does not compile to a VM or interpreter. It is a compiler that produces machine code, just like GCC.
    While LLVM can target specific architectures as its final output, and is usually used in that fashion... yes, it DOES support compiling to target-neutral LLVM-IR and then running that with the LLVM JIT and interpreter. LLVM-IR was explicitly designed for this use case originally. That is why it's called the Low Level Virtual Machine and not the Low Level Architecture-Specific Code Generator.

    For example, Google is using this feature of LLVM for their PNaCl (Portable Native Client) subproject of NaCl. You can compile your C/C++ code to LLVM-IR and distribute it as a NaCl app, which is then JIT'd for the architecture the user is on. It allows shipping a single binary that works on x86, ARM, etc. Currently, NaCl apps are actual binaries, so if the developer only compiles an x86 version, it can't run on phones or on any future architectures that pop up down the road. Unlike a traditional JIT optimizer, LLVM-IR also allows running the full optimization pass set (the same as if you had compiled directly to machine code in the first place), so the optimized binaries can be cached and reused, allowing a much greater level of optimization from a target-neutral VM bitcode than you can get with the usual JavaScript/JVM/.NET/etc. runtimes.

    There was also some brouhaha years back (before Clang, when using LLVM with C/C++ meant using the llvm-gcc frontend) about having entire distros ship packages as LLVM-IR and then compile them at install time, allowing a single package repo to install to any supported architecture. It was obviously a bad idea in several ways and never happened, and today it is no longer even that interesting, since there are all of three architectures used by common desktop-ish distros, but it was something that had a lot of interest back in the day.

    There are of course limitations to all this, especially as a lot of low-level C/C++ code makes assumptions about an architecture at compile time that the VM cannot magically fix, but in the common cases it all works perfectly fine.

  9. #19
    Join Date
    Oct 2008
    Posts
    2,908


    Quote Originally Posted by elanthis View Post
    Unlike a traditional JIT optimizer, LLVM-IR also allows running the full optimization pass set (the same as if you had compiled directly to machine code in the first place), so the optimized binaries can be cached and reused, allowing a much greater level of optimization from a target-neutral VM bitcode than you can get with the usual JavaScript/JVM/.NET/etc. runtimes.
    Just to clarify, that's exactly what .NET/Mono allow you to do as well.

  10. #20
    Join Date
    Jun 2011
    Posts
    315


    Debian is *NOT* planning to move to LLVM/Clang anytime soon. Last I heard, at the latest DebConf there were only a few people playing around with LLVM/Clang, and there was no serious push to make it work. The work they were doing had more to do with finding and filing bugs against LLVM/Clang, because it simply wasn't compiling some software. They were using the Debian archive to catch low-hanging fruit and easily find problems with LLVM/Clang. There are still a lot of packages that don't even compile with LLVM/Clang, and an unknown number of others that do compile, but compile into something that simply doesn't work properly.

    I believe they mentioned something about some packages using -O9 by default, which for some reason works in GCC even though -O9 doesn't exist, but it causes LLVM/Clang to refuse to compile the package, or to emit a warning which then gets upgraded to an error, because they're debugging LLVM/Clang with strict compile rules. Some packages use -O9 because they want to always get all the latest optimizations possible, and if the software stops working, it is assumed to be a GCC bug. I think it was also mentioned that the hardcoded -O9 flags were causing some problems with multiarch as well.

    If you want to know more about Debian's progress with LLVM/Clang, watch the videos from the latest DebConf on the topic.
