Richard Stallman Calls LLVM A "Terrible Setback"


  • Marc Driftmeyer
    replied
    Originally posted by brosis
    Because the major player behind LLVM is Apple.
    Apple invented LLVM after GCC changed its license to GPLv3. The Tivoization clause was critical.

    Apple explicitly wanted to be able to lock down all of its technologies and products by technical measures.
    They stayed with an outdated GCC version that was still licensed under GPLv2, and then started writing LLVM as a replacement.

    Stallman sees free software as software that proprietary companies cannot rip off.
    Apple explicitly wants to rip off the software, hence its heavy insistence on the absence of freedom protections.

    There is simply no other explanation.

    Those who try to derail the thread into different definitions of freedom are a big joke. There is freedom, which requires recursive protection of itself, and there is anarchy.

    It is up to the developer to decide which to pick, and up to the user to decide which to support. One way or another, "open source" will emerge.

    I am with Stallman on this one.
    LLVM comes from the University of Illinois; Lattner, his wife, and the other key architects created it there as Ph.D. students.

    Apple created Clang to support C/ObjC/ObjC++/C++ independently of GCC, and it is the project's biggest financial supporter.

    Meanwhile, Google, AMD, ARM, Sony, IBM, Oracle, NVIDIA, Cray, Adobe, and many others are all contributing to the entire LLVM/Clang/LLDB/compiler-rt family of projects.

    The LLVM License is quite clear:

    http://llvm.org/docs/FAQ.html#license

    http://opensource.org/licenses/UoI-NCSA.php



  • brosis
    replied
    Originally posted by Truth
    That does not support the assertion.
    No, that just proves that you are dumb, or blind, or both.

    It supports the assertion: the layer exists on ANY CELL PHONE, it has FULL ACCESS FROM THE CARRIER, it supports DIRECT WRITES INTO MEMORY and DIRECT EXECUTION, it is FULL OF BUGS, it is PROPRIETARY, it is written with NSA SUPPORT, and it is the ONLY REASON why mobile devices are HACKED FOR TRACKING. All of that is stated in the article.

    If the codebase is BSD, then the whole layer WILL BE PROPRIETARY and there is no way to change that.



  • mrugiero
    replied
    Originally posted by artivision
    I am sorry, no offense, but you don't really know what you are talking about. Four rules: a) 10% of a program's code is responsible for 90% of its performance, and vice versa, so GCC optimizes only that 10% aggressively, compiling it multiple times for many instruction subsets (like SSE4.2 or AVX), while the remaining 90% is plain i686 code; that is the main reason LLVM binaries are half the size, and LLVM gains 10%+ from this. b) Sometimes the best optimization can only be done at run time: +1 for LLVM and JIT. c) LLVM does the heavy part of the optimizations before the JIT stage, with bytecode extensions for many different CPU families, so it does not matter that the last part is JIT. d) When you compile for JIT, you can get by using only 10% of your CPU for that last (JIT) part, and that loss is smaller than the gain. It has been shown that a C11+LLVM (JIT) program or a Java program can be as fast as a C++ GCC program.
    And it seems you don't know what you are talking about either. First things first: JIT compilers basically do what you say, and good for them. However, if you want software optimized for the actual computer it runs on, you could just as well build it locally once and be done with it, instead of rebuilding every time it runs, which increases startup time. (This is probably partly mitigated by caching the generated binaries; I don't know the subject well enough to say whether actual JIT compilers do that, but it sounds sensible enough to assume most of them do.) After all, optimization is usually the most expensive part of compiling, and JIT only lets you avoid repeating the lexing and parsing, and maybe some parts of the optimization; the more you do before deployment, the less you can do specifically for the machine running the code.

    On the other hand, you are TERRIBLY, AWFULLY WRONG when you assume that LLVM implies JIT, or that there is no GCC JIT, although I acknowledge the latter is more an experiment than a production-quality thing. Maybe we are reading you wrong, but in all of your posts you seem to imply that LLVM implies JIT compilation, which is really not the case. I can't stress that enough. I can give you a statically compiled binary built with LLVM (through Clang), and you will not be able to run it on other architectures or optimize it locally, because it is not JIT.
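
    To make the distinction concrete, here is a minimal sketch; the file name and flags are only illustrative, but the same Clang/LLVM toolchain can either produce an ordinary native binary ahead of time or emit LLVM bitcode for a JIT tool such as lli to execute at run time.

        /* hello.c -- illustrative only: LLVM does not imply JIT.
         *
         * Ahead of time, optimized for the local machine, no JIT involved:
         *     clang -O2 -march=native hello.c -o hello && ./hello
         *
         * Or emit LLVM bitcode and run it through LLVM's JIT driver instead:
         *     clang -O2 -emit-llvm -c hello.c -o hello.bc && lli hello.bc
         */
        #include <stdio.h>

        int main(void)
        {
            puts("Same source, same LLVM; the JIT step is optional, not implied.");
            return 0;
        }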



  • artivision
    replied
    Originally posted by Zan Lynx
    I don't think you have actually used LLVM.

    Sure, it can be used in the way that you describe. But it can equally be used with the next stage, which is to compile to machine code. This is how Apple uses it. Their LLVM compiler used in Xcode does not produce files to JIT later. It produces binaries containing x86_64 machine code, x86 machine code, and/or ARM machine code.

    Using JIT every time the code runs is almost always a waste of time. Why do it when you can do it one time and be done?


    I am sorry, no offense, but you don't really know what you are talking about. Four rules: a) 10% of a program's code is responsible for 90% of its performance, and vice versa, so GCC optimizes only that 10% aggressively, compiling it multiple times for many instruction subsets (like SSE4.2 or AVX), while the remaining 90% is plain i686 code; that is the main reason LLVM binaries are half the size, and LLVM gains 10%+ from this. b) Sometimes the best optimization can only be done at run time: +1 for LLVM and JIT. c) LLVM does the heavy part of the optimizations before the JIT stage, with bytecode extensions for many different CPU families, so it does not matter that the last part is JIT. d) When you compile for JIT, you can get by using only 10% of your CPU for that last (JIT) part, and that loss is smaller than the gain. It has been shown that a C11+LLVM (JIT) program or a Java program can be as fast as a C++ GCC program.
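
    To make rule a) concrete, here is a minimal sketch of compiling a hot function several times for different instruction subsets. The target_clones attribute is a real GCC/Clang feature for x86, but the dot_product function and the exact target list here are only illustrative.

        /* Illustrative hot-path function, compiled once per listed ISA subset.
         * A sufficiently recent GCC or Clang emits an AVX2 clone, an SSE4.2
         * clone, and a baseline clone, and selects one at load/run time, so
         * only this "hot 10%" pays the size cost of multiple copies. */
        #include <stddef.h>

        __attribute__((target_clones("avx2", "sse4.2", "default")))
        double dot_product(const double *a, const double *b, size_t n)
        {
            double sum = 0.0;
            for (size_t i = 0; i < n; i++)
                sum += a[i] * b[i];
            return sum;
        }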

    As for the rest, I believe that between a good world (GPL) and a bad world (closed, patented), the best position is the bridge (BSD), so a small 5% of the free code should be BSD. I find it difficult for anyone to create a closed LLVM that is incompatible with the open one: first, because the reason to choose LLVM is its high compatibility, and second, even if they do, they can easily be tracked by us; we can keep following them more easily than we could a compatibility layer.



  • JX8p
    replied
    Originally posted by Sonadow
    This has been nothing more than fantasy since the early 2000s.

    The kernel ships with small x86 and x86-64 microcode blobs to support the processors used in the world's computers today. Not free.

    That motherboard with a BIOS or UEFI? Not free.

    Using a graphics card? Firmware blobs in the kernel. Not free.

    Using Wi-Fi? Unless it is an Atheros card supporting only up to 802.11n, every other card in existence today requires firmware blobs in the kernel. Not free. Atheros has also confirmed that their 802.11ac cards will require firmware blobs. And they are the most Linux-friendly manufacturer of wireless chipsets.
    Simply use a Sun Microsystems computer (that is, one from before Oracle eclipsed the Sun): it runs OpenBIOS, the SPARC CPU is open source, and framebuffers that are free of blobs are available...



  • MWisBest
    replied
    The fact that LLVM is released under such a non-restrictive license allows for more uses of it and more contributions to it. That's part of the reason it has gained so much ground so fast on GCC. With the FSF's model, innovation just stagnates.



  • Truth
    replied
    Originally posted by frostwarrior
    You won at the moment I answered you.
    Cheers.
    I'm glad I was able to free your mind of its FSF shackles.

    Embrace freedom.



  • frostwarrior
    replied
    You won at the moment I answered you.
    Cheers.



  • frostwarrior
    replied
    Sadly, your nick is "Truth" and you're making exaggerated claims about the GPL and RMS.
    You're a sad troll and you should feel bad.



  • aphirst
    replied
    Originally posted by Sonadow
    This has been nothing more than fantasy since the early 2000s.

    The kernel ships with small x86 and x86-64 microcode blobs to support the processors used in the world's computers today. Not free.

    That motherboard with a BIOS or UEFI? Not free.

    Using a graphics card? Firmware blobs in the kernel. Not free.

    Using Wi-Fi? Unless it is an Atheros card supporting only up to 802.11n, every other card in existence today requires firmware blobs in the kernel. Not free. Atheros has also confirmed that their 802.11ac cards will require firmware blobs. And they are the most Linux-friendly manufacturer of wireless chipsets.
    So your response is that because companies are already taking the piss to some extent, it's therefore totally fine to make it even easier for them to do it with something else incredibly important and central? Nice job, mate. We're all very proud.
