Richard Stallman Calls LLVM A "Terrible Setback"


  • Originally posted by aphirst
    The fear is that if LLVM were dominant, in 10 years' time the public version would only be able to perform very naive optimisations on the then all-but-secret architectures of AMD/Intel/Arm/etc., who provide their own proprietary fork of LLVM which optimises properly for their CPU type.
    And the difference between this scenario and the world right now is what? GCC is far from being the reference compiler for any architecture. There are vendor-specific compilers from IBM and Intel, and both generate better code than GCC. So LLVM does not change a thing.



    • Originally posted by aphirst
      those who wanted to live their lives using only Free software.
      That has been nothing more than a fantasy since the early 2000s.

      The kernel ships with tiny pieces of x86 and x64 microcode blobs to support the processors used by the world's computers today. Not free.

      That motherboard with a BIOS or UEFI? Not free.

      Using a graphics card? Firmware blobs in kernel. Not free.

      Using WiFi? Unless it is an Atheros card that supports only up to 802.11n, every other card in existence today requires firmware blobs in the kernel. Not free. Atheros has also confirmed that their 802.11ac cards will require firmware blobs. And they are the most Linux-friendly manufacturer of wireless chipsets.



      • Originally posted by Sonadow
        That has been nothing more than a fantasy since the early 2000s.

        The kernel ships with tiny pieces of x86 and x64 microcode blobs to support the processors used by the world's computers today. Not free.

        That motherboard with a BIOS or UEFI? Not free.

        Using a graphics card? Firmware blobs in kernel. Not free.

        Using WiFi? Unless it is an Atheros card that supports only up to 802.11n, every other card in existence today requires firmware blobs in the kernel. Not free. Atheros has also confirmed that their 802.11ac cards will require firmware blobs. And they are the most Linux-friendly manufacturer of wireless chipsets.
        So your response is that because companies are already taking the piss to some extent, it's therefore totally fine to make it even easier for them to do it with something else incredibly important and central? Nice job, mate. We're all very proud.



        • Sadly, your nick is "Truth", yet you're making exaggerated claims about the GPL and RMS.
          You're a sad troll and you should feel bad.



          • You won the moment I answered you.
            Cheers.



            • Originally posted by frostwarrior
              You won the moment I answered you.
              Cheers.
              I'm glad I was able to free your mind of its FSF shackles.

              Embrace freedom.



              • The fact that LLVM is released under such a non-restrictive license allows for more uses of it and more contributions to it. That's part of the reason it has gained so much ground so fast on GCC. With the FSF's model, innovation just stagnates.



                • Originally posted by Sonadow
                  That has been nothing more than a fantasy since the early 2000s.

                  The kernel ships with tiny pieces of x86 and x64 microcode blobs to support the processors used by the world's computers today. Not free.

                  That motherboard with a BIOS or UEFI? Not free.

                  Using a graphics card? Firmware blobs in kernel. Not free.

                  Using WiFi? Unless it is an Atheros card that supports only up to 802.11n, every other card in existence today requires firmware blobs in the kernel. Not free. Atheros has also confirmed that their 802.11ac cards will require firmware blobs. And they are the most Linux-friendly manufacturer of wireless chipsets.
                  Simply use a Sun Microsystems computer (one from before Oracle eclipsed the Sun, that is). It runs OpenBIOS, the SPARC CPU design is open source, and framebuffers that are free of blobs are available...



                  • Originally posted by Zan Lynx
                    I don't think that you have actually used LLVM?

                    Sure, it can be used in the way that you describe. But it can equally be used with the next stage, which is to compile to machine code. This is how Apple uses it. Their LLVM compiler used in Xcode does not produce files to JIT later. It produces binaries containing x86_64 machine code, x86 machine code, and/or ARM machine code.

                    Using JIT every time the code runs is almost always a waste of time. Why do it when you can do it one time and be done?


                    I am sorry, no offense, but you don't really know what you are talking about. Four rules:
                    a) 10% of a program's code is responsible for 90% of its performance, and vice versa. So GCC optimizes only that 10%, compiling it multiple times for different instruction-set subsets (like SSE4.2 or AVX), while the remaining 90% is plain i686 code; that's the main reason LLVM binaries are half the size. LLVM gains 10%+ from this (see the sketch after this post).
                    b) Sometimes the best optimization can only be done at runtime: +1 for LLVM and JIT.
                    c) LLVM does the heavy part of the optimization pre-JIT, with bytecode extensions for many different CPU families, so it does not matter that the last stage is a JIT.
                    d) When you compile for a JIT, you may spend only 10% of your CPU on that last (JIT) stage, and that loss is smaller than the gain. It has been shown that a C11+LLVM (JIT) program or a Java program can be as fast as a C++ GCC program.

                    For all the rest, I believe that between a good world (GPL) and a bad world (closed, patented), the best place to stand is on the bridge (BSD). So a small 5% of the free code should be BSD. I find it difficult for someone to create a closed LLVM that is incompatible with the open one: first, because the reason to choose LLVM is its high compatibility, and second, even if they do, they can easily be tracked by us; we can keep following them more easily than we could a compatibility layer.
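A rough illustration of the "compile the hot 10% several times for different ISA subsets" idea from the post above: GCC's function multi-versioning does something very similar via the target_clones attribute. This is a minimal sketch, not code from the thread; it assumes Linux/x86 and a GCC new enough to support target_clones (which did not exist when this thread was written), and the dot() function is made up for illustration.

```c
#include <stdio.h>

/* Hypothetical hot function: GCC emits one clone per listed target plus a
 * baseline "default" clone, and an ifunc resolver picks the best clone for
 * the running CPU at load time. The cold 90% of the program is compiled
 * only once, for the baseline architecture. */
__attribute__((target_clones("avx2", "sse4.2", "default")))
double dot(const double *a, const double *b, int n)
{
    double acc = 0.0;
    for (int i = 0; i < n; i++)
        acc += a[i] * b[i];
    return acc;
}

int main(void)
{
    double a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8};
    /* The call site is ordinary C; dispatch happens behind the scenes. */
    printf("dot = %f\n", dot(a, b, 4));
    return 0;
}
```

Build with something like `gcc -O3 multiversion.c`; whether the AVX2, SSE4.2, or baseline clone runs is decided once, on the machine that executes the binary.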



                    • Originally posted by artivision
                      I am sorry, no offense, but you don't really know what you are talking about. Four rules:
                      a) 10% of a program's code is responsible for 90% of its performance, and vice versa. So GCC optimizes only that 10%, compiling it multiple times for different instruction-set subsets (like SSE4.2 or AVX), while the remaining 90% is plain i686 code; that's the main reason LLVM binaries are half the size. LLVM gains 10%+ from this.
                      b) Sometimes the best optimization can only be done at runtime: +1 for LLVM and JIT.
                      c) LLVM does the heavy part of the optimization pre-JIT, with bytecode extensions for many different CPU families, so it does not matter that the last stage is a JIT.
                      d) When you compile for a JIT, you may spend only 10% of your CPU on that last (JIT) stage, and that loss is smaller than the gain. It has been shown that a C11+LLVM (JIT) program or a Java program can be as fast as a C++ GCC program.
                      And it seems you don't know what you're talking about either. First things first, JIT compilers basically do what you say; good for them. However, if you want software optimized for the actual computer it runs on, you could just as well build it locally once and be done with it, instead of rebuilding every time it runs, which increases start-up time. (This is probably partly solved by caching the generated binaries; I don't know the subject well enough to say whether actual JIT compilers do that, but it sounds sensible enough to assume most of them do.) After all, optimization is usually the most expensive part of compiling; when you JIT, you only get to skip repeating the lexing and parsing, and maybe some of the optimization, and the more you do before deployment, the less you can do specifically for the machine running the code.

                      On the other hand, you are TERRIBLY, AWFULLY WRONG when you assume that LLVM implies JIT, or that there is no GCC JIT, although I acknowledge the latter is more an experiment than a production-quality thing. Maybe we read you wrong, but in all of your posts you seem to imply that LLVM implies JIT compilation, which is really not the case. I can't stress that enough. I can give you a statically compiled binary, built with LLVM (through Clang), and you will not be able to run it on other architectures or optimize it locally, because it is not JITed.
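To make the "LLVM does not imply JIT" point concrete: Clang's normal pipeline is ahead-of-time (it emits ordinary object files and executables), and LLVM's JIT is a separate library you have to opt into. The sketch below is a minimal, hand-written example of explicitly asking LLVM to JIT one function through its C API; it is illustrative only, the module and function names are made up, and the exact entry points (e.g. LLVMGetFunctionAddress, which requires MCJIT and a reasonably recent LLVM) vary between LLVM versions.

```c
#include <stdio.h>
#include <stdint.h>
#include <llvm-c/Core.h>
#include <llvm-c/ExecutionEngine.h>
#include <llvm-c/Target.h>

int main(void)
{
    /* The JIT is something you link in and initialize explicitly;
     * nothing in LLVM forces this path on you. */
    LLVMLinkInMCJIT();
    LLVMInitializeNativeTarget();
    LLVMInitializeNativeAsmPrinter();

    /* Build IR for: i32 sum(i32 a, i32 b) { return a + b; } */
    LLVMModuleRef mod = LLVMModuleCreateWithName("jit_demo");
    LLVMTypeRef params[2] = { LLVMInt32Type(), LLVMInt32Type() };
    LLVMTypeRef sum_ty = LLVMFunctionType(LLVMInt32Type(), params, 2, 0);
    LLVMValueRef sum = LLVMAddFunction(mod, "sum", sum_ty);

    LLVMBuilderRef b = LLVMCreateBuilder();
    LLVMPositionBuilderAtEnd(b, LLVMAppendBasicBlock(sum, "entry"));
    LLVMBuildRet(b, LLVMBuildAdd(b, LLVMGetParam(sum, 0),
                                    LLVMGetParam(sum, 1), "result"));

    /* Compile the module to native code in memory and call the result. */
    char *error = NULL;
    LLVMExecutionEngineRef ee;
    if (LLVMCreateExecutionEngineForModule(&ee, mod, &error)) {
        fprintf(stderr, "failed to create execution engine: %s\n", error);
        return 1;
    }
    int (*sum_fn)(int, int) =
        (int (*)(int, int))(uintptr_t)LLVMGetFunctionAddress(ee, "sum");
    printf("sum(2, 3) = %d\n", sum_fn(2, 3));

    LLVMDisposeBuilder(b);
    LLVMDisposeExecutionEngine(ee); /* also frees the module */
    return 0;
}
```

Build with something like `cc jit_demo.c $(llvm-config --cflags --ldflags --libs core mcjit native)` and it prints sum(2, 3) = 5; compile the same add function with plain `clang -c` instead and you get an ordinary, architecture-specific object file with no JIT involved anywhere.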

