
Intel Linux Graphics Developers No Longer Like LLVM


  • #21
    LLVM sucks

    Having recently spent some time trying to get lljvm working again, I can only echo the general sentiment: LLVM sucks. Every point release introduces numerous incompatible API changes, and some APIs are withdrawn with no equivalent replacement provided. LLVM is clearly an immature project thrown together by people who have no clue about software engineering or design. For it to still be so immature after so many years is pretty conclusive proof of that.

    (My lljvm tree, for reference https://github.com/hyc/lljvm/tree/llvm3.3)

    Having spent a few days beating this into submission for llvm 3.3, still with missing functionality, I just don't have the will to update it again for llvm 3.4. Now I understand why the original lljvm author abandoned the project. Nobody in their right mind would use this piece of garbage.
    Last edited by highlandsun; 02-08-2014, 09:03 AM.



    • #22
      Originally posted by the303 View Post
      Nah. The mystic almighty god LLVM is the best in all the worlds in the universe that can do anything from compiling to making you a coffee.

      To the actual point: LLVM is a compiler backend, and it should be the correct tool for the job; at least it should become the correct tool with little modification. People say LLVM is not as good a thing as advertised, so I guess this is what they are talking about: LLVM is not as flexible as it should have been. Clang seems to have passed GCC in benchmarks, though.
      I can tell you LLVM isn't.
      Lowering code that accesses local shared memory (aka LDS) from LLVM IR to R600 machine code is a pain in the ass.
      Something doesn't become universal just because it advertises itself as universal.



      • #23
        Originally posted by the303 View Post
        I am an electrical engineer, so I have always been irritated by the idea of JIT/AOT instead of static compiling. It makes me wanna puke because it is pointless and a waste of resources. The machines may be powerful enough for the end user not to notice the overhead, but on a large scale (all computers over the world) it will cause considerable energy loss, and for what? Just because a developer was too lazy to pre-compile his/her program for X platform and would rather have every PC compile the same thing on the fly each and every time the program runs (in the case of JIT), or compile it once (in the case of AOT), as if there were a thousand platforms to target (and even that could be automated).
        But that's not LLVM's fault; the use of JIT/AOT compiling with shaders is how they are designed, if I understand it correctly. LLVM can be used for static compiling, Clang is an example of it, and AFAIK that's how it's most widely used. I'm against JIT the same as you are, although there is a single case where I don't consider it just laziness, which is to get machine-specific optimizations. Still, it's true that you could just automate all of the possible (relevant) builds and make them once before deployment.



        • #24
          Originally posted by mrugiero View Post
          Still, it's true that you could just automate all of the possible (relevant) builds and make them once before deployment.
          Until you need to run on hardware that wasn't released at the time you shipped your code... GPUs, unlike CPUs, do not share a common instruction set that allows broad forward compatibility.



          • #25
            Originally posted by Veerappan View Post
            Until you need to run on hardware that wasn't released at the time you shipped your code... GPUs, unlike CPUs, do not share a common instruction set that allows broad forward compatibility.
            He was talking about general-purpose executables, referring to my post. You are right: the frequent changes in GPU architectures and the need to get the maximum out of the GPU mean shaders have to be JIT-compiled.

            Originally posted by mrugiero View Post
            But that's not LLVM's fault; the use of JIT/AOT compiling with shaders is how they are designed, if I understand it correctly. LLVM can be used for static compiling, Clang is an example of it, and AFAIK that's how it's most widely used. I'm against JIT the same as you are, although there is a single case where I don't consider it just laziness, which is to get machine-specific optimizations. Still, it's true that you could just automate all of the possible (relevant) builds and make them once before deployment.
            You are right, shaders work that way, LLVM or not. My statement was more general: people seem to advocate LLVM because it allows JIT usage, as if that were such a great and needed thing (although there are better alternatives). You're also right about the one advantage of JIT (it applies to AOT too): processor-specific optimizations. One alternative, like you said, is to avoid it with automated builds of variants. There is a better way, though: compiling all variants into one single binary, where code execution takes different paths after checking the CPU instruction set extensions at runtime. (AFAIK glibc does that with strlen for SSE2; x264, the H.264 encoder, also checks for supported CPU instruction set extensions on program launch and acts accordingly.) The increase in binary size if all variants are generated for all methods is subject to debate, though. Yet processor speeds aren't increasing much, the little increases are very expensive, and HDDs and RAM are relatively cheap. I would rather maximize processor performance, without JIT/AOT and other overheads, than minimize executable footprint, any day.

            I would like to have a gcc flag that creates -minstructionsetext=SSE2,AVX, etc. variants, instead of -march or LLVM's IR JIT. That is where all this manpower should have gone, instead of creating LLVM, which does not even have a genuine way to represent logical operations that could easily be mapped and applied to any processor's instruction set / architecture. I mean, I see this as the one thing where LLVM should have shined, but instead I see it sucks horribly.

            I never understood this other thing that is used to advertise LLVM, the "having LLVM is good as a sanity check" idea. The C++ standard is out there, and if the compiler does not behave accordingly it will be obvious. If code that is not supposed to work happens to work on compiler X, then it is most likely the same on compiler Y (like accessing a stack variable that is supposedly destructed, by returning its address). Were all these man-hours really worth it? Aside from this sanity-check idea, LLVM/Clang is basically the same thing as GCC, as the JIT should not be used at all when there is already the better option I described above. It's true that LLVM allowed things like emscripten, but JavaScript is another horror story for another time, hopelessly being kept alive with asm.js and such.
            Last edited by the303; 02-08-2014, 02:10 PM.



            • #26
              Originally posted by Veerappan View Post
              Until you need to run on hardware that wasn't released at the time you shipped your code... GPUs, unlike CPUs, do not share a common instruction set that allows broad forward compatibility.
              I wasn't referring to GPUs, as they change too often. Sorry if I didn't make that clear enough in my previous post; I thought it was implicit in the part about me believing (BTW, is it that way, or am I wrong?) that shaders are designed to be built at runtime. But with CPUs, generally, if the new platform is different enough that another build will not run, I believe JIT will not run either, because of previous assumptions made about the architectures, probably even in the IR, and maybe in the OS and other dependencies too. I mean, it has to be as different as, e.g., PowerPC from x86. Otherwise, your simple build will at least offer a subset of what the new architecture offers.

              EDIT: Just to be clear, when I say "new hardware" I mean chronologically new, as in your example.



              • #27
                Originally posted by the303 View Post
                There is a better way, though: compiling all variants into one single binary, where code execution takes different paths after checking the CPU instruction set extensions at runtime. (AFAIK glibc does that with strlen for SSE2; x264, the H.264 encoder, also checks for supported CPU instruction set extensions on program launch and acts accordingly.) The increase in binary size if all variants are generated for all methods is subject to debate, though. Yet processor speeds aren't increasing much, the little increases are very expensive, and HDDs and RAM are relatively cheap. I would rather maximize processor performance, without JIT/AOT and other overheads, than minimize executable footprint, any day.
                I've read about that, although I didn't recall it until you mentioned it. At least, I think I did, if it's called code dispatching. But it also leads to increased binary size and (maybe?) more cache misses, so I'd rather use variants. Also, according to Agner's book, code dispatching is harder to write; my guess is you need to explicitly write the runtime checks? Or maybe a compiler could do that for you, the way C++ compilers implicitly insert type checks?

                I never understood this other thing that is used to advertise LLVM, the "having LLVM is good as a sanity check" idea. The C++ standard is out there, and if the compiler does not behave accordingly it will be obvious. If code that is not supposed to work happens to work on compiler X, then it is most likely the same on compiler Y (like accessing a stack variable that is supposedly destructed, by returning its address).
                Isn't the sanity check just a debugging aid? Like a less powerful but faster-than-Valgrind approach to fixing memory issues and such? If I understand it correctly, it mostly puts the instrumentation there for you, or something like that.



                • #28
                  Originally posted by mrugiero View Post
                  I've read about that, although I didn't recall it until you mentioned it. At least, I think I did, if it's called code dispatching. But it also leads to increased binary size and (maybe?) more cache misses, so I'd rather use variants. Also, according to Agner's book, code dispatching is harder to write; my guess is you need to explicitly write the runtime checks? Or maybe a compiler could do that for you, the way C++ compilers implicitly insert type checks?


                  Isn't the sanity check just a debugging aid? Like a less powerful but faster-than-Valgrind approach to fixing memory issues and such? If I understand it correctly, it mostly puts the instrumentation there for you, or something like that.
                  Yes, that is why I am saying all the manpower should have gone into writing such a feature, where the compiler would auto-generate the runtime checks and create the different code paths. Right now you have to do the checking manually (AFAIK there aren't any compiler intrinsics for that), in assembly, and call different functions that are also written in assembly or with specific compiler intrinsics (SSE2 and such) if you want your code to be fast with backwards compatibility. I don't think there would be many cache misses, because execution would always follow the same code path anyway; the other branches would never be taken.

                  As for the sanity check, I know it as this: have the code compile on different compilers and check whether it behaves correctly on both, so that you know you aren't making some weird mistake that happens to work now but may or may not work in the future, or may or may not work depending on the platform. The thing you said sounds more like the new flag that Clang introduced, which is a more advanced version of -fstack-protector, if I remember right.
                  Last edited by the303; 02-08-2014, 02:46 PM.



                  • #29
                    Originally posted by highlandsun View Post
                    Having recently spent some time trying to get lljvm working again, I can only echo the general sentiment: LLVM sucks. Every point release introduces numerous incompatible API changes, and some APIs are withdrawn with no equivalent replacement provided. LLVM is clearly an immature project thrown together by people who have no clue about software engineering or design. For it to still be so immature after so many years is pretty conclusive proof of that.

                    (My lljvm tree, for reference https://github.com/hyc/lljvm/tree/llvm3.3)

                    Having spent a few days beating this into submission for llvm 3.3, still with missing functionality, I just don't have the will to update it again for llvm 3.4. Now I understand why the original lljvm author abandoned the project. Nobody in their right mind would use this piece of garbage.
                    It's true that out-of-tree development is a pain if you're not tracking tip, as there is no stable C++ API. I don't think LLVM is the only open-source project that does this, though.

                    I think that's exactly the issue Ian was talking about in the FOSDEM talk about the new Mesa IR. If you skip the video to the question part at the end, he says that the reason for not using LLVM in Mesa is a versioning problem, not a technical one.



                    • #30
                      Originally posted by curaga View Post
                      Why did you completely ignore the fact LLVM still works badly for AMD after years of work?
                      If it is working badly for AMD, then they have trouble ahead, because they are basing some of their HSA architecture on a similar product.

                      To look at this another way: AMD has so few developers working on this that, if anything, their current success is pretty remarkable. Think about it: if Intel, with all their resources, is having problems with the same tools AMD and Apple use, where is the problem?

