Intel Linux Graphics Developers No Longer Like LLVM


  • #31
    Originally posted by Temar View Post
    Did you consider the possibility that LLVM is just not the right tool for the job?
    As the post above says, if it isn't the right tool for the job, AMD will be in a world of hurt soon, as much of their HSA tech is based around similar tools. It might come down to the tool being more than the users at Intel can handle. Considering the state of Intel's drivers, one has to suspect that the problem rests with Intel.



    • #32
      Originally posted by Tobu View Post
      This feels like Unladen Swallow, another project that had high hopes and a team of Googlers working on it, to be given up in disgust. It was meant to speed up the Python interpreter (CPython) with an LLVM-based JIT. The conclusion was that LLVM is too inflexible to perform well outside of ahead of time compilers.
      Hmm, I thought the developers simply didn't have the time to devote to it. As it is, LLVM has been under continual development, so maybe the perceived flexibility issue is gone.



      • #33
        Originally posted by the303 View Post
        That makes sense.

        My comments are of course biased with an anti-LLVM attitude.
        Yes we see!

        I am an electrical engineer, so I have always been irritated by the idea of JIT/AOT instead of static compiling. It makes me wanna puke because it is pointless and a waste of resources. The machines may be powerful enough for the end user not to notice the overhead, but at large scale (all the computers in the world) it causes considerable energy loss, and for what? Just because a developer was too lazy to pre-compile his/her program for platform X, and would rather have every PC compile the same thing on the fly each and every time the program runs (in the case of JIT), or compile it once (in the case of AOT), as if there were a thousand platforms to target (and even that could be automated).
        I suspect you are missing an important point here: CPU architectures are basically static, especially in the case of x86. GPUs vary widely and frankly can change from one generation to another, even from the same manufacturer. A developer would be forced to ship his app with dozens of targeted object files for the various GPUs out there.
        On the subject:
        Well, if they also had to write special optimization passes (if that is even possible), they would not need to use LLVM at all, I guess; isn't that almost everything they would need to create anyway if they did not use LLVM?
        Maybe, maybe not. Tweaking LLVM would be the smart move in my mind. In this case, I'm not sure blaming LLVM is rational.

        Still, IMHO this is an inflexibility that LLVM should not have if the project is to be worth something.
        You make it sound like LLVM was designed on purpose to not support Intel's GPUs well. The entire LLVM/Clang compiler suite isn't that old, and in fact you can't even call it mature. So if the suite has issues with Intel's GPUs, it is probably just another developer challenge.
        I mean as far as I know Intel has an army of open source developers compared to other companies and still find it not worthwhile to integrate LLVM. That is a bad sign.
        Supposedly, yes, they have an army, but if that army isn't capable of leveraging LLVM or enhancing it so it works better in this context, what does that say about the developers? Frankly, they sound like spoiled schoolchildren with their table-flipping references. The reality is that LLVM is an open source community project; no one will look out for your special needs other than you.
        There may be different CPU types in the future with huge differences from today's. The IR of LLVM, which they like to refer to as "the universal IR", will require an enormous amount of work (on the optimization passes or whatever) to make it optimal for those. Intel decided to create its own instead of integrating LLVM after all; how can creating something from scratch be a better way than using such a "modular and great" platform called LLVM?
        We haven't seen their results yet, have we, nor do we know what issues they are having with LLVM. There could be all sorts of unexplained issues with LLVM, or it could be an issue with Intel's developers. Really, "table-flipping moments" isn't the most enlightening explanation for the decision to abandon LLVM. In the end you have companies that can work successfully with it and companies that can't. I still come back to the fact that if LLVM is that bad, then AMD is in a world of hurt, and probably Apple too.



        • #34
          Originally posted by the303 View Post
          He said that about general-purpose executables, referring to my post. You are right: the frequent changes in GPU architectures and the need to get the maximum out of the GPU mean the shaders have to be JIT compiled.
          Exactly, the number of GPU variants probably runs into the hundreds now.


          You are right that shaders work that way, LLVM or not. My statement was more general. People seem to advocate LLVM because it allows JIT usage, as if that were such a great and needed thing (although there are better alternatives).
          Where it makes sense it is an awesome feature.
          You are also right about the one advantage of JIT usage (which applies to AOT as well): processor-specific optimizations. One alternative, like you said, is to avoid it with automated builds of variants. There is a better way, though: compiling all variants into one single binary.
          That might work for CPUs today, but can you imagine the size of that binary if it supported even a small number of GPUs?
          Code execution takes different paths by checking the CPU instruction set extensions at runtime. (AFAIK glibc does that with strlen using SSE2; x264, the H.264 encoder, also checks the supported CPU instruction set extensions at program launch and acts accordingly.) The increase in binary size if all variants are generated for all functions is subject to debate, though. Yet processor speeds aren't increasing much, the small increases are expensive, and HDDs and RAM are relatively cheap. I would rather maximize processor performance without JIT, AOT, and other overheads to the executable footprint any day.
          You can see some awfully good results from OpenCL targeting the CPU. Maybe that isn't JIT code, but the fact remains that code compiled for a specific processor is often faster and more compact than a more general solution.
          I would like to have a gcc flag like -minstructionsetext=SSE2,AVX, etc. that creates variants, instead of -march or LLVM's IR JIT. That is where all this manpower should have gone, instead of creating LLVM, which does not even have a genuine way to represent logical operations that could easily be mapped onto any processor's instruction set / architecture. I mean, I see this as the one thing where LLVM should have shined, but instead I see it sucks horribly.
          If it is so horrible, why has Apple switched over to LLVM in Xcode? Can LLVM be improved? Certainly, and frankly it is being improved. However, like all software, you only realize its shortcomings after working with it a bit. In a nutshell, if you aren't willing to be part of the development process, you have no reason to be using LLVM.
          I never understood that other thing used to advertise LLVM, the "having LLVM is good for a sanity check" argument. The C++ standard is out there, and if a compiler does not behave accordingly, it will be obvious.
          Yeah, sure. By the way, you moved a discussion about LLVM into one about a compiler, Clang, which are two different things. Clang is a very, very good compiler considering its young age. Its diagnostic capabilities, along with the static analyzer, can be a very good sanity check.
          If code that is not supposed to work nevertheless works with compiler X, it will most likely behave the same with compiler Y (like returning the address of a stack variable that is supposed to have been destroyed). Were all these man-hours really worth it? Aside from this sanity-check idea, LLVM/Clang is basically the same thing as GCC, since the JIT should not be used at all when there is already a better option, as I said above. It's true that LLVM enabled things like Emscripten, but JavaScript is another horror story for another time, one that is hopelessly being kept alive by things like asm.js.
          Now I get it: any language that doesn't meet your standard of acceptability is a horror story. If LLVM/Clang is such a horror story, then millions of developers must be having nightmares. By the way, you can look at LLVM as the back end of a compiler chain that can generate compiled code as well as any other compiler; in some cases the toolchain does much better.



          • #35
            Originally posted by wizard69 View Post
            If it is so horrible, why has Apple switched over to LLVM in Xcode?
            Mistakes exist. I'm not of the opinion that LLVM is bad (actually, the opposite), but the fact that Apple uses it means nothing to me; it's just an argument from authority.

            In a nut shell if you aren't willing to be part of the development process you have no reason to be using LLVM.
            There is a distinction between being a developer, being a user, and being both. For someone who praises Apple that much, I would expect that to be known.



            • #36
              Does fglrx use LLVM too? I could swear I have seen some LLVM errors in a bug report somewhere.

              edit: Maybe it's only opencl http://devgurus.amd.com/message/1286923
              Last edited by ChrisXY; 02-09-2014, 08:08 PM.



              • #37
                Originally posted by e8hffff View Post
                When someone describes their coding as table flipping moments, you know they really have done their best to like it.

                It would be nice to hear their specific gripes, so that others can be cautious about LLVM.
                Ian Romanick did give some reasons at the end of his talk at FOSDEM. This is his talk.



                • #38
                  Originally posted by jrdls View Post
                  Ian Romanick did give some reasons at the end of his talk at FOSDEM. This is his talk.
                  If anybody is interested, his answer about LLVM is at 44:20.



                  • #39
                    Originally posted by brent View Post
                    AMD has been working on LLVM backends for r600 and radeonsi hardware for a *long* time, since late 2011, with various contributors working on it. If we look at the current situation, it's unfortunately still not that good or usable: code generation quality is mediocre, there are many bugs, and there are some yet to be solved principal issues (performance, error handling). r600-class hardware support isn't even feature complete. LLVM seems like an incredibly hard to tame beast, at least for typical GPU architectures and usage as part of a graphics API.
                    You're overstating the effort put into the LLVM backend for r600g, at least as far as graphics is concerned. Most of the effort there has been for OpenCL support.

                    LLVM is working well for radeonsi.



                    • #40
                      Originally posted by jrdls View Post
                      Ian Romanick did give some reasons at the end of his talk at fosdem. This is his talk.
                      His reason for not "importing" LLVM into Mesa sounds strange. Doesn't Mesa already support compiling the LLVM libs into it when not using --with-llvm-shared-libs?
                      I mean, you have build scripts for Mesa in your distribution; couldn't you just clone your favorite LLVM revision in that build script and compile it into Mesa, and so get an updated Radeon driver on previous versions of Fedora? Sure, that makes for a much longer build, but it should at least be easy...
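                      A rough sketch of what such a build script might look like, assuming the autotools-era Mesa options of the time; the LLVM revision placeholder, paths, and the --with-llvm-prefix option name here are assumptions, so check your distro's packaging before relying on any of this:

```shell
# Illustrative only: pin an LLVM revision and build it statically,
# then point the Mesa build at it instead of the distro's shared libs.
git clone http://llvm.org/git/llvm.git && cd llvm
git checkout <your-favorite-revision>   # placeholder, not a real ref
./configure --enable-optimized --disable-shared --prefix="$HOME/llvm-static"
make -j4 && make install

cd ../mesa
./autogen.sh --enable-gallium-llvm \
             --with-llvm-prefix="$HOME/llvm-static"   # not --with-llvm-shared-libs
make -j4
```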

