Intel Linux Graphics Developers No Longer Like LLVM


  • Intel Linux Graphics Developers No Longer Like LLVM

    Phoronix: Intel Linux Graphics Developers No Longer Like LLVM

    Well, it turns out the open-source Intel Linux graphics driver developers are no longer interested in having an LLVM back-end for their graphics driver...

    http://www.phoronix.com/vr.php?view=MTU5NjQ

  • #2
    When someone describes their coding experience as full of table-flipping moments, you know they really have done their best to like it.

    It would be nice to here their specific gripes so others are cautious about LLVM.

    • #3
      Originally posted by e8hffff View Post
      It would be nice to here their specific gripes so others are cautious about LLVM.
      where is the +1 button?
      or, considering that the amd guys did use llvm successfully, it may turn out not to be llvm's fault. but it still would be nice to hear
      Last edited by pal666; 02-08-2014, 02:45 AM.

      • #4
        I can sort of understand this, even though I don't know LLVM well. Just observing the public development is quite interesting.

        AMD has been working on LLVM backends for r600 and radeonsi hardware for a *long* time, since late 2011, with various contributors working on it. If we look at the current situation, it's unfortunately still not that good or usable: code generation quality is mediocre, there are many bugs, and there are some yet-to-be-solved fundamental issues (performance, error handling). r600-class hardware support isn't even feature-complete. LLVM seems like an incredibly hard-to-tame beast, at least for typical GPU architectures and for usage as part of a graphics API.

        On the other hand, Vadim has single-handedly implemented a far superior r600 compiler backend from scratch. It is much more reliable and produces better code, yet it seems to be simpler than the LLVM backend and has fewer lines of code.

        Moreover, custom from-scratch backends have also worked rather well for other Gallium drivers, like nouveau or freedreno.

        • #5
          Originally posted by pal666 View Post
          where is the +1 button?
          or, considering that the amd guys did use llvm successfully, it may turn out not to be llvm's fault. but it still would be nice to hear
          Half asleep here. Obviously I meant 'hear'.

          • #6
            Intel does not use Gallium3D either... funny thing.

            Maybe the problem is that every Intel graphics chip is totally different, so writing an LLVM backend for every chip is not really useful. Who knows...

            • #7
              Originally posted by -MacNuke- View Post
              Intel does not use Gallium3D either... funny thing. Maybe the problem is that every Intel graphics chip is totally different, so writing an LLVM backend for every chip is not really useful. Who knows...
              As you probably know, floating-point operations are used heavily in graphics work, so they are probably hitting a wall when it comes to controlling the units and interfacing libraries with stubs and thunks, etc.

              • #8
                Originally posted by brent View Post
                I can sort of understand this, even though I don't know LLVM well. Just observing the public development is quite interesting.

                AMD has been working on LLVM backends for r600 and radeonsi hardware for a *long* time, since late 2011, with various contributors working on it. If we look at the current situation, it's unfortunately still not that good or usable: code generation quality is mediocre, there are many bugs, and there are some yet-to-be-solved fundamental issues (performance, error handling). r600-class hardware support isn't even feature-complete. LLVM seems like an incredibly hard-to-tame beast, at least for typical GPU architectures and for usage as part of a graphics API.

                On the other hand, Vadim has single-handedly implemented a far superior r600 compiler backend from scratch. It is much more reliable and produces better code, yet it seems to be simpler than the LLVM backend and has fewer lines of code.

                Moreover, custom from-scratch backends have also worked rather well for other Gallium drivers, like nouveau or freedreno.
                Reading the thread speaks volumes about Intel staff vs. AMD staff.

                Perhaps Mesa wants the LLVM/Clang project to come in and clean up the code? Or have AMD completely take over the project? Seriously, piss-poor code is a product of the developer, not a metric of the tools used.

                Or perhaps Intel is pissed off that Apple won't lift a finger to help them with their shader compiler work? Who cares? They've got the money, and seeing as they have done a half-assed job of getting their OpenMP 3.1 support ready, with no actual work since the one code dump several months back, I doubt anyone in the greater LLVM/Clang community is going to run to their aid or defense, other than some GCC advocate ready to bend over backwards to swallow into the GCC codebase the Psi work that Intel couldn't convince anyone in the LLVM/Clang group to just accept.
                Last edited by Marc Driftmeyer; 02-08-2014, 03:57 AM.

                • #9
                  Originally posted by Marc Driftmeyer View Post
                  Reading the thread speaks volumes about Intel staff vs. AMD staff.

                  Perhaps Mesa wants the LLVM/Clang project to come in and clean up the code? Or have AMD completely take over the project? Seriously, piss-poor code is a product of the developer, not a metric of the tools used.

                  Or perhaps Intel is pissed off that Apple won't lift a finger to help them with their shader compiler work? Who cares? They've got the money, and seeing as they have done a half-assed job of getting their OpenMP 3.1 support ready, with no actual work since the one code dump several months back, I doubt anyone in the greater LLVM/Clang community is going to run to their aid or defense, other than some GCC advocate ready to bend over backwards to swallow into the GCC codebase the Psi work that Intel couldn't convince anyone in the LLVM/Clang group to just accept.
                  Why did you completely ignore the fact that LLVM still works badly for AMD after years of work?

                  • #10
                    Originally posted by Marc Driftmeyer View Post
                    Reading the thread speaks volumes about Intel staff vs. AMD staff.

                    Perhaps Mesa wants the LLVM/Clang project to come in and clean up the code? Or have AMD completely take over the project? Seriously, piss-poor code is a product of the developer, not a metric of the tools used.

                    Or perhaps Intel is pissed off that Apple won't lift a finger to help them with their shader compiler work? Who cares? They've got the money, and seeing as they have done a half-assed job of getting their OpenMP 3.1 support ready, with no actual work since the one code dump several months back, I doubt anyone in the greater LLVM/Clang community is going to run to their aid or defense, other than some GCC advocate ready to bend over backwards to swallow into the GCC codebase the Psi work that Intel couldn't convince anyone in the LLVM/Clang group to just accept.
                    Did you consider the possibility that LLVM is just not the right tool for the job?

                    • #11
                      Originally posted by Temar View Post
                      Did you consider the possibility that LLVM is just not the right tool for the job?
                      Nah. The mystic almighty god LLVM is the best thing in all the worlds of the universe, able to do anything from compiling to making you a coffee.

                      To the actual point: LLVM is a compiler backend, so it should be the correct tool for the job, or at least it should become the correct tool with little modification. People say LLVM is not as good as advertised, and I guess this is what they are talking about: LLVM is not as flexible as it should have been. Clang seems to have passed GCC in benchmarks, though.
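
                      For anyone wondering what "LLVM as a compiler backend" actually means here, below is a minimal sketch using LLVM's C++ IRBuilder API: a frontend (in Mesa's case, something translating GLSL/TGSI) builds IR roughly like this and then hands it to the target passes. The function and its name are made up for illustration, and header locations and signatures shift between LLVM versions.

                      Code:
                      #include <llvm/IR/IRBuilder.h>
                      #include <llvm/IR/LLVMContext.h>
                      #include <llvm/IR/Module.h>
                      #include <llvm/IR/Verifier.h>

                      // Build the IR for: float madf(float a, float b, float c) { return a * b + c; }
                      llvm::Function *buildMulAdd(llvm::Module &m) {
                        llvm::LLVMContext &ctx = m.getContext();
                        llvm::Type *f32 = llvm::Type::getFloatTy(ctx);
                        auto *fnTy = llvm::FunctionType::get(f32, {f32, f32, f32}, false);
                        auto *fn = llvm::Function::Create(fnTy, llvm::Function::ExternalLinkage,
                                                          "madf", &m);
                        llvm::IRBuilder<> b(llvm::BasicBlock::Create(ctx, "entry", fn));
                        auto it = fn->arg_begin();
                        llvm::Value *a = &*it++, *x = &*it++, *c = &*it;
                        b.CreateRet(b.CreateFAdd(b.CreateFMul(a, x), c));
                        llvm::verifyFunction(*fn);  // sanity-check the generated IR
                        return fn;
                      }

                      From here the GPU-specific work starts: instruction selection, register allocation and scheduling for the actual shader cores, which is the part people keep fighting with.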

                      • #12
                        This feels like Unladen Swallow, another project that had high hopes and a team of Googlers working on it, only to be given up in disgust. It was meant to speed up the Python interpreter (CPython) with an LLVM-based JIT. The conclusion was that LLVM is too inflexible to perform well outside of ahead-of-time compilers.

                        • #13
                          Originally posted by the303 View Post
                          To the actual point: LLVM is a compiler backend, so it should be the correct tool for the job, or at least it should become the correct tool with little modification.
                          However, LLVM was designed for general-purpose CPUs. GPUs have very different architectures, so more than "a little" modification is required.
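
                          As a rough illustration of the gap, and assuming a lockstep wavefront with a per-lane execution mask (the lane count and names below are invented): an if/else on a GPU doesn't branch so much as predicate, and both sides get walked. CPU-oriented passes that freely restructure control flow don't model that.

                          Code:
                          #include <array>
                          #include <cstdint>

                          constexpr int kLanes = 8;        // toy wavefront width
                          using Mask = std::uint8_t;       // one bit per lane

                          // Conceptual model of how a GPU runs "if (x > 0) x *= 2; else x = 0;"
                          // for a whole wavefront at once, under an execution mask.
                          void divergentIfElse(std::array<float, kLanes> &x, Mask exec) {
                            Mask cond = 0;
                            for (int i = 0; i < kLanes; ++i)
                              if (x[i] > 0.0f) cond |= Mask(1u << i);

                            // "then" side: only lanes in exec & cond take effect...
                            for (int i = 0; i < kLanes; ++i)
                              if (exec & cond & (1u << i)) x[i] *= 2.0f;

                            // ...but the "else" side is still walked for the remaining lanes;
                            // nothing is skipped unless the whole mask is empty.
                            for (int i = 0; i < kLanes; ++i)
                              if (exec & ~cond & (1u << i)) x[i] = 0.0f;
                          }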

                          • #14
                            Originally posted by brent View Post
                            However, LLVM was designed for general-purpose CPUs. GPUs have very different architectures, so more than "a little" modification is required.
                            GPUs can be seen as a subset of CPUs, specialized in floating-point operations. So if LLVM were a properly designed platform, only a little modification should be needed to make it easier to work with. It's not like GPUs use quantum computing techniques or anything; a GPU is just a processor.

                            Even if it required more than a little work, it shouldn't take years and still perform badly. (See AMD.)

                            Like they said, apparently it is not designed for that. But then again, what is the point of a design based on an IR model if it cannot be used in a very flexible way?
                            Last edited by the303; 02-08-2014, 06:40 AM.

                            • #15
                              Originally posted by the303 View Post
                              Even if it required more than a little work, it shouldn't take years and still perform badly. (See AMD.)

                              Like they said, apparently it is not designed for that. But then again, what is the point of a design based on an IR model if it cannot be used in a very flexible way?
                              The IR is fine, but the optimization passes apparently aren't very suitable for GPUs.
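
                              For reference, a sketch of the stock, CPU-tuned pipeline in question, using the ~3.x-era legacy pass manager (these APIs have moved around between LLVM versions, and the function name is mine): this is what runs before a GPU backend ever sees the code.

                              Code:
                              #include <llvm/IR/LegacyPassManager.h>
                              #include <llvm/IR/Module.h>
                              #include <llvm/Transforms/IPO.h>
                              #include <llvm/Transforms/IPO/PassManagerBuilder.h>

                              // Run LLVM's generic scalar/loop/inlining passes over a module,
                              // roughly what clang -O2 would schedule for a CPU target.
                              void optimizeLikeACpu(llvm::Module &m) {
                                llvm::legacy::PassManager pm;
                                llvm::PassManagerBuilder builder;
                                builder.OptLevel = 2;
                                builder.Inliner = llvm::createFunctionInliningPass();
                                builder.populateModulePassManager(pm);
                                pm.run(m);  // rewrites the module in place
                              }

                              A GPU backend then has to map whatever comes out onto wavefronts, exec masks and a very different register file, which is presumably where the mismatch shows up.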
