Intel Linux Graphics Developers No Longer Like LLVM


  • Intel Linux Graphics Developers No Longer Like LLVM

    Phoronix: Intel Linux Graphics Developers No Longer Like LLVM

    Well, it turns out the open-source Intel Linux graphics driver developers are no longer interested in having an LLVM back-end for their graphics driver...


  • #2
When someone describes their coding experience as full of table-flipping moments, you know they really have done their best to like it.

    It would be nice to here their specific gripes so others are cautious about LLVM.



    • #3
      Originally posted by e8hffff View Post
      It would be nice to here their specific gripes so others are cautious about LLVM.
Where is the +1 button?
Or, considering that the AMD guys did use LLVM successfully, it may turn out not to be LLVM's fault. But it would still be nice to hear.
      Last edited by pal666; 08 February 2014, 03:45 AM.



      • #4
        I can sort of understand this, even though I don't know LLVM well. Just observing the public development is quite interesting.

AMD has been working on LLVM backends for r600 and radeonsi hardware for a *long* time, since late 2011, with various contributors working on it. If we look at the current situation, it's unfortunately still not that good or usable: code generation quality is mediocre, there are many bugs, and there are some yet-to-be-solved fundamental issues (performance, error handling). r600-class hardware support isn't even feature complete. LLVM seems like an incredibly hard-to-tame beast, at least for typical GPU architectures and for use as part of a graphics API.

On the other hand, Vadim has single-handedly implemented a much superior r600 compiler backend from scratch. It is much more reliable and produces better code, yet it seems to be simpler than the LLVM backend and has fewer lines of code.

        Moreover, custom from-scratch backends have also worked rather well for other Gallium drivers, like nouveau or freedreno.
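
For readers who have never touched LLVM, here is a minimal, illustrative C++ sketch of what "compiling through LLVM" means at the IR level. It is not Mesa or AMDGPU backend code, and the names in it are made up; a driver's shader compiler lowers shader instructions into IR roughly like this and then hands the IR to an LLVM target backend, which is where the hard parts (instruction selection, register allocation under GPU constraints, control flow) and most of the quality problems discussed above actually live.

    // Illustrative sketch only -- not Mesa code. Builds LLVM IR for a single
    // float multiply-add, the kind of primitive a shader compiler emits
    // constantly, and prints the textual IR.
    #include <llvm/IR/IRBuilder.h>
    #include <llvm/IR/LLVMContext.h>
    #include <llvm/IR/Module.h>
    #include <llvm/IR/Verifier.h>
    #include <llvm/Support/raw_ostream.h>

    int main() {
        llvm::LLVMContext ctx;
        llvm::Module mod("shader_sketch", ctx);   // module name is arbitrary
        llvm::IRBuilder<> b(ctx);

        // float mad(float a, float b, float c) { return a * b + c; }
        llvm::Type *f32 = b.getFloatTy();
        llvm::FunctionType *fnTy =
            llvm::FunctionType::get(f32, {f32, f32, f32}, false);
        llvm::Function *fn = llvm::Function::Create(
            fnTy, llvm::Function::ExternalLinkage, "mad", &mod);
        b.SetInsertPoint(llvm::BasicBlock::Create(ctx, "entry", fn));

        llvm::Function::arg_iterator args = fn->arg_begin();
        llvm::Value *a = &*args++;
        llvm::Value *x = &*args++;
        llvm::Value *c = &*args++;
        b.CreateRet(b.CreateFAdd(b.CreateFMul(a, x, "mul"), c, "mad"));

        llvm::verifyModule(mod, &llvm::errs());   // sanity-check the IR
        mod.print(llvm::outs(), nullptr);         // dump it as text
        return 0;
    }

Turning IR like that into good r600 or GCN machine code is what the LLVM target backend is responsible for, and it is that part, not IR construction, that the from-scratch backends replace.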



        • #5
          Originally posted by pal666 View Post
Where is the +1 button?
Or, considering that the AMD guys did use LLVM successfully, it may turn out not to be LLVM's fault. But it would still be nice to hear.
          Half asleep here. Obviously I meant 'hear'.



          • #6
Intel does not use Gallium3D either... funny thing.

Maybe the problem is that every Intel graphics chip is totally different. So, writing an LLVM backend for every chip is not really useful. Who knows...



            • #7
              Originally posted by -MacNuke- View Post
Intel does not use Gallium3D either... funny thing. Maybe the problem is that every Intel graphics chip is totally different. So, writing an LLVM backend for every chip is not really useful. Who knows...
As you probably know, floating-point operations are used heavily in graphics, so they are probably hitting a wall when it comes to controlling the units and interfacing libraries with stubs and thunks, etc.



              • #8
                Originally posted by brent View Post
                I can sort of understand this, even though I don't know LLVM well. Just observing the public development is quite interesting.

AMD has been working on LLVM backends for r600 and radeonsi hardware for a *long* time, since late 2011, with various contributors working on it. If we look at the current situation, it's unfortunately still not that good or usable: code generation quality is mediocre, there are many bugs, and there are some yet-to-be-solved fundamental issues (performance, error handling). r600-class hardware support isn't even feature complete. LLVM seems like an incredibly hard-to-tame beast, at least for typical GPU architectures and for use as part of a graphics API.

On the other hand, Vadim has single-handedly implemented a much superior r600 compiler backend from scratch. It is much more reliable and produces better code, yet it seems to be simpler than the LLVM backend and has fewer lines of code.

                Moreover, custom from-scratch backends have also worked rather well for other Gallium drivers, like nouveau or freedreno.
                Reading the thread speaks volumes about Intel staff vs. AMD staff.

Perhaps Mesa wants the LLVM/Clang project to come in and clean up the code? Or have AMD completely take over the project? Seriously, piss-poor code is a product of the developer, not a metric of the tools used.

Or perhaps Intel is pissed off that Apple won't lift a finger to help them with their shader compiler work? Who cares? They've got the money, and seeing as they have done a half-assed job of getting their OpenMP 3.1 support ready, with no actual work since the only code dump several months back, I doubt anyone in the greater LLVM/Clang community is going to run to their aid or defense, other than some GCC advocate ready to bend over backwards to swallow into the GCC codebase the Psi code that Intel couldn't convince anyone in the LLVM/Clang group to just accept.
                Last edited by Marc Driftmeyer; 08 February 2014, 04:57 AM.



                • #9
                  Originally posted by Marc Driftmeyer View Post
                  Reading the thread speaks volumes about Intel staff vs. AMD staff.

Perhaps Mesa wants the LLVM/Clang project to come in and clean up the code? Or have AMD completely take over the project? Seriously, piss-poor code is a product of the developer, not a metric of the tools used.

Or perhaps Intel is pissed off that Apple won't lift a finger to help them with their shader compiler work? Who cares? They've got the money, and seeing as they have done a half-assed job of getting their OpenMP 3.1 support ready, with no actual work since the only code dump several months back, I doubt anyone in the greater LLVM/Clang community is going to run to their aid or defense, other than some GCC advocate ready to bend over backwards to swallow into the GCC codebase the Psi code that Intel couldn't convince anyone in the LLVM/Clang group to just accept.
Why did you completely ignore the fact that LLVM still works badly for AMD after years of work?



                  • #10
                    Originally posted by Marc Driftmeyer View Post
                    Reading the thread speaks volumes about Intel staff vs. AMD staff.

Perhaps Mesa wants the LLVM/Clang project to come in and clean up the code? Or have AMD completely take over the project? Seriously, piss-poor code is a product of the developer, not a metric of the tools used.

Or perhaps Intel is pissed off that Apple won't lift a finger to help them with their shader compiler work? Who cares? They've got the money, and seeing as they have done a half-assed job of getting their OpenMP 3.1 support ready, with no actual work since the only code dump several months back, I doubt anyone in the greater LLVM/Clang community is going to run to their aid or defense, other than some GCC advocate ready to bend over backwards to swallow into the GCC codebase the Psi code that Intel couldn't convince anyone in the LLVM/Clang group to just accept.
                    Did you consider the possibility that LLVM is just not the right tool for the job?

