
Thread: Intel Linux Graphics Developers No Longer Like LLVM

  1. #1
    Join Date
    Jan 2007
    Posts
    13,408

    Default Intel Linux Graphics Developers No Longer Like LLVM

    Phoronix: Intel Linux Graphics Developers No Longer Like LLVM

    Well, it turns out the open-source Intel Linux graphics driver developers are no longer interested in having an LLVM back-end for their graphics driver...

    http://www.phoronix.com/vr.php?view=MTU5NjQ

  2. #2
    Join Date
    Jan 2011
    Posts
    444

    Default

    When someone describes their coding experience as full of table-flipping moments, you know they really have done their best to like it.

    It would be nice to here their specific gripes so others are cautious about LLVM.

  3. #3
    Join Date
    Apr 2013
    Posts
    121

    Default

    Quote Originally Posted by e8hffff View Post
    It would be nice to here their specific gripes so others are cautious about LLVM.
    Where is the +1 button?
    Or, considering that the AMD guys did use LLVM successfully, it could turn out to be not LLVM's fault. But that still would be nice to hear.
    Last edited by pal666; 02-08-2014 at 02:45 AM.

  4. #4
    Join Date
    Jan 2010
    Posts
    334

    Default

    I can sort of understand this, even though I don't know LLVM well. Just observing the public development is quite interesting.

    AMD has been working on LLVM backends for r600 and radeonsi hardware for a *long* time, since late 2011, with various contributors involved. Looking at the current situation, it's unfortunately still not that good or usable: code generation quality is mediocre, there are many bugs, and there are some as-yet-unsolved fundamental issues (performance, error handling). r600-class hardware support isn't even feature-complete. LLVM seems like an incredibly hard-to-tame beast, at least for typical GPU architectures and for use as part of a graphics API implementation.

    On the other hand, Vadim has single-handedly implemented a far superior r600 compiler backend from scratch. It is much more reliable and produces better code, yet it appears to be simpler than the LLVM backend and has fewer lines of code.

    Moreover, custom from-scratch backends have also worked rather well for other Gallium drivers, like nouveau or freedreno.
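    To make the "custom backend" idea concrete, here is a toy sketch of the kind of peephole instruction selection a hand-rolled shader backend performs: fusing a multiply followed by a dependent add into a single MAD, an instruction most GPU ISAs provide. This is purely illustrative — the IR format, the `select_instructions` helper, and the opcode names are all invented for the example, not actual Mesa or r600 code:

```python
# Toy IR: tuples of (opcode, dest, src1, src2). A real backend (LLVM-based
# or hand-rolled) is vastly more involved; this only shows the flavor.
def select_instructions(ir_ops):
    """Peephole pass: fuse a MUL whose result feeds the next ADD into MAD."""
    out = []
    i = 0
    while i < len(ir_ops):
        op = ir_ops[i]
        nxt = ir_ops[i + 1] if i + 1 < len(ir_ops) else None
        if (op[0] == "MUL" and nxt is not None and nxt[0] == "ADD"
                and op[1] in nxt[2:]):  # the ADD consumes the MUL result
            # Pick the ADD operand that is NOT the MUL result.
            other = nxt[3] if nxt[2] == op[1] else nxt[2]
            out.append(("MAD", nxt[1], op[2], op[3], other))
            i += 2
        else:
            out.append(op)
            i += 1
    return out

prog = [("MUL", "t0", "a", "b"), ("ADD", "r0", "t0", "c")]
print(select_instructions(prog))  # [('MAD', 'r0', 'a', 'b', 'c')]
```

    A production backend would also have to verify that the intermediate result isn't used anywhere else before fusing; the toy ignores liveness entirely.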

  5. #5
    Join Date
    Jan 2011
    Posts
    444

    Default

    Quote Originally Posted by pal666 View Post
    Where is the +1 button?
    Or, considering that the AMD guys did use LLVM successfully, it could turn out to be not LLVM's fault. But that still would be nice to hear.
    Half asleep here. Obviously I meant 'hear'.

  6. #6
    Join Date
    Nov 2012
    Posts
    20

    Default

    Intel does not use Gallium3D either... funny thing.

    Maybe the problem is that every Intel graphics chip is totally different, so writing an LLVM backend for every chip is not really useful. Who knows...

  7. #7
    Join Date
    Jan 2011
    Posts
    444

    Default

    Quote Originally Posted by -MacNuke- View Post
    Intel does not use Gallium3D either... funny thing. Maybe the problem is that every Intel graphics chip is totally different, so writing an LLVM backend for every chip is not really useful. Who knows...
    As you probably know, floating-point operations are used heavily in graphics work, so they are probably hitting a wall when it comes to controlling the units and interfacing libraries with stubs, thunks, etc.

  8. #8
    Join Date
    Oct 2012
    Location
    Washington State
    Posts
    361

    Default

    Quote Originally Posted by brent View Post
    I can sort of understand this, even though I don't know LLVM well. Just observing the public development is quite interesting.

    AMD has been working on LLVM backends for r600 and radeonsi hardware for a *long* time, since late 2011, with various contributors involved. Looking at the current situation, it's unfortunately still not that good or usable: code generation quality is mediocre, there are many bugs, and there are some as-yet-unsolved fundamental issues (performance, error handling). r600-class hardware support isn't even feature-complete. LLVM seems like an incredibly hard-to-tame beast, at least for typical GPU architectures and for use as part of a graphics API implementation.

    On the other hand, Vadim has single-handedly implemented a far superior r600 compiler backend from scratch. It is much more reliable and produces better code, yet it appears to be simpler than the LLVM backend and has fewer lines of code.

    Moreover, custom from-scratch backends have also worked rather well for other Gallium drivers, like nouveau or freedreno.
    Reading the thread speaks volumes about Intel staff vs. AMD staff.

    Perhaps Mesa wants the LLVM/Clang project to come in and clean up the code? Or to have AMD completely take over the project? Seriously, piss-poor code is a product of the developer, not a metric of the tools used.

    Or perhaps Intel is pissed off that Apple won't lift a finger to help them with their shader compiler work? Who cares? They've got the money. And seeing as they have done a half-assed job getting their OpenMP 3.1 support ready, with no actual work since the one code dump several months back, I doubt anyone in the greater LLVM/Clang community is going to run to their aid or defense, other than some GCC advocate ready to bend over backwards to swallow into the GCC codebase the Psi work that Intel couldn't convince anyone in the LLVM/Clang group to accept.
    Last edited by Marc Driftmeyer; 02-08-2014 at 03:57 AM.

  9. #9
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    4,729

    Default

    Quote Originally Posted by Marc Driftmeyer View Post
    Reading the thread speaks volumes about Intel staff vs. AMD staff.

    Perhaps Mesa wants the LLVM/Clang project to come in and clean up the code? Or to have AMD completely take over the project? Seriously, piss-poor code is a product of the developer, not a metric of the tools used.

    Or perhaps Intel is pissed off that Apple won't lift a finger to help them with their shader compiler work? Who cares? They've got the money. And seeing as they have done a half-assed job getting their OpenMP 3.1 support ready, with no actual work since the one code dump several months back, I doubt anyone in the greater LLVM/Clang community is going to run to their aid or defense, other than some GCC advocate ready to bend over backwards to swallow into the GCC codebase the Psi work that Intel couldn't convince anyone in the LLVM/Clang group to accept.
    Why did you completely ignore the fact that LLVM still works badly for AMD after years of work?

  10. #10
    Join Date
    Jun 2010
    Posts
    147

    Default

    Quote Originally Posted by Marc Driftmeyer View Post
    Reading the thread speaks volumes about Intel staff vs. AMD staff.

    Perhaps Mesa wants the LLVM/Clang project to come in and clean up the code? Or to have AMD completely take over the project? Seriously, piss-poor code is a product of the developer, not a metric of the tools used.

    Or perhaps Intel is pissed off that Apple won't lift a finger to help them with their shader compiler work? Who cares? They've got the money. And seeing as they have done a half-assed job getting their OpenMP 3.1 support ready, with no actual work since the one code dump several months back, I doubt anyone in the greater LLVM/Clang community is going to run to their aid or defense, other than some GCC advocate ready to bend over backwards to swallow into the GCC codebase the Psi work that Intel couldn't convince anyone in the LLVM/Clang group to accept.
    Did you consider the possibility that LLVM is just not the right tool for the job?
