
Thread: In OpenCL Push, AMD Makes Progress With LLVM For Gallium3D

  1. #1
    Join Date
    Jan 2007
    Posts
    14,561


    Phoronix: In OpenCL Push, AMD Makes Progress With LLVM For Gallium3D

    On Sunday, Tom Stellard of AMD posted a new RFC patch-set with a TGSI-to-LLVM conversion interface. The AMD R600 Gallium3D driver with its LLVM shader back-end was also updated, which is a prerequisite to OpenCL support...

    http://www.phoronix.com/vr.php?view=MTA0MzU
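
    For readers unfamiliar with the two IRs involved, here is a toy sketch (purely illustrative; this is not the interface from the actual patch-set, and all names below are made up) of the basic idea behind converting a TGSI-style vec4 instruction into scalar, LLVM-style operations:

    ```python
    # Toy illustration only: a hypothetical TGSI-style vec4 instruction being
    # expanded into scalar SSA-style operations, loosely in the spirit of a
    # TGSI -> LLVM IR conversion pass. Real TGSI and LLVM data structures
    # look nothing like these strings.

    def lower_tgsi_vec4(opcode, dst, src0, src1):
        """Expand one vec4 TGSI-style op into four per-channel scalar ops."""
        scalar_ops = []
        for chan in "xyzw":
            scalar_ops.append(
                f"{dst}.{chan} = {opcode.lower()} {src0}.{chan}, {src1}.{chan}"
            )
        return scalar_ops

    # e.g. a TGSI-style "ADD TEMP0, IN0, IN1" becomes four scalar adds:
    for line in lower_tgsi_vec4("ADD", "TEMP0", "IN0", "IN1"):
        print(line)
    ```
    
    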

  2. #2
    Join Date
    Sep 2008
    Posts
    989


    "Tom Stellard of AMD"

    Somehow that compound noun seems extremely fitting.

  3. #3
    Join Date
    Dec 2009
    Posts
    338


    I reckon the question is: How far are we from opencl running on r600g?
    Any guesses?

  4. #4
    Join Date
    Mar 2007
    Location
    DG, IL, USA
    Posts
    195


    Another step closer to being able to run FAH & fold with the GPU
    Those who would give up Essential Liberty to purchase a little Temporary Safety, deserve neither Liberty nor Safety.
    Ben Franklin 1755

  5. #5
    Join Date
    Nov 2008
    Location
    Madison, WI, USA
    Posts
    864


    Quote Originally Posted by HokTar View Post
    I reckon the question is: How far are we from opencl running on r600g?
    Any guesses?
    I'm not qualified to answer, as I'm not directly involved with the Clover or LLVM-backend work, but if we can tie together the previous work to create an OpenCL to LLVM state tracker with what Tom Stellard is doing, we could be getting there. I wouldn't be surprised if it's another 6 months before we see something in a shipping release, but we might have a usable prototype in the next few months.

  6. #6
    Join Date
    Jul 2011
    Location
    florida usa
    Posts
    80


    Is the goal eventually to have Gallium3D use both TGSI and LLVM IR as transport IRs? To me that sounds kind of bad, since you're basically making the work required to write a driver back-end, or even a state tracker, more complex. I know there were some discussions about whether TGSI was good enough to be used as a general-purpose IR for something as demanding as OpenCL, and some talk about whether LLVM would be adequate for efficiently supporting OpenGL and other graphics requirements.

    Personally, I like the idea of having TGSI be the only IR for communicating with a Gallium driver. It's still at version 0.41 if I remember correctly, and while people are using it almost as if it's a finished product, it's still in a pretty malleable state. Sticking with just TGSI would make it a lot easier for supporting applications to work with Gallium3D. I remember a project being announced (never heard about again) that was a pretty good way to solve the problem of a remote display in a non-X environment while still allowing display acceleration, even more so than what is possible in X, by transmitting the TGSI IR over the network to be executed on the display side. That would be a lot more work to do if OpenCL went over LLVM IR, since it would require supporting LLVM IR as well.

  7. #7
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,418


    The goal is to have a single IR but it's not clear yet what that IR should be.

    When we discussed this at XDS there wasn't a strong preference in any direction, so we decided to start with LLVM IR for a couple of reasons:

    - clover already generated LLVM IR but not TGSI
    - we were planning to use LLVM in the shader compiler for SI and above so we could leverage some of that work for OpenCL

    Francisco is working in the same area but using Nouveau, where LLVM IR support wasn't as readily available so he went the TGSI route. My expectation is that we'll all regroup at the next XDS (hopefully bringing some compute experience with LLVM IR, TGSI *and* GLSL IR) and figure out which direction makes the most sense.

    One argument is that using a scalar-oriented IR (i.e. LLVM IR) makes more sense for compute given general hardware architecture trends towards scalar SIMD, but it may not be time yet. We'll know more in a few months. One interesting thing to watch is whether there will be any convergence in shader compilers below the hardware layer (Mesa IR / TGSI / Gallium3D) as the hardware converges. When Gallium3D was first presented, everyone was expecting LLVM to be the foundation of everyone's shader compilers, but at the time the mix of VLIW, vector and scalar hardware made that a lot harder than it first seemed, so each HW driver ended up with a shader compiler largely designed around the unique target hardware.
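
    One concrete reason a scalar IR tends to suit compute on scalar-SIMD hardware is that per-channel optimizations fall out almost for free. A toy sketch (invented names; real LLVM dead-code elimination works on SSA use-def chains, not strings) of eliminating a dead vector channel once the code has been scalarized:

    ```python
    # Toy sketch: with vector ops, an unused .w channel is hidden inside a
    # single vec4 instruction; once scalarized, each channel is its own
    # instruction and unused ones can be dropped independently.
    # Illustrative only, not how any real compiler pass is written.

    def dead_channel_elim(scalar_ops, live_outputs):
        """Keep only scalar ops whose destination is actually consumed."""
        return [op for op in scalar_ops if op.split(" = ")[0] in live_outputs]

    ops = [
        "OUT0.x = mul IN0.x, IN1.x",
        "OUT0.y = mul IN0.y, IN1.y",
        "OUT0.z = mul IN0.z, IN1.z",
        "OUT0.w = mul IN0.w, IN1.w",
    ]
    # Suppose the shader only reads .xyz downstream: the .w multiply is dead.
    live = {"OUT0.x", "OUT0.y", "OUT0.z"}
    print(dead_channel_elim(ops, live))
    ```
    
    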
    Last edited by bridgman; 01-20-2012 at 11:29 PM.

  8. #8
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411


    Quote Originally Posted by bridgman View Post
    When Gallium3D was first presented everyone was expecting LLVM to be the foundation of everyone's shader compilers, but at the time the mix of VLIW, vector and scalar hardware made that a lot harder than it first seemed so each HW driver ended up with a shader compiler largely designed around the unique target hardware.
    Why not a new shader compiler language with a more universal focus, one that isn't "designed around unique target hardware"?

    This really sounds to me like all the current solutions are wrong.

  9. #9
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,418


    The main reason developers implement hardware-specific shader compilers is that they take a *lot* less time to get running. Maybe a factor of 5-10. A few different developers (including us) have started working on new hardware generations saying "I'm going to write a great compiler this time" but didn't have anything like the time required and ended up writing something simple to let work on the rest of the driver proceed.

    That said, if you make a single IR that "natively handles everything" you can actually end up in a worse situation than two IRs, since you need to support "using it this way" for one hardware arch, "using it that way" for another hardware arch, plus everything in between. What you really want is a relatively "tight" common IR (without natively supporting everything on the planet) with highly efficient conversion to HW-specific forms that can be used to generate code, but that's pretty much the hardest approach of all. It was the idea behind LunarGLASS as I understand it - extending a standard platform via middleware so that it could handle a wider range of GPU hardware architectures, with the ability to have a generic IR at one level and an HW-specific IR at another level with powerful tools to convert between them without losing performance.

    There seem to be three likely ways this could play out:

    1. TGSI for graphics, LLVM IR for compute

    2. TGSI for everything

    3. LLVM IR for everything

    Right now I think #1 and #2 are probably equal likelihood, although if #2 turns out to work well I expect it would be preferred for simplicity. Option #3 seems premature without something like LunarGLASS.

    Again, in the end I think the decision will be shaped by whether or not any standardization emerges in the shader compilers below the HW layer. If so, there'll be a strong argument for using TGSI for graphics and the "native IR" for compute; if not, then "TGSI for everything" will probably come out on top.
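
    The "tight common IR with efficient conversion to HW-specific forms" idea can be sketched in a few lines. This is a made-up miniature (neither TGSI nor LLVM works this way, and both target names are invented): one generic fused multiply-add op is lowered differently depending on what the hardware natively supports:

    ```python
    # Minimal sketch of a two-level IR: a small generic IR at the top, and
    # per-target lowering into HW-specific forms below it. All opcode and
    # target names here are invented for illustration.

    # One generic op: r0 = a * b + c
    GENERIC = [("mad", "r0", "a", "b", "c")]

    def lower(ops, target):
        out = []
        for op, dst, *srcs in ops:
            if op == "mad" and target == "vliw":
                # A VLIW-ish target with a fused multiply-add slot keeps it
                # as a single instruction.
                out.append((f"{target}.mad", dst, *srcs))
            elif op == "mad" and target == "scalar":
                # A scalar target without fused MAD splits it into mul + add.
                out.append((f"{target}.mul", "t0", srcs[0], srcs[1]))
                out.append((f"{target}.add", dst, "t0", srcs[2]))
            else:
                raise ValueError(f"no lowering for {op} on {target}")
        return out

    print(lower(GENERIC, "vliw"))
    print(lower(GENERIC, "scalar"))
    ```

    The hard part, as noted above, is doing this kind of conversion without losing performance once the generic IR has to cover every op on every architecture.
    
    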
    Last edited by bridgman; 01-21-2012 at 09:35 AM.

  10. #10
    Join Date
    Jul 2011
    Location
    florida usa
    Posts
    80


    I guess it's a hard decision. One thing I like about going all-TGSI is that it could be tailored entirely to the requirements of GPUs, while LLVM is nice because it's already a well-established and well-known IR, though it might not be easy to use for GPUs.

    What is actually going on with LunarGLASS? I haven't heard anything from them in over a year. I know they are trying to use LLVM throughout the entire graphics stack, which I guess is, in the end, the most desirable solution, but it sounds like the hardest.

    That would mean that to accelerate OpenGL, the GLSL compiler would have to output LLVM IR, then the rest of OpenGL would output LLVM IR, and then Gallium3D would have to take in and pass LLVM IR down to the underlying hardware drivers, which would compile it into machine code for that particular hardware.

    So far LLVM only exists in the llvmpipe CPU back-end and in clover, so it's not all that well integrated into the Gallium driver stack yet, as far as I know.

    It seems to me that it's a very important goal to try to stabilize Gallium3D with a clear and precise plan for the future of the APIs and technologies used. We WANT to attract all these GPU vendors who are now popping up seemingly out of nowhere (just a couple of years ago the only graphics companies ever mentioned were AMD, NVIDIA, Intel, and sometimes VIA; now we are starting to hear about all of these GPUs paired with ARM CPUs, like Mali and PowerVR) to use Gallium3D as their end-all solution to quickly and efficiently support multiple OSes and APIs (you know, what Gallium3D was designed to do in the first place).
