
Thread: Gallium3D OpenCL GSoC Near-Final Status Update

  1. #1
    Join Date
    Jan 2007
    Posts
    14,770

    Default Gallium3D OpenCL GSoC Near-Final Status Update

    Phoronix: Gallium3D OpenCL GSoC Near-Final Status Update

    Google's 2011 Summer of Code is coming to an end with today being one of the soft deadlines for the student developers to finish up work on their summer projects. Of the Mesa / GSoC summer projects this year, I mentioned the MLAA support for Gallium3D was a success with the post-processing infrastructure and morphological anti-aliasing support seeking mainline inclusion into Mesa. Here's a status update on how the Gallium3D OpenCL support has come along over the summer...

    http://www.phoronix.com/vr.php?view=OTgwOQ

  2. #2
    Join Date
    Oct 2008
    Posts
    3,124

    Default

    Unfortunately, there's no word right now on future plans for the coming months -- if he will even be contributing to the Mesa project following the formal end of GSoC 2011 -- or eventual plans for graphics driver support and mainline integration.
    I believe he has already stated his intent to continue some work on the project after GSoC ends, at a reduced pace. That's why he tried to get the complicated stuff done this summer, so that later he can add all the simple built-in functions that are still missing (clamp, dot products, etc.), which have to be provided for all the different parameter types.
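
    To give a sense of why those leftovers are tedious rather than difficult, here is a small C++ sketch (purely illustrative, not Clover's actual code; the float4 alias and the clamp overloads are made up for this example) of how a single built-in fans out into one definition per parameter type, since OpenCL C has no templates:

        #include <algorithm>
        #include <array>

        // Illustration only -- not the real code. OpenCL C has no templates,
        // so every builtin needs a separate definition for each scalar/vector
        // type plus the mixed vector/scalar forms.
        using float4 = std::array<float, 4>;

        float clamp(float x, float lo, float hi) {
            return std::min(std::max(x, lo), hi);
        }

        float4 clamp(const float4 &x, float lo, float hi) {   // vector / scalar
            float4 r;
            for (int i = 0; i < 4; ++i)
                r[i] = clamp(x[i], lo, hi);
            return r;
        }

        float4 clamp(const float4 &x, const float4 &lo, const float4 &hi) {
            float4 r;                                          // vector / vector
            for (int i = 0; i < 4; ++i)
                r[i] = clamp(x[i], lo[i], hi[i]);
            return r;
        }
        // ...and again for float2, float3, float8, int, uint, double, and so on.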

    Adding GPU support will likely be another major project. You'd need to write an LLVM backend to generate TGSI instructions, for a start. Not sure how much more work beyond that it would need.
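
    Just to give an idea of what "generate TGSI from LLVM" means at its simplest, here is a hypothetical, heavily simplified C++ sketch (the function name and the opcode mapping are made up, and it assumes a reasonably recent LLVM C++ API) that walks LLVM IR and prints TGSI-style mnemonics. Everything hard, like register allocation, control flow, memory and intrinsics, is left out, which is exactly why it would be a project of its own:

        #include <llvm/IR/Function.h>
        #include <llvm/IR/Instructions.h>
        #include <llvm/Support/raw_ostream.h>

        // Hypothetical and heavily simplified: map a handful of LLVM
        // instructions to TGSI-style opcodes, ignoring operands entirely.
        void emitTgsiLike(const llvm::Function &F) {
            for (const llvm::BasicBlock &BB : F) {
                for (const llvm::Instruction &I : BB) {
                    switch (I.getOpcode()) {
                    case llvm::Instruction::FAdd: llvm::outs() << "ADD ...\n"; break;
                    case llvm::Instruction::FMul: llvm::outs() << "MUL ...\n"; break;
                    case llvm::Instruction::Ret:  llvm::outs() << "END\n";     break;
                    default:
                        llvm::outs() << "/* unhandled: " << I.getOpcodeName() << " */\n";
                    }
                }
            }
        }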

  3. #3
    Join Date
    Feb 2009
    Location
    France
    Posts
    306

    Default

    Quote Originally Posted by smitty3268 View Post
    Adding GPU support will likely be another major project. You'd need to write an LLVM backend to generate TGSI instructions, for a start. Not sure how much more work beyond that it would need.
    Likely? If you feel like writing another LLVM backend for NVIDIA and AMD GPUs, feel free to. But it won't be easy at all! Only one company has done this, and we should soon hear from them.

    As for sending the code to the GPU, it is relatively easy.

  4. #4
    Join Date
    Jan 2009
    Posts
    1,676

    Default

    Quote Originally Posted by MPF View Post
    Likely? If you feel like writing another LLVM backend for NVIDIA and AMD GPUs, feel free to. But it won't be easy at all! Only one company has done this, and we should soon hear from them.

    As for sending the code to the GPU, it is relatively easy.
    You talk in riddles, old man.

  5. #5
    Join Date
    Sep 2010
    Posts
    146

    Default

    Quote Originally Posted by smitty3268 View Post
    Adding GPU support will likely be another major project. You'd need to write an LLVM backend to generate TGSI instructions, for a start. Not sure how much more work beyond that it would need.
    You'd have to make major revisions to TGSI before even starting on that. For starters, TGSI has no concept of memory. For another, TGSI (and GPUs) require structured control flow, which is something that LLVM doesn't understand.

    A better option, in my opinion, would be to let Gallium drivers take LLVM IR. That way, there would be only one massive project involved instead of two.
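
    To make the structured-control-flow point concrete, here is a hedged C++ sketch (nothing Mesa-specific; the function name is made up and it assumes a reasonably recent LLVM API) that builds a trivial if/else with IRBuilder. All LLVM records is basic blocks and branch edges; a TGSI target would first have to rediscover the nesting and then emit IF/ELSE/ENDIF with explicit registers instead of SSA values and phi nodes:

        #include <llvm/IR/Constants.h>
        #include <llvm/IR/IRBuilder.h>
        #include <llvm/IR/LLVMContext.h>
        #include <llvm/IR/Module.h>

        // Illustration only: LLVM sees an arbitrary graph of basic blocks,
        // while TGSI wants nested IF ... ELSE ... ENDIF, so a translator has
        // to "structurize" the CFG before it can emit anything.
        void buildDiamond(llvm::Module &M) {
            llvm::LLVMContext &Ctx = M.getContext();
            llvm::IRBuilder<> B(Ctx);

            llvm::FunctionType *FTy =
                llvm::FunctionType::get(B.getFloatTy(), {B.getFloatTy()}, false);
            llvm::Function *F = llvm::Function::Create(
                FTy, llvm::Function::ExternalLinkage, "example", &M);

            llvm::BasicBlock *Entry = llvm::BasicBlock::Create(Ctx, "entry", F);
            llvm::BasicBlock *Then  = llvm::BasicBlock::Create(Ctx, "then",  F);
            llvm::BasicBlock *Else  = llvm::BasicBlock::Create(Ctx, "else",  F);
            llvm::BasicBlock *Merge = llvm::BasicBlock::Create(Ctx, "merge", F);

            B.SetInsertPoint(Entry);
            llvm::Value *X = &*F->arg_begin();
            llvm::Value *Cond =
                B.CreateFCmpOGT(X, llvm::ConstantFP::get(B.getFloatTy(), 0.0));
            B.CreateCondBr(Cond, Then, Else);   // two edges, no nesting information

            B.SetInsertPoint(Then);
            llvm::Value *T = B.CreateFMul(X, X);
            B.CreateBr(Merge);

            B.SetInsertPoint(Else);
            llvm::Value *E = B.CreateFNeg(X);
            B.CreateBr(Merge);

            B.SetInsertPoint(Merge);
            llvm::PHINode *Phi = B.CreatePHI(B.getFloatTy(), 2);
            Phi->addIncoming(T, Then);
            Phi->addIncoming(E, Else);
            B.CreateRet(Phi);
            // Roughly "IF cond ... ELSE ... ENDIF" in TGSI, with a temporary
            // register taking the place of the phi node.
        }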

  6. #6
    Join Date
    Nov 2007
    Posts
    1,024

    Default

    I haven't looked at the implementation at all, so I could be wrong, but I'm going to assume that the LLVM backend is just for the CPU implementation, and that it's intended to compile to TGSI or such directly for GPU-based backends.

  7. #7
    Join Date
    Feb 2009
    Location
    France
    Posts
    306

    Default

    Quote Originally Posted by Plombo View Post
    You'd have to make major revisions to TGSI before even starting on that. For starters, TGSI has no concept of memory. For another, TGSI (and GPUs) require structured control flow, which is something that LLVM doesn't understand.

    A better option, in my opinion, would be to let Gallium drivers take LLVM IR. That way, there would be only one massive project involved instead of two.
    I'm no Gallium dev like you, but I meant that since the project already uses LLVM to generate the CPU code, it would be easier to use it for the GPU too. It seems I was wrong, for good reasons. Thanks for the enlightenment.

  8. #8
    Join Date
    Oct 2008
    Posts
    3,124

    Default

    Quote Originally Posted by MPF View Post
    Likely? If you feel like writing another LLVM backend for NVIDIA and AMD GPUs, feel free to. But it won't be easy at all! Only one company has done this, and we should soon hear from them.

    As for sending the code to the GPU, it is relatively easy.
    Ha, yes, understatement of the year. I only meant to say that while we can likely expect further improvements to the code here, I don't think you can expect this developer to finish coding a full GPU implementation. That's going to take at LEAST another GSoC, and possibly more. I wasn't aware of everything that needs to be done; modifying Gallium drivers to accept LLVM IR directly seems like a good idea, but that's probably multiple GSoCs right there just for that part.

    And wasn't the developer consensus largely that they weren't that hot on LLVM? IIRC, the least controversial option for dropping TGSI was to replace it with the GLSL IR, and some devs in particular were pretty anti-LLVM. I have no idea whether that supports the sort of operations it would need.

  9. #9
    Join Date
    Feb 2009
    Location
    France
    Posts
    306

    Default

    Quote Originally Posted by smitty3268 View Post
    Ha, yes, understatement of the year. I only meant to say that while we can likely expect further improvements to the code here, I don't think you can expect this developer to finish coding a full GPU implementation. That's going to take at LEAST another GSoC, and possibly more. I wasn't aware of everything that needs to be done; modifying Gallium drivers to accept LLVM IR directly seems like a good idea, but that's probably multiple GSoCs right there just for that part.

    And wasn't the developer consensus largely that they weren't that hot on LLVM? IIRC, the least controversial option for dropping TGSI was to replace it with the GLSL IR, and some devs in particular were pretty anti-LLVM. I have no idea whether that supports the sort of operations it would need.
    I'm not the right guy to ask. I'll suggest talking about that at XDC 2011. We know how to execute kernels on NVIDIA boards, but we need to discuss the architecture that will generate this code.

  10. #10
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,453

    Default

    I'm trying to get at least one of our developers up/down (depending on the developer) to XDC to talk about this as well. I think it's fair to say that we are leaning towards using LLVM IR for compute but staying with TGSI for graphics (or potentially GLSL IR if the community moves that way).

    Obviously having two different IRs implies either some stacking or some duplicated work, so it is a good topic for discussion.

    LLVM IR didn't seem like a great fit for graphics on GPUs with vector or VLIW shader hardware since so much of the workload was naturally 3- or 4-component vector operations, but for compute that isn't necessarily such an issue.
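
    As a purely illustrative example of that vec4 point (C++ standing in for shader code; the names dp4 and transform are made up): a vertex transform is four 4-component dot products, i.e. four TGSI DP4 instructions that map one-to-one onto a 4-wide vector or VLIW slot, while the same math written in scalar form is 16 multiplies and 12 adds that a backend would have to pack back together.

        #include <array>

        // Illustration only: the kind of math a vertex shader does all day.
        // Each dp4() call is a single TGSI DP4 instruction on vec4 hardware;
        // scalarized, it is 4 multiplies and 3 adds per call.
        using Vec4 = std::array<float, 4>;
        using Mat4 = std::array<Vec4, 4>;   // row-major, for this sketch only

        static float dp4(const Vec4 &a, const Vec4 &b) {
            return a[0] * b[0] + a[1] * b[1] + a[2] * b[2] + a[3] * b[3];
        }

        // gl_Position = mvp * position: four DP4s on vec4 hardware.
        Vec4 transform(const Mat4 &mvp, const Vec4 &pos) {
            return { dp4(mvp[0], pos), dp4(mvp[1], pos),
                     dp4(mvp[2], pos), dp4(mvp[3], pos) };
        }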
