Gallium3D OpenCL GSoC Near-Final Status Update


  • Gallium3D OpenCL GSoC Near-Final Status Update

    Phoronix: Gallium3D OpenCL GSoC Near-Final Status Update

    Google's 2011 Summer of Code is coming to an end, with today being one of the soft deadlines for the student developers to finish up work on their summer projects. Of this year's Mesa GSoC projects, I mentioned that the MLAA support for Gallium3D was a success, with the post-processing infrastructure and morphological anti-aliasing support seeking mainline inclusion in Mesa. Here's a status update on how the Gallium3D OpenCL support has come along over the summer...


  • #2
    Unfortunately, there's no word right now on future plans for the coming months -- if he will even be contributing to the Mesa project following the formal end of GSoC 2011 -- or eventual plans for graphics driver support and mainline integration.
    I believe he has already stated his intent to continue some work on the project after GSoC ends, though at a lesser rate. That was why he tried to get the complicated stuff done this summer; later he can add all the simple builtin functions that are still missing, like clamp, dot products, etc., which have to be implemented for all the different parameter types.
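
    To make that concrete, here is a rough sketch of what one of those builtin-library entries could look like. It is purely illustrative and not the project's actual code; it is written in OpenCL C and assumes Clang's __attribute__((overloadable)) extension, which is the usual way to declare overloaded C builtins.

    /* Hypothetical sketch of builtin-library entries, not the project's
     * actual code. Every builtin has to be spelled out for each scalar
     * and vector parameter type, which is tedious but simple work.
     * Assumes Clang's __attribute__((overloadable)) extension. */
    #define OVERLOAD __attribute__((overloadable))

    OVERLOAD float clamp(float x, float lo, float hi)
    {
        return x < lo ? lo : (x > hi ? hi : x);
    }

    OVERLOAD float4 clamp(float4 x, float4 lo, float4 hi)
    {
        /* Vector variants apply the scalar rule component-wise. */
        return (float4)(clamp(x.x, lo.x, hi.x),
                        clamp(x.y, lo.y, hi.y),
                        clamp(x.z, lo.z, hi.z),
                        clamp(x.w, lo.w, hi.w));
    }

    OVERLOAD float dot(float4 a, float4 b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
    }

    The same few lines then get repeated for float2, float3, the integer types and so on, which is why it is reasonable work to leave for after the summer.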

    Adding GPU support will likely be another major project. You'd need to write an LLVM backend to generate TGSI instructions, for a start. Not sure how much more work beyond that it would need.



    • #3
      Originally posted by smitty3268:
      Adding GPU support will likely be another major project. You'd need to write an LLVM backend to generate TGSI instructions, for a start. Not sure how much more work beyond that it would need.
      Likely? If you feel like writing another LLVM backend for NVIDIA and AMD GPUs, feel free to. But it won't be easy at all! Only one company has done this, and we should soon hear from them.

      As for sending the code to the GPU, it is relatively easy.
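
      For readers wondering what that hand-off looks like from the application's side, the sketch below is generic OpenCL 1.1 host code in C, not anything Clover-specific, with error handling omitted. The relevant points for this thread are that clBuildProgram is where the implementation compiles the kernel (with LLVM for the CPU today), and clEnqueueNDRangeKernel is where the compiled code and its arguments are actually handed to the device.

      /* Generic OpenCL host-side flow, shown only to illustrate where the
       * "send the code to the GPU" step sits; error handling omitted. */
      #include <CL/cl.h>
      #include <stdio.h>

      static const char *src =
          "__kernel void scale(__global float *buf, float f) {"
          "    size_t i = get_global_id(0);"
          "    buf[i] *= f;"
          "}";

      int main(void)
      {
          cl_platform_id platform;
          cl_device_id device;
          clGetPlatformIDs(1, &platform, NULL);
          clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, NULL);

          cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
          cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

          /* The implementation (Clover, in this discussion) compiles the
           * source here. */
          cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
          clBuildProgram(prog, 1, &device, "", NULL, NULL);
          cl_kernel kern = clCreateKernel(prog, "scale", NULL);

          float data[64] = { 1.0f };
          cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                      sizeof(data), data, NULL);
          float factor = 2.0f;
          clSetKernelArg(kern, 0, sizeof(buf), &buf);
          clSetKernelArg(kern, 1, sizeof(factor), &factor);

          /* This is where the compiled kernel and its arguments are
           * submitted to the device for execution. */
          size_t global = 64;
          clEnqueueNDRangeKernel(queue, kern, 1, NULL, &global, NULL, 0, NULL, NULL);
          clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

          printf("data[0] = %f\n", data[0]);
          return 0;
      }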



      • #4
        Originally posted by M?P?F:
        Likely? If you feel like writing another LLVM backend for NVIDIA and AMD GPUs, feel free to. But it won't be easy at all! Only one company has done this, and we should soon hear from them.

        As for sending the code to the GPU, it is relatively easy.
        You talk in riddles, old man.



        • #5
          Originally posted by smitty3268:
          Adding GPU support will likely be another major project. You'd need to write an LLVM backend to generate TGSI instructions, for a start. Not sure how much more work beyond that it would need.
          You'd have to make major revisions to TGSI before even starting on that. For starters, TGSI has no concept of memory. For another, TGSI (and GPUs) require structured control flow, which is something that LLVM doesn't understand.

          A better option, in my opinion, would be to let Gallium drivers take LLVM IR. That way, there would be only one massive project involved instead of two.
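
          To make the control-flow point concrete: TGSI, like most shader IRs, only has block-structured constructs (IF/ELSE/ENDIF, BGNLOOP/ENDLOOP), while LLVM IR lowers everything to a flat graph of basic blocks and conditional branches. The hypothetical C fragments below illustrate the gap; recovering the first form from the second, usually called structurization, is one of the hard parts of targeting a TGSI-like IR from LLVM. (And that still leaves the missing memory model.)

          /* Structured form: maps directly onto TGSI-style IF and LOOP
           * opcodes. Purely illustrative. */
          float structured(float x, int n)
          {
              for (int i = 0; i < n; i++) {
                  if (x > 1.0f)
                      x *= 0.5f;
                  else
                      x += 1.0f;
              }
              return x;
          }

          /* What LLVM IR keeps of the same function is, morally, just
           * labels and conditional jumps. A structurization pass has to
           * rediscover the if/loop nesting before TGSI could be emitted. */
          float unstructured(float x, int n)
          {
              int i = 0;
          check:
              if (!(i < n)) goto done;
              if (!(x > 1.0f)) goto add;
              x *= 0.5f;
              goto next;
          add:
              x += 1.0f;
          next:
              i++;
              goto check;
          done:
              return x;
          }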



          • #6
            I haven't looked at the implementation at all, so I could be wrong, but I'm going to assume that the LLVM backend is just for the CPU implementation, and that it's intended to compile to TGSI or such directly for GPU-based backends.



            • #7
              Originally posted by Plombo:
              You'd have to make major revisions to TGSI before even starting on that. For starters, TGSI has no concept of memory. For another, TGSI (and GPUs) require structured control flow, which is something that LLVM doesn't understand.

              A better option, in my opinion, would be to let Gallium drivers take LLVM IR. That way, there would be only one massive project involved instead of two.
              I'm no Gallium dev like you, but I meant that, since the project already uses LLVM to generate the CPU code, it would be easier to use it for the GPU too. It seems I was wrong, and for good reasons. Thanks for the enlightenment.



              • #8
                Originally posted by M?P?F:
                Likely? If you feel like writing another LLVM backend for NVIDIA and AMD GPUs, feel free to. But it won't be easy at all! Only one company has done this, and we should soon hear from them.

                As for sending the code to the GPU, it is relatively easy.
                Ha, yes, understatement of the year. I only meant to say that while we can likely expect further improvements to the code here, I don't think you can expect this developer to finish a full GPU implementation. That's going to take at LEAST another GSoC, and possibly more. I wasn't aware of everything that needed to be done - modifying Gallium drivers to accept LLVM IR directly seems like a good idea, but that's probably multiple GSoCs right there just for that part.

                And wasn't the developers' consensus largely that they weren't that hot on LLVM? IIRC, the least controversial option for dropping TGSI was to replace it with the GLSL IR, and some devs in particular were pretty anti-LLVM. I have no idea whether that supports the sort of operations it would need.



                • #9
                  Originally posted by smitty3268:
                  Ha, yes, understatement of the year. I only meant to say that while we can likely expect further improvements to the code here, I don't think you can expect this developer to finish a full GPU implementation. That's going to take at LEAST another GSoC, and possibly more. I wasn't aware of everything that needed to be done - modifying Gallium drivers to accept LLVM IR directly seems like a good idea, but that's probably multiple GSoCs right there just for that part.

                  And wasn't the developers' consensus largely that they weren't that hot on LLVM? IIRC, the least controversial option for dropping TGSI was to replace it with the GLSL IR, and some devs in particular were pretty anti-LLVM. I have no idea whether that supports the sort of operations it would need.
                  I'm not the right guy to ask. I'll add a suggestion to talk about that at XDC 2011. We know how to execute kernels on the NVIDIA boards, but we need to discuss the architecture that will generate this code.



                  • #10
                    I'm trying to get at least one of our developers up/down (depending on the developer) to XDC to talk about this as well. I think it's fair to say that we are leaning towards using LLVM IR for compute but staying with TGSI for graphics (or potentially GLSL IR if the community moves that way).

                    Obviously having two different IRs implies either some stacking or some duplicated work, so it is a good topic for discussion.

                    LLVM IR didn't seem like a great fit for graphics on GPUs with vector or VLIW shader hardware since so much of the workload was naturally 3- or 4-component vector operations, but for compute that isn't necessarily such an issue.
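
                    A toy illustration of that difference (not code from any driver): shader-style math is dominated by 3- and 4-component vector operations that map one-to-one onto vec4 or VLIW ALU slots, while a typical compute kernel is mostly scalar arithmetic and memory traffic, so LLVM IR's scalar-leaning shape hurts much less there.

                    /* Illustrative only. A fragment-shader-style computation is
                     * almost entirely 4-component vector work, so a vec4 or VLIW
                     * ALU stays fully busy. */
                    typedef struct { float x, y, z, w; } vec4;

                    vec4 mul4(vec4 a, vec4 b)
                    {
                        /* A single instruction on 4-wide vector hardware. */
                        return (vec4){ a.x * b.x, a.y * b.y, a.z * b.z, a.w * b.w };
                    }

                    float dot4(vec4 a, vec4 b)
                    {
                        /* Likewise a single DP4-style instruction. */
                        return a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
                    }

                    /* A compute-style kernel body, by contrast, is mostly scalar
                     * arithmetic and loads, so a scalar-leaning IR like LLVM's
                     * loses much less on this kind of code. */
                    float compute_style(const float *buf, int n)
                    {
                        float sum = 0.0f;
                        for (int i = 0; i < n; i++)
                            sum += buf[i] * buf[i];
                        return sum;
                    }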

