Compute Shader Code Begins Landing For Gallium3D

    Phoronix: Compute Shader Code Begins Landing For Gallium3D

    Samuel Pitoiset began pushing his Gallium3D Mesa state tracker changes this morning for supporting compute shaders via the GL_ARB_compute_shader extension...


  • #2
    Originally posted by phoronix
    Phoronix: Compute Shader Code Begins Landing For Gallium3D

    Mesa 11.2 though is being branched next week so unless there's a mad rush at the end, the ARB_compute_shader support won't appear for the Gallium3D hardware drivers in a released version of Mesa for another three months.
    Actually, ARB_compute_shader requires GL 4.2, and while Mesa tends to ignore such restrictions, I think most compute shaders will want to make use of images. As a result, exposing ARB_compute_shader will have to wait on images being available, which will most definitely not make the branchpoint.
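
    To make that concrete, here is a minimal GLSL sketch of the kind of shader being discussed (hypothetical code, not from Mesa or the article): a compute shader has no framebuffer outputs, so in practice it usually writes its result through an image store, which is why exposing the extension without image support isn't very useful.

        #version 420
        #extension GL_ARB_compute_shader : require

        // Hypothetical compute shader, purely illustrative.
        layout(local_size_x = 8, local_size_y = 8) in;
        layout(binding = 0, rgba8) uniform writeonly image2D dst;

        void main() {
            // The only place this shader can deliver its result is the bound image.
            ivec2 p = ivec2(gl_GlobalInvocationID.xy);
            imageStore(dst, p, vec4(0.0, 1.0, 0.0, 1.0));
        }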

    • #3
      Originally posted by imirkin

      Actually, ARB_compute_shader requires GL 4.2, and while Mesa tends to ignore such restrictions, I think most compute shaders will want to make use of images. As a result, exposing ARB_compute_shader will have to wait on images being available, which will most definitely not make the branchpoint.
      What's the difference between textures and images and what makes images so hard to implement? I assume an image is basically one mip level of a texture, but where is the catch that makes this so difficult?

      • #4
        Originally posted by CrystalGamma

        What's the difference between textures and images and what makes images so hard to implement? I assume an image is basically one mip level of a texture, but where is the catch that makes this so difficult?
        Images allow reading and writing at the same time; textures can only be read.
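
        As a rough GLSL sketch (hypothetical shader, made-up names), the difference looks like this: a sampler can only be read, while an image (bound with glBindImageTexture to a single level of a texture) can be loaded and stored from within the same invocation.

            #version 420
            #extension GL_ARB_compute_shader : require

            layout(local_size_x = 16) in;

            layout(binding = 0)       uniform sampler2D tex; // texture: sampled, read-only
            layout(binding = 0, r32f) uniform image2D   img; // image: raw texels, readable AND writable

            void main() {
                ivec2 p = ivec2(gl_GlobalInvocationID.x, 0);
                float sampled = textureLod(tex, vec2(p) / 256.0, 0.0).x; // read-only texturing path
                float raw     = imageLoad(img, p).x;                     // read the texel directly...
                imageStore(img, p, vec4(raw + sampled));                 // ...and write it back in the same shader
            }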

        • #5
          Originally posted by CrystalGamma

          What's the difference between textures and images and what makes images so hard to implement? I assume an image is basically one mip level of a texture, but where is the catch that makes this so difficult?
          As mentioned earlier, images are writable. They don't go through the texturing logic at all. On Fermi, there's a whole separate set of instructions to deal with them. And unlike the texturing instructions which take care of (nearly) all the various cases, these instructions don't. They come in 1d, 2d, and "e2d" varieties. The coordinates need various fixups, I haven't the faintest clue how accessing other layers works, and 3d textures have been a total fail. Working it all out will require a lot of careful tracing of the blob for all the various cases.

          The simple stuff is done (1d, 2d images) and largely works. But I haven't even been able to get buffer images working, much less 1darray/2darray/3d. It requires the driver sending down various parameters in a constbuf and then somehow integrating those parameters into the computed coordinates. Fun!
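
          To give a sense of the surface area, here is a hedged GLSL sketch (hypothetical declarations, not Mesa code) of the image types in question; on the GLSL side they all look alike, but per the above each one needs its own coordinate handling in the hardware backend.

              #version 420
              #extension GL_ARB_compute_shader : require

              layout(local_size_x = 64) in;

              // Hypothetical bindings, purely illustrative.
              layout(binding = 0, r32f) uniform image1D      img1d;  // 1d: single integer coord
              layout(binding = 1, r32f) uniform image2D      img2d;  // 2d: ivec2 coord
              layout(binding = 2, r32f) uniform imageBuffer  imgbuf; // buffer: flat texel index into a buffer object
              layout(binding = 3, r32f) uniform image2DArray imgarr; // 2d array: ivec3, .z selects the layer
              layout(binding = 4, r32f) uniform image3D      img3d;  // 3d: ivec3 with a real depth axis

              void main() {
                  int i = int(gl_GlobalInvocationID.x);
                  imageStore(img1d,  i,              vec4(1.0));
                  imageStore(img2d,  ivec2(i, 0),    vec4(1.0));
                  imageStore(imgbuf, i,              vec4(1.0));
                  imageStore(imgarr, ivec3(i, 0, 1), vec4(1.0)); // layer 1
                  imageStore(img3d,  ivec3(i, 0, 0), vec4(1.0));
              }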

          And then we get to repeat this fun exercise on Kepler. And then again on Maxwell. (Really hoping that Kepler2 is the same as Kepler1 except for the instruction encodings.)
