OpenCL Support Atop Gallium3D Is Here, Sort Of


  • phoronix
    started a topic OpenCL Support Atop Gallium3D Is Here, Sort Of


    Phoronix: OpenCL Support Atop Gallium3D Is Here, Sort Of

    OpenCL is present in NVIDIA's Linux driver as well as the just-released Mac OS X 10.6, but support for the Open Computing Language is also coming to the open-source world through the Gallium3D driver infrastructure.

    Back in February we heard of a goal of having OpenCL in Gallium3D by this summer, and then in May we heard it was hopefully coming soon along with an OpenGL 3.1 state tracker. Well, it's just about September and the summer is nearing an end, but we now have OpenCL support in the works.

    Over on the FreeDesktop.org Git server there is now a mesa/clover repository...

    http://www.phoronix.com/vr.php?view=NzQ5Mw

  • nanonyme
    replied
    I guess mostly what I had in mind was, e.g., OpenGL version 1.5.x: glGetVersion(GL_MAJOR) returns 1, glGetVersion(GL_MINOR) returns 5.x, and glGetVersion(GL_MAJOR|GL_MINOR) returns 1.5.x (as in, bitwise OR on 01 (bin) and 10 (bin) => 11 (bin)).
    PS: This is useless.
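    The flag scheme described above only works if the two constants are distinct bits that can be ORed into a third distinct value. A minimal sketch in Python; GL_MAJOR and GL_MINOR here are hypothetical flag values for illustration, not real OpenGL enums (the enums OpenGL 3.0 actually introduced are GL_MAJOR_VERSION and GL_MINOR_VERSION, queried separately via glGetIntegerv):

```python
# Hypothetical flag values chosen as distinct bits, per the scheme above.
GL_MAJOR = 0b01
GL_MINOR = 0b10

def gl_get_version(flags, major=1, minor=5):
    """Sketch of the proposed glGetVersion: dispatch on bitwise flags."""
    if flags == GL_MAJOR:
        return str(major)
    if flags == GL_MINOR:
        return str(minor)
    if flags == (GL_MAJOR | GL_MINOR):  # 0b01 | 0b10 == 0b11, a third value
        return f"{major}.{minor}"
    raise ValueError("unknown flag combination")

print(gl_get_version(GL_MAJOR))             # "1"
print(gl_get_version(GL_MAJOR | GL_MINOR))  # "1.5"
```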



  • nhaehnle
    replied
    Originally posted by nanonyme View Post
    Hmm, as in with the principle GL_MAJOR=1, GL_MINOR=2, GL_MAJOR|GL_MINOR=3, then pull first part for 1, second part for 2, full version with 3?
    Huh?


    (I need to write at least 10 characters, so yeah: can you elaborate? Because I totally didn't understand your post)



  • nanonyme
    replied
    Hmm, as in with the principle GL_MAJOR=1, GL_MINOR=2, GL_MAJOR|GL_MINOR=3, then pull first part for 1, second part for 2, full version with 3?
    Last edited by nanonyme; 09-07-2009, 12:56 PM.



  • nhaehnle
    replied
    Originally posted by BlackStar View Post
    Edit: Digging around, I think this post is the root of the GL_VERSION issue. The first number in "1.4 (2.1 Mesa 7.0.1)" is the highest OpenGL version that can be officially supported under the current GLX implementation when using indirect rendering. Which means that I either hit a Mesa bug or was just plain lucky by only using supported methods.
    Okay, that does make a lot of sense.

    Still, the whole confusion wouldn't even exist with glGetInteger(GL_[MAJOR|MINOR]).
    You do realize that when that is implemented, glGetInteger(GL_MAJOR|GL_MINOR) would return version 1.4 in the situation where the GL_VERSION string is "1.4 (2.1 ...)", right? You would get exactly what you get with atoi or GLEE-style parsing.

    So yeah, it might be a little more convenient for application developers, but it wouldn't actually change anything.
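    The equivalence nhaehnle describes can be made concrete: an atoi/GLEE-style parse just takes the leading "major.minor" of the GL_VERSION string, which for "1.4 (2.1 Mesa 7.0.1)" yields the same 1.4 that a hypothetical glGetInteger(GL_MAJOR)/glGetInteger(GL_MINOR) pair would report. A sketch in Python (real code would of course call glGetString(GL_VERSION) in C):

```python
def parse_gl_version(version_string):
    """atoi/GLEE-style parse: leading 'major.minor' of the GL_VERSION string."""
    head = version_string.split()[0]    # e.g. "1.4" from "1.4 (2.1 Mesa 7.0.1)"
    major, minor = head.split(".")[:2]  # drop any vendor release number
    return int(major), int(minor)

print(parse_gl_version("1.4 (2.1 Mesa 7.0.1)"))  # (1, 4)
print(parse_gl_version("2.1.7281"))              # (2, 1)
```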



  • V!NCENT
    replied
    Originally posted by deanjo View Post
    No, more that the algorithm itself has to be of a parallel nature. For example, there are many ways of calculating pi: some algorithms are serial in nature, others are parallel in nature. You could get these algorithms running on a GPU, but the results may be slower because of the code's serial nature; if you switch to an algorithm that is parallel in nature, you can have massive speed gains.
    Or, of course, if you have to do a lot of independent serial calculations and like to do as much of them at the same time as possible... which is why OpenCL interests me.



  • BlackStar
    replied
    Originally posted by nhaehnle View Post
    In any case, this evidence tends to be in favor of having a non-restricted version string.
    On the contrary, this proves why glGetInteger(GL_MAJOR) and glGetInteger(GL_MINOR) are superior to any solution involving string parsing.

    These methods can be trivially supported in Mesa 7.6 without advertising OpenGL 3.0. The NVIDIA and ATI binary drivers expose these methods even on 2.1 contexts - there's no reason why Mesa cannot do the same.

    Edit: Digging around, I think this post is the root of the GL_VERSION issue. The first number in "1.4 (2.1 Mesa 7.0.1)" is the highest OpenGL version that can be officially supported under the current GLX implementation when using indirect rendering. Which means that I either hit a Mesa bug or was just plain lucky by only using supported methods.

    Still, the whole confusion wouldn't even exist with glGetInteger(GL_[MAJOR|MINOR]).
    Last edited by BlackStar; 09-04-2009, 12:52 PM.



  • deanjo
    replied
    Originally posted by V!NCENT View Post
    Do you mean the extra CPU burden/time for getting the compute kernel at the graphics card and back again has to be less than letting the CPU do the calculation(s) itself?
    No, more that the algorithm itself has to be of a parallel nature. For example, there are many ways of calculating pi: some algorithms are serial in nature, others are parallel in nature. You could get these algorithms running on a GPU, but the results may be slower because of the code's serial nature; if you switch to an algorithm that is parallel in nature, you can have massive speed gains.
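    The pi example can be sketched concretely. Below, Gauss-Legendre is inherently serial (each iteration consumes the previous one's results), while the Leibniz series is embarrassingly parallel (every term is independent, so each chunk's partial sum could run on a separate GPU work-item and be reduced at the end). A purely illustrative Python sketch, not OpenCL code:

```python
import math

def pi_serial_gauss_legendre(iterations=4):
    """Gauss-Legendre: each iteration depends on the previous one,
    so there is no way to split the work across independent work-items."""
    a, b, t, p = 1.0, 1.0 / math.sqrt(2.0), 0.25, 1.0
    for _ in range(iterations):
        a_next = (a + b) / 2.0
        b = math.sqrt(a * b)
        t -= p * (a - a_next) ** 2
        a = a_next
        p *= 2.0
    return (a + b) ** 2 / (4.0 * t)

def pi_parallel_leibniz(n=1_000_000, chunks=4):
    """Leibniz series: every term is independent, so each chunk's partial
    sum could be computed by a separate work-item and reduced afterwards."""
    def partial(start, stop):
        return sum((-1.0) ** k / (2 * k + 1) for k in range(start, stop))
    step = n // chunks
    return 4.0 * sum(partial(i * step, (i + 1) * step) for i in range(chunks))
```

    Note the trade-off deanjo describes: the serial algorithm converges far faster per step, but only the parallel-natured one can actually use a GPU's many cores.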



  • nhaehnle
    replied
    Originally posted by BlackStar View Post
    Checking back to my code comments from 2007, I found the following:

    On Mesa 7.0.1, Mesa/soft returns: "1.4 (2.1 Mesa 7.0.1)". On the same hardware (R500), install fglrx 8.2 and you get "2.1.7281 ...". Change to indirect and you get "1.4 (2.1.7281 ...)"

    So how do you interpret those strings? 1.4 only makes sense as the server version, because Mesa 7.0 sure as hell isn't limited to 1.4 in software rendering - and neither is R500 w/ fglrx.
    The version reported by Mesa/soft looks wrong. This may have been a Mesa bug, which has clearly been fixed since then. But a bug is a bug - under the changed specification where only the version is reported, that Mesa version would probably have reported "1.4" instead of "2.1". Then you wouldn't even have been able to add your workaround.

    The fglrx version string looks perfectly okay.

    As for the indirect version string, it's hard to judge what's going on from a distance.

    Another possible explanation is that when you chose Mesa/soft, you actually got an indirect rendering context instead of a direct rendering software context. Then it might have been a libGL bug. Or maybe it wasn't a bug at all because not all parts of OpenGL 2.1 were properly supported in the GLX protocol? Maybe you were simply lucky in that the subset you used worked correctly.

    In any case, this evidence tends to be in favor of having a non-restricted version string.



  • V!NCENT
    replied
    Originally posted by deanjo View Post
    Only certain types of applications (applications that benefit from parallel data operations) can be sped up, and that of course is only if the application is coded to take advantage of OpenCL (and coded in a manner that doesn't actually hurt performance).
    Do you mean the extra CPU burden/time for getting the compute kernel at the graphics card and back again has to be less than letting the CPU do the calculation(s) itself?
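    That intuition can be put into a back-of-envelope model: offloading only wins when the round-trip transfer time plus GPU compute time beats the CPU compute time. A toy sketch with purely illustrative numbers (8 GB/s bus, 10 GFLOPS CPU, 100 GFLOPS GPU are assumptions, not measurements of any real hardware):

```python
def gpu_offload_wins(n_bytes, flops, bus_bw=8e9, cpu_flops=1e10, gpu_flops=1e11):
    """Toy model: is a CPU -> GPU -> CPU round trip worth it?

    All rates are illustrative assumptions, not real-hardware figures.
    """
    t_cpu = flops / cpu_flops
    t_gpu = 2 * n_bytes / bus_bw + flops / gpu_flops  # round trip + kernel
    return t_gpu < t_cpu

# Little arithmetic per byte: the transfer dominates and the CPU wins.
print(gpu_offload_wins(n_bytes=1_000_000, flops=1e6))   # False
# Heavy arithmetic per byte: the GPU wins despite the copies.
print(gpu_offload_wins(n_bytes=1_000_000, flops=1e10))  # True
```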

