OpenCL Support Atop Gallium3D Is Here, Sort Of


  • #46
    Originally posted by BlackStar View Post
    This code will work correctly iff the driver follows the revised OpenGL 3.0 specs for glGetString and returns the version directly, i.e. "2.1".

    Right now, Mesa returns a string in the form "1.4 (Mesa 2.1)". Your code will parse this as (major, minor) = (1, 4), when the actual OpenGL version is 2.1 (1.4 is the server version, IIRC).
    You are absolutely mistaken.

    The string returned is "<OpenGL version> Mesa <Mesa version>". The string in the sample code which I posted above is what is returned by the r300 driver: "1.5 Mesa 7.6-devel". This means that the supported OpenGL version is 1.5, provided by a Mesa 7.6 development version.

    This is absolutely parsed correctly by both the atoi code I posted and the code that GLEE uses.

    Note that I'm talking about the string returned by glGetString(GL_VERSION). This is the final supported OpenGL version, which has nothing to do with the GLX version or with how the OpenGL version is negotiated conceptually between client and server.

    Edit: Oh, and there may be some strings in glxinfo which suggest something like Mesa supports OpenGL 2.1 (I can't test right now, not at home) - which is true, but completely beside the point. As long as the hardware driver only supports OpenGL 1.5, OpenGL 1.5 is what you will get, and what the version string correctly tells you. If you try to call 2.x functions, your program will crash. So again, everybody has always parsed the OpenGL version string like this and it has always been correct. I'm really curious where this misconception comes from.
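
    For reference, a minimal C sketch of that atoi-style parsing (illustrative only, not the exact code posted earlier in the thread; it assumes a current OpenGL context and the helper name is made up):

    #include <stdlib.h>
    #include <string.h>
    #include <GL/gl.h>

    /* Parse the leading "major.minor" of glGetString(GL_VERSION),
     * e.g. "1.5 Mesa 7.6-devel" -> major 1, minor 5. */
    static void get_gl_version(int *major, int *minor)
    {
        const char *version = (const char *)glGetString(GL_VERSION);
        *major = 0;
        *minor = 0;
        if (version == NULL)
            return;
        *major = atoi(version);              /* atoi stops at the '.' */
        const char *dot = strchr(version, '.');
        if (dot != NULL)
            *minor = atoi(dot + 1);          /* atoi stops at the ' ' */
    }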
    Last edited by nhaehnle; 09-04-2009, 10:06 AM.


    • #47
      Originally posted by nhaehnle View Post
      You are absolutely mistaken.

      The string returned is "<OpenGL version> Mesa <Mesa version>". The string in the sample code which I posted above is what is returned by the r300 driver: "1.5 Mesa 7.6-devel". This means that the supported OpenGL version is 1.5, provided by a Mesa 7.6 development version.

      This is absolutely parsed correctly by both the atoi code I posted and the code that GLEE uses.
      Checking back to my code comments from 2007, I found the following:

      On Mesa 7.0.1, Mesa/soft returns: "1.4 (2.1 Mesa 7.0.1)". On the same hardware (R500), install fglrx 8.2 and you get "2.1.7281 ...". Change to indirect and you get "1.4 (2.1.7281 ...)"

      So how do you interpret those strings? 1.4 only makes sense as the server version, because Mesa 7.0 sure as hell isn't limited to 1.4 in software rendering - and neither is R500 w/ fglrx.

      Misconception? Maybe.

      For the record, I *did* add a workaround to parse the "2.1" part of the strings above and the program worked correctly.
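
      A sketch of what such a workaround could look like (illustrative only, not the actual code from back then): parse both the leading version and the parenthesized one, and keep the higher.

      #include <stdlib.h>
      #include <string.h>

      /* Parse "major.minor" starting at s. */
      static void parse_pair(const char *s, int *major, int *minor)
      {
          const char *dot = strchr(s, '.');
          *major = atoi(s);
          *minor = dot ? atoi(dot + 1) : 0;
      }

      /* Workaround sketch for strings like "1.4 (2.1 Mesa 7.0.1)":
       * take the parenthesized version when it is higher than the
       * leading one. */
      static void parse_gl_version(const char *v, int *major, int *minor)
      {
          parse_pair(v, major, minor);
          const char *paren = strchr(v, '(');
          if (paren != NULL) {
              int pmaj, pmin;
              parse_pair(paren + 1, &pmaj, &pmin);
              if (pmaj > *major || (pmaj == *major && pmin > *minor)) {
                  *major = pmaj;
                  *minor = pmin;
              }
          }
      }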
      Last edited by BlackStar; 09-04-2009, 10:55 AM.


      • #48
        Originally posted by deanjo View Post
        Only certain types of applications (applications that benefit from parallel data operations) can be sped up, and of course only if the application is coded to take advantage of OpenCL (and coded in a manner that doesn't actually hurt performance).
        Do you mean that the extra CPU burden/time of getting the compute kernel to the graphics card and the results back again has to be less than just letting the CPU do the calculation(s) itself?


        • #49
          Originally posted by BlackStar View Post
          Checking back to my code comments from 2007, I found the following:

          On Mesa 7.0.1, Mesa/soft returns: "1.4 (2.1 Mesa 7.0.1)". On the same hardware (R500), install fglrx 8.2 and you get "2.1.7281 ...". Change to indirect and you get "1.4 (2.1.7281 ...)"

          So how do you interpret those strings? 1.4 only makes sense as the server version, because Mesa 7.0 sure as hell isn't limited to 1.4 in software rendering - and neither is R500 w/ fglrx.
          The version reported by Mesa/soft looks wrong. This may have been a Mesa bug, which has clearly been fixed since then. But a bug is a bug - under the changed specification where only the version is reported, that Mesa version would probably have reported "1.4" instead of "2.1". Then you wouldn't even have been able to add your workaround.

          The fglrx version string looks perfectly okay.

          As for the indirect version string, it's hard to judge from a distance what's going on there.

          Another possible explanation is that when you chose Mesa/soft, you actually got an indirect rendering context instead of a direct rendering software context. Then it might have been a libGL bug. Or maybe it wasn't a bug at all because not all parts of OpenGL 2.1 were properly supported in the GLX protocol? Maybe you were simply lucky in that the subset you used worked correctly.

          In any case, this evidence tends to be in favor of having a non-restricted version string.


          • #50
            Originally posted by V!NCENT View Post
            Do you mean that the extra CPU burden/time of getting the compute kernel to the graphics card and the results back again has to be less than just letting the CPU do the calculation(s) itself?
            No, more that the algorithm itself has to be of a parallel nature. For example, there are many ways of calculating pi: some algorithms are serial in nature, others are parallel. You could get a serial algorithm running on a GPU, but the result may be slower because of the code's serial nature; switch to an algorithm that is parallel in nature and you can see massive speed gains.
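
            To make that concrete, here is a small illustrative C sketch (mine, not from the thread): every term of the Leibniz series is independent, so that loop maps naturally onto GPU work-items, while each Gauss-Legendre iteration depends on the previous one, so extra cores cannot help it.

            #include <math.h>
            #include <stdio.h>

            /* Parallel-friendly: each term of the Leibniz series for pi
             * is independent of all the others, so the loop could be
             * split across thousands of GPU work-items. */
            static double pi_leibniz(long n)
            {
                double sum = 0.0;
                for (long k = 0; k < n; k++)
                    sum += (k % 2 ? -1.0 : 1.0) / (2.0 * k + 1.0);
                return 4.0 * sum;
            }

            /* Serial by nature: every Gauss-Legendre iteration needs the
             * previous iteration's results, so it cannot be spread out. */
            static double pi_gauss_legendre(int iters)
            {
                double a = 1.0, b = 1.0 / sqrt(2.0), t = 0.25, p = 1.0;
                for (int i = 0; i < iters; i++) {
                    double an = (a + b) / 2.0;
                    b = sqrt(a * b);
                    t -= p * (a - an) * (a - an);
                    a = an;
                    p *= 2.0;
                }
                return (a + b) * (a + b) / (4.0 * t);
            }

            int main(void)
            {
                printf("%.10f\n%.10f\n", pi_leibniz(1000000), pi_gauss_legendre(4));
                return 0;
            }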


            • #51
              Originally posted by nhaehnle View Post
              In any case, this evidence tends to be in favor of having a non-restricted version string.
              On the contrary, this proves why glGetInteger(GL_MAJOR) and glGetInteger(GL_MINOR) are superior to any solution involving string parsing.

              These methods can be trivially supported in Mesa 7.6 without advertising OpenGL 3.0. The NVIDIA and ATI binary drivers expose these methods even on 2.1 contexts - there's no reason why Mesa cannot do the same.

              Edit: Digging around, I think this post is the root of the GL_VERSION issue. The first number in "1.4 (2.1 Mesa 7.0.1)" is the highest OpenGL version that can be officially supported under the current GLX implementation when using indirect rendering. Which means that I either hit a Mesa bug or was just plain lucky by only using supported methods.

              Still, the whole confusion wouldn't even exist with glGetInteger(GL_[MAJOR|MINOR]).
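
              For reference, the tokens that OpenGL 3.0 actually defines are GL_MAJOR_VERSION and GL_MINOR_VERSION, queried through glGetIntegerv; here is a sketch with a string-parsing fallback for older contexts (the helper name is made up):

              #include <stdlib.h>
              #include <string.h>
              #include <GL/gl.h>

              #ifndef GL_MAJOR_VERSION          /* GL 3.0 tokens, for old headers */
              #define GL_MAJOR_VERSION 0x821B
              #define GL_MINOR_VERSION 0x821C
              #endif

              /* Query the context version via the GL 3.0 integer queries,
               * falling back to parsing GL_VERSION where they are absent. */
              static void query_gl_version(int *major, int *minor)
              {
                  GLint maj = 0, min = 0;
                  while (glGetError() != GL_NO_ERROR)
                      ;                         /* clear stale error flags */
                  glGetIntegerv(GL_MAJOR_VERSION, &maj);
                  glGetIntegerv(GL_MINOR_VERSION, &min);
                  if (glGetError() == GL_NO_ERROR) {
                      *major = (int)maj;
                      *minor = (int)min;
                  } else {                      /* pre-3.0 context */
                      const char *v = (const char *)glGetString(GL_VERSION);
                      const char *dot = v ? strchr(v, '.') : NULL;
                      *major = v ? atoi(v) : 0;
                      *minor = dot ? atoi(dot + 1) : 0;
                  }
              }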
              Last edited by BlackStar; 09-04-2009, 12:52 PM.


              • #52
                Originally posted by deanjo View Post
                No, more that the algorithm itself has to be of a parallel nature. For example, there are many ways of calculating pi: some algorithms are serial in nature, others are parallel. You could get a serial algorithm running on a GPU, but the result may be slower because of the code's serial nature; switch to an algorithm that is parallel in nature and you can see massive speed gains.
                Or, of course, if you have to do a lot of independent serial calculations and would like to do as many of them at the same time as possible... which is why OpenCL interests me.


                • #53
                  Originally posted by BlackStar View Post
                  Edit: Digging around, I think this post is the root of the GL_VERSION issue. The first number in "1.4 (2.1 Mesa 7.0.1)" is the highest OpenGL version that can be officially supported under the current GLX implementation when using indirect rendering. Which means that I either hit a Mesa bug or was just plain lucky by only using supported methods.
                  Okay, that does make a lot of sense.

                  Still, the whole confusion wouldn't even exist with glGetInteger(GL_[MAJOR|MINOR]).
                  You do realize that when that is implemented, glGetInteger(GL_MAJOR|GL_MINOR) would return version 1.4 in the situation where the GL_VERSION string is "1.4 (2.1 ...)", right? You would get exactly what you get with atoi or GLEE-style parsing.

                  So yeah, it might be a little more convenient for application developers, but it wouldn't actually change anything.


                  • #54
                    Hmm, as in with the principle GL_MAJOR=1, GL_MINOR=2, GL_MAJOR|GL_MINOR=3, then pull the first part for 1, the second part for 2, and the full version with 3?
                    Last edited by nanonyme; 09-07-2009, 12:56 PM.


                    • #55
                      Originally posted by nanonyme View Post
                      Hmm, as in with the principle GL_MAJOR=1, GL_MINOR=2, GL_MAJOR|GL_MINOR=3, then pull the first part for 1, the second part for 2, and the full version with 3?
                      Huh?


                      (I need to write at least 10 characters, so yeah: can you elaborate? Because I totally didn't understand your post)


                      • #56
                        I guess mostly what I had in mind was, e.g., OpenGL version 1.5.x: glGetVersion(GL_MAJOR) returns 1, glGetVersion(GL_MINOR) returns 5.x, and glGetVersion(GL_MAJOR|GL_MINOR) returns 1.5.x (as in, bitwise or on 01(bin) and 10(bin) => 11(bin)).
                        ps. This is useless.
