OpenCL Support Atop Gallium3D Is Here, Sort Of


  • #41
    Originally posted by bridgman View Post
    In principle, yes. In practice I expect there might be some tweaking required for each driver, in case (for example) the new state tracker used some Gallium3D API combinations which had not been exercised before.
    Okay, so basically Gallium3D exposes all the 'functions' that a graphics card is capable of performing, and a state tracker can then 'dictate' those functions (which would make a state tracker a driver for Gallium3D?)?



    • #42
      Originally posted by V!NCENT View Post
      Okay, so basically Gallium3D exposes all the 'functions' that a graphics card is capable of performing, and a state tracker can then 'dictate' those functions (which would make a state tracker a driver for Gallium3D?)?
      Quite close. All state trackers translate their command streams to a common low-level 'intermediate language' (IL). The various hardware drivers then translate this IL to a format that the hardware can understand and execute.

      The IL and the state trackers are shared between all Gallium drivers, while the hardware drivers are specific to each GPU. The idea is that this increases developer efficiency: if the IL is sufficiently abstract, then adding e.g. an OpenCL state tracker will (ideally) allow every Gallium driver to execute OpenCL code without modifying the driver! Ditto for OpenGL 3.x, EXA, OpenVG and so on.
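
      To picture the layering, here is a deliberately simplified, made-up sketch in C (the real Gallium interface is of course far richer than this): every hardware driver fills in the same small table of callbacks, and a state tracker only ever talks to that table.

      Code:
      /* Hypothetical, heavily simplified illustration of the Gallium split;
       * the real interface (pipe contexts, TGSI, ...) is far richer. */
      #include <stdio.h>

      /* The shared, hardware-independent boundary ("the IL"). */
      struct hw_driver {
          const char *name;
          void (*run_il)(const char *il_program); /* IL -> GPU code */
      };

      /* One small implementation per GPU family. */
      static void r300_run_il(const char *il) { printf("r300: compiling '%s'\n", il); }
      static void nv50_run_il(const char *il) { printf("nv50: compiling '%s'\n", il); }

      static struct hw_driver r300 = { "r300", r300_run_il };
      static struct hw_driver nv50 = { "nv50", nv50_run_il };

      /* A "state tracker": it understands an API (GL, CL, VG, ...) and emits
       * IL, but never contains hardware-specific code. */
      static void opencl_state_tracker(struct hw_driver *drv)
      {
          drv->run_il("ADD r0, r1, r2"); /* made-up IL text */
      }

      int main(void)
      {
          opencl_state_tracker(&r300); /* the same state tracker ... */
          opencl_state_tracker(&nv50); /* ... works with any driver */
          return 0;
      }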



      • #43
        Originally posted by BlackStar View Post
        The idea is that this increases developer efficiency: if the IL is sufficiently abstract, then adding e.g. an OpenCL state tracker will (ideally) allow every Gallium driver to execute OpenCL code without modifying the driver! Ditto for OpenGL 3.x, EXA, OpenVG and so on.
        OK, so basically the entire Linux graphical desktop (well, most parts of course) will soon be hardware accelerated by the graphics card? OpenGL, OpenVG, OpenCL... And this is all going to be compatible with pretty much any graphics card out there...

        So we will have an extremely fast desktop and applications (OpenCL) and less burden on the CPU, which will in turn be freed up, so we will also see a performance increase there as well?

        Man-o-man this is gonna be good

        Is the OpenCL state tracker a library? If so, and I wanted to code an app that takes advantage of OpenCL, would I have to link to the OpenCL lib? And would that be the Right Thing to do?



        • #44
          Originally posted by V!NCENT View Post
          OK, so basically the entire Linux graphical desktop (well, most parts of course) will soon be hardware accelerated by the graphics card? OpenGL, OpenVG, OpenCL... And this is all going to be compatible with pretty much any graphics card out there...
          The only issue is that you need Gallium drivers. Nouveau is already focusing on Gallium and there is an experimental r300g branch for R300-R500 cards from ATI. Intel hasn't decided whether it will ship Gallium drivers yet.

          Just note that binary drivers won't take advantage of this stack.

          Is the OpenCL state tracker a library? If so, and I wanted to code an app that takes advantage of OpenCL, would I have to link to the OpenCL lib? And would that be the Right Thing to do?
          Right now, every vendor ships its own OpenCL library. You can download an implementation from ATI that runs on the CPU or request access to an implementation from Nvidia that runs on the GPU. AFAIK, OpenCL through Gallium is not available yet.

          The only difficulty is that there is no common OpenCL library as there is for OpenGL (you link -lGL and don't care who implements it). However, as long as there are no ABI issues, you should be able to code your app using a specific OpenCL library and run it on another.
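
          For illustration, here is a minimal host-side query, a sketch that assumes a vendor SDK providing CL/cl.h and an OpenCL library to link against (e.g. -lOpenCL). The code itself is vendor-neutral; only the library you build against differs.

          Code:
          /* Minimal OpenCL platform query; build with e.g.
           *   gcc query.c -lOpenCL
           * against whichever vendor's OpenCL library is installed. */
          #include <stdio.h>
          #include <CL/cl.h>

          int main(void)
          {
              cl_platform_id platforms[8];
              cl_uint count = 0;

              if (clGetPlatformIDs(8, platforms, &count) != CL_SUCCESS || count == 0) {
                  fprintf(stderr, "no OpenCL platforms found\n");
                  return 1;
              }

              for (cl_uint i = 0; i < count; ++i) {
                  char name[256];
                  clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                                    sizeof(name), name, NULL);
                  printf("platform %u: %s\n", i, name);
              }
              return 0;
          }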



          • #45
            Originally posted by V!NCENT View Post
            So we will have an extremely fast desktop and applications (OpenCL) and less burden on the CPU, which will in turn be freed up, so we will also see a performance increase there as well?
            Only certain types of applications (those that benefit from parallel data operations) can be sped up, and that of course is only if the application is coded to take advantage of OpenCL (and coded in a manner that doesn't actually hurt performance).



            • #46
              Originally posted by BlackStar View Post
              This code will work correctly iff the driver follows the revised OpenGL 3.0 specs for glGetString and returns the version directly, i.e. "2.1".

              Right now, Mesa returns a string in the form "1.4 (Mesa 2.1)". Your code will parse this as (major, minor) = (1, 4), when the actual OpenGL version is 2.1 (1.4 is the server version, IIRC).
              You are absolutely mistaken.

              The string returned is "<OpenGL version> Mesa <Mesa version>". The string in the sample code which I posted above is what is returned by the r300 driver: "1.5 Mesa 7.6-devel". This means that the supported OpenGL version is 1.5, provided by a Mesa 7.6 development version.

              This is absolutely parsed correctly by both the atoi code I posted and the code that GLEE uses.

              Note that I'm talking about the string returned by glGetString(GL_VERSION). This is the final supported OpenGL version, which has nothing to do with the GLX version or with how the OpenGL version is negotiated conceptually between client and server.

              Edit: Oh, and there may be some strings in glxinfo which suggest that Mesa supports OpenGL 2.1 (I can't test right now, I'm not at home) - which is true, but completely beside the point. As long as the hardware driver only supports OpenGL 1.5, OpenGL 1.5 is what you will get, and that is what the version string correctly tells you. If you try to call 2.x functions, your program will crash. So again: everybody has always parsed the OpenGL version string like this, and it has always been correct. I'm really curious where this misconception comes from.
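
              For reference, the kind of parsing I mean amounts to something like this (just a sketch, not the exact code from my earlier post):

              Code:
              /* Parse glGetString(GL_VERSION), which starts with "<major>.<minor>",
               * e.g. "1.5 Mesa 7.6-devel" -> major 1, minor 5. */
              #include <stdlib.h>
              #include <string.h>
              #include <GL/gl.h>

              static void get_gl_version(int *major, int *minor)
              {
                  const char *ver = (const char *)glGetString(GL_VERSION);
                  const char *dot;

                  *major = *minor = 0;
                  if (!ver)
                      return;                 /* no current GL context */

                  *major = atoi(ver);         /* digits up to the first '.' */
                  dot = strchr(ver, '.');
                  if (dot)
                      *minor = atoi(dot + 1); /* digits after the first '.' */
              }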
              Last edited by nhaehnle; 04 September 2009, 10:06 AM.



              • #47
                Originally posted by nhaehnle View Post
                You are absolutely mistaken.

                The string returned is "<OpenGL version> Mesa <Mesa version>". The string in the sample code which I posted above is what is returned by the r300 driver: "1.5 Mesa 7.6-devel". This means that the supported OpenGL version is 1.5, provided by a Mesa 7.6 development version.

                This is absolutely parsed correctly by both the atoi code I posted and the code that GLEE uses.
                Checking back to my code comments from 2007, I found the following:

                On Mesa 7.0.1, Mesa/soft returns: "1.4 (2.1 Mesa 7.0.1)". On the same hardware (R500), install fglrx 8.2 and you get "2.1.7281 ...". Change to indirect and you get "1.4 (2.1.7281 ...)"

                So how do you interpret those strings? 1.4 only makes sense as the server version, because Mesa 7.0 sure as hell isn't limited to 1.4 in software rendering - and neither is R500 w/ fglrx.

                Misconception? Maybe.

                For the record, I *did* add a workaround to parse the "2.1" part of the strings above and the program worked correctly.
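
                Roughly, that workaround amounted to something like the following (reconstructed as a sketch, not the original code): if the string contains a parenthesis, parse the version that follows it instead.

                Code:
                /* Workaround sketch for strings like "1.4 (2.1 Mesa 7.0.1)":
                 * if a '(' is present, parse the version that follows it.
                 * "1.4 (2.1 Mesa 7.0.1)" -> 2.1,  "2.1.7281 ..." -> 2.1 */
                #include <stdlib.h>
                #include <string.h>

                static void parse_gl_version(const char *ver, int *major, int *minor)
                {
                    const char *paren = strchr(ver, '(');
                    const char *start = paren ? paren + 1 : ver;
                    const char *dot = strchr(start, '.');

                    *major = atoi(start);
                    *minor = dot ? atoi(dot + 1) : 0;
                }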
                Last edited by BlackStar; 04 September 2009, 10:55 AM.



                • #48
                  Originally posted by deanjo View Post
                  Only certain types of applications (those that benefit from parallel data operations) can be sped up, and that of course is only if the application is coded to take advantage of OpenCL (and coded in a manner that doesn't actually hurt performance).
                  Do you mean that the extra CPU burden/time for getting the compute kernel and data to the graphics card and back again has to be less than letting the CPU do the calculation(s) itself?



                  • #49
                    Originally posted by BlackStar View Post
                    Checking back to my code comments from 2007, I found the following:

                    On Mesa 7.0.1, Mesa/soft returns: "1.4 (2.1 Mesa 7.0.1)". On the same hardware (R500), install fglrx 8.2 and you get "2.1.7281 ...". Change to indirect and you get "1.4 (2.1.7281 ...)"

                    So how do you interpret those strings? 1.4 only makes sense as the server version, because Mesa 7.0 sure as hell isn't limited to 1.4 in software rendering - and neither is R500 w/ fglrx.
                    The version reported by Mesa/soft looks wrong. This may have been a Mesa bug, which has clearly been fixed since then. But a bug is a bug - under the changed specification where only the version is reported, that Mesa version would probably have reported "1.4" instead of "2.1". Then you wouldn't even have been able to add your workaround.

                    The fglrx version string looks perfectly okay.

                    As for the indirect version string, it's hard to judge what's going on from a distance.

                    Another possible explanation is that when you chose Mesa/soft, you actually got an indirect rendering context instead of a direct rendering software context. Then it might have been a libGL bug. Or maybe it wasn't a bug at all because not all parts of OpenGL 2.1 were properly supported in the GLX protocol? Maybe you were simply lucky in that the subset you used worked correctly.

                    In any case, this evidence tends to be in favor of having a non-restricted version string.



                    • #50
                      Originally posted by V!NCENT View Post
                      Do you mean that the extra CPU burden/time for getting the compute kernel and data to the graphics card and back again has to be less than letting the CPU do the calculation(s) itself?
                      No, more that the algorithm itself has to be of a parallel nature. For example, there are many ways of calculating pi: some algorithms are serial in nature, others are parallel. You could get the serial algorithms running on a GPU, but the results may be slower because of the code's serial nature; if you switch to an algorithm that is parallel in nature, you can see massive speed gains.
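
                      As a rough illustration (plain C rather than OpenCL, and only a sketch): in the Leibniz series every term can be computed independently, so the sum becomes a parallel reduction that maps well onto a GPU, while an iterative method such as Gauss-Legendre carries a dependency from one step to the next and cannot be split up that way.

                      Code:
                      /* Two ways to approximate pi (plain C for illustration). */
                      #include <stdio.h>
                      #include <math.h>

                      /* Gauss-Legendre: each iteration depends on the previous
                       * one, so the steps cannot run in parallel. */
                      static double pi_serial(void)
                      {
                          double a = 1.0, b = 1.0 / sqrt(2.0), t = 0.25, p = 1.0;
                          for (int i = 0; i < 4; ++i) {
                              double an = (a + b) / 2.0;
                              b = sqrt(a * b);
                              t -= p * (a - an) * (a - an);
                              a = an;
                              p *= 2.0;
                          }
                          return (a + b) * (a + b) / (4.0 * t);
                      }

                      /* Leibniz series: every term is independent of every other
                       * term, so each term could become one GPU work-item,
                       * followed by a parallel reduction of the sum. */
                      static double pi_parallel_friendly(long n_terms)
                      {
                          double sum = 0.0;
                          for (long k = 0; k < n_terms; ++k)
                              sum += (k % 2 ? -4.0 : 4.0) / (2 * k + 1);
                          return sum;
                      }

                      int main(void)
                      {
                          printf("serial (Gauss-Legendre):     %.10f\n", pi_serial());
                          printf("parallel-friendly (Leibniz): %.10f\n",
                                 pi_parallel_friendly(10000000L));
                          return 0;
                      }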

