
OpenCL Support Atop Gallium3D Is Here, Sort Of


  • nhaehnle
    replied
    Originally posted by BlackStar View Post
    This code will work correctly iff the driver follows the revised OpenGL 3.0 specs for glGetString and returns the version directly, i.e. "2.1".

    Right now, Mesa returns a string in the form "1.4 (Mesa 2.1)". Your code will parse this as (major, minor) = (1, 4), when the actual OpenGL version is 2.1 (1.4 is the server version, IIRC).
    You are absolutely mistaken.

    The string returned is "<OpenGL version> Mesa <Mesa version>". The string in the sample code which I posted above is what is returned by the r300 driver: "1.5 Mesa 7.6-devel". This means that the supported OpenGL version is 1.5, provided by a Mesa 7.6 development version.

    This is absolutely parsed correctly by both the atoi code I posted and the code that GLEE uses.

    Note that I'm talking about the string returned by glGetString(GL_VERSION). This is the final supported OpenGL version, which has nothing to do with the GLX version or with how the OpenGL version is negotiated conceptually between client and server.

    Edit: Oh, and there may be some strings in glxinfo which suggest something like Mesa supports OpenGL 2.1 (I can't test right now, not at home) - which is true, but completely beside the point. As long as the hardware driver only supports OpenGL 1.5, OpenGL 1.5 is what you will get, and what the version string correctly tells you. If you try to call 2.x functions, your program will crash. So again, everybody has always parsed the OpenGL version string like this and it has always been correct. I'm really curious where this misconception comes from.
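
    For illustration, here is a minimal sketch (not from the thread; the helper name is made up) of parsing the leading "major.minor" of the GL_VERSION string. It gives the same result whether the driver reports "1.5 Mesa 7.6-devel" or a bare "2.1":
    Code:
    #include <stdio.h>
    
    /* Hypothetical helper: read the leading "major.minor" of GL_VERSION. */
    static void parse_gl_version(const char *version, int *major, int *minor)
    {
      *major = 0;
      *minor = 0;
      if (version)
        sscanf(version, "%d.%d", major, minor);
    }
    
    int main()
    {
      int major, minor;
      parse_gl_version("1.5 Mesa 7.6-devel", &major, &minor);
      printf("%i.%i\n", major, minor);  /* prints 1.5 */
      return 0;
    }
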
    Last edited by nhaehnle; 04 September 2009, 10:06 AM.



  • deanjo
    replied
    Originally posted by V!NCENT View Post
    So we will have an extremely fast desktop and applications (OpenCL) and less burden on the CPU, which will in turn be freed up, so we will also see a performance increase there as well?
    Only certain types of applications (those that benefit from parallel data operations) can be sped up, and of course only if the application is coded to take advantage of OpenCL (and coded in a manner that doesn't actually hurt performance).
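
    As a rough sketch of that distinction (plain C, invented function names): the first loop below is independent per-element work that maps well onto OpenCL work-items, while the second has a loop-carried dependency and would gain nothing from offloading.
    Code:
    #include <stddef.h>
    
    /* Data-parallel: every iteration is independent, so each element could
     * be handled by its own OpenCL work-item. */
    void scale(float *data, size_t n, float factor)
    {
      size_t i;
      for (i = 0; i < n; i++)
        data[i] *= factor;
    }
    
    /* Serial: each step depends on the previous one, so a GPU can't help. */
    float running_filter(const float *data, size_t n)
    {
      float acc = 0.0f;
      size_t i;
      for (i = 0; i < n; i++)
        acc = acc * 0.5f + data[i];
      return acc;
    }
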



  • BlackStar
    replied
    Originally posted by V!NCENT View Post
    OK so basically soon the entire Linux graphical desktop (well, most parts of course) will be hardware accelerated by the graphics card? OpenGL, OpenVG, OpenCL... And this is all going to be very compatible with any graphics card out there...
    The only issue is that you need Gallium drivers. Nouveau is already focusing on Gallium and there is an experimental r300g branch for R300-R500 cards from Ati. Intel hasn't decided whether they'll ship Gallium drivers yet.

    Just note that binary drivers won't take advantage of this stack.

    Is the OpenCL state-tracker a library? If so, and I would want to code an app that takes advantage of OpenCL, would I then have to link to the OpenCL lib? And would this be the Right Thing to do?
    Right now, every vendor ships its own OpenCL libraries. You can download an implementation from Ati that runs on the CPU or request access to an implementation from Nvidia that runs on the GPU. AFAIK, OpenCL through Gallium is not available yet.

    The only difficulty is that there is no common OpenCL library as there is for OpenGL (you link -lGL and don't care who implements it). However, as long as there are no ABI issues, you should be able to code your app using a specific OpenCL library and run it on another.
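
    As a minimal sketch of that last point, using standard OpenCL 1.0 host calls (the include path and the library you link against depend on which vendor's SDK is installed), the same code asks whichever implementation it is linked to what it actually provides:
    Code:
    #include <stdio.h>
    #include <CL/cl.h>
    
    int main()
    {
      cl_platform_id platform;
      cl_uint count = 0;
      char name[256] = "";
    
      /* Ask whichever OpenCL implementation we linked against what it is. */
      if (clGetPlatformIDs(1, &platform, &count) != CL_SUCCESS || count == 0) {
        fprintf(stderr, "no OpenCL platform found\n");
        return 1;
      }
      clGetPlatformInfo(platform, CL_PLATFORM_NAME, sizeof(name), name, NULL);
      printf("OpenCL platform: %s\n", name);
      return 0;
    }
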



  • V!NCENT
    replied
    Originally posted by BlackStar View Post
    The idea is that this increases developer efficiency: if the IL is sufficiently abstract, then adding e.g. an OpenCL state tracker will (ideally) allow all Gallium drivers to execute OpenCL code without modifying the driver! Ditto for OpenGL 3.x, EXA, OpenVG etc etc etc.
    OK so basically soon the entire Linux graphical desktop (well, most parts of course) will be hardware accelerated by the graphics card? OpenGL, OpenVG, OpenCL... And this is all going to be very compatible with any graphics card out there...

    So we will have an extremely fast desktop and applications (OpenCL) and less burden on the CPU, which will in turn be freed up, so we will also see a performance increase there as well?

    Man-o-man this is gonna be good

    Is the OpenCL state-tracker a library? If so, and I would want to code an app that takes advantage of OpenCL, would I then have to link to the OpenCL lib? And would this be the Right Thing to do?



  • BlackStar
    replied
    Originally posted by V!NCENT View Post
    Okay, so basically Gallium3D exposes all the 'functions' that a graphics card is capable of performing, and a state-tracker can then 'dictate' these functions (which makes a state-tracker a driver for Gallium3D?)?
    Quite close. All state trackers translate their command streams to a common low-level 'intermediate language' (IL). The various hardware drivers then translate this IL to a format that the hardware can understand and execute.

    The IL and the state trackers are shared between all Gallium drivers, while the hardware drivers are specific for each GPU. The idea is that this increases developer efficiency: if the IL is sufficiently abstract, then adding e.g. an OpenCL state tracker will (ideally) allow all Gallium drivers to execute OpenCL code without modifying the driver! Ditto for OpenGL 3.x, EXA, OpenVG etc etc etc.
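
    A purely conceptual sketch of that layering in C (the names below are invented for illustration; they are not the real Gallium3D or TGSI interfaces):
    Code:
    #include <stdio.h>
    
    /* Conceptual sketch only: made-up names, not the real Gallium3D API. */
    struct il_program { const char *il_text; };      /* the shared IL      */
    
    struct gpu_driver {                               /* one per GPU family */
      const char *name;
      void (*execute)(const struct il_program *p);
    };
    
    /* A state tracker (OpenGL, OpenVG, OpenCL, ...) only ever produces IL,
     * never hardware-specific code. */
    static struct il_program state_tracker_compile(const char *api_program)
    {
      struct il_program p = { api_program };
      return p;
    }
    
    static void r300_execute(const struct il_program *p)
    {
      printf("r300g executes IL: %s\n", p->il_text);
    }
    
    int main()
    {
      struct gpu_driver r300 = { "r300g", r300_execute };
      struct il_program prog = state_tracker_compile("scale a vector");
    
      /* A new state tracker only needs to emit IL; existing drivers can
       * (ideally) run it unchanged. */
      r300.execute(&prog);
      return 0;
    }
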



  • V!NCENT
    replied
    Originally posted by bridgman View Post
    In principle, yes. In practice I expect there might be some tweaking required for each driver, in case (for example) the new state tracker used some Gallium3D API combinations which had not been exercised before.
    Okay, so basically Gallium3D exposes all the 'functions' that a graphics card is capable of performing, and a state-tracker can then 'dictate' these functions (which makes a state-tracker a driver for Gallium3D?)?



  • MrCooper
    replied
    Originally posted by bridgman View Post
    In practice I expect there might be some tweaking required for each driver, in case (for example) the new state tracker used some Gallium3D API combinations which had not been exercised before.
    That may sometimes be the case (for minimalistic drivers), plus there's always a little glue code needed (in what's called 'the winsys' for historical reasons). However, adding support for a Gallium state tracker generally requires only about as much glue code as adding support for an OpenGL extension does in a traditional Mesa driver.

    The first state tracker is Mesa itself, which exposes an OpenGL API and which can run over either the classic Mesa HW driver API or over the Gallium3D API.
    Strictly speaking, the OpenGL state tracker provides the traditional Mesa driver interface and translates that to the Gallium driver interface. For OpenGL 3.x it might make more sense to translate from OpenGL to Gallium directly though.

    Applications would not normally run over Gallium3D directly. Think of Gallium3D as a low level API which encapsulates HW details to simplify the writing of higher level drivers, ie an open and modern "hardware abstraction layer" API for GPUs. An application written for a specific state-tracker API should work on every card that has a fully implemented Gallium3D driver.
    Right, as far as apps are concerned, Gallium is an implementation detail they shouldn't need to care about.



  • BlackStar
    replied
    Originally posted by nhaehnle View Post
    Now I'm really curious. How does atoi not do the right thing? Maybe we have different interpretations of what the natural thing to do is. Here's what I just hacked up as a simple test program:
    [...]
    That definitely parses the string correctly.
    This code will work correctly iff the driver follows the revised OpenGL 3.0 specs for glGetString and returns the version directly, i.e. "2.1".

    Right now, Mesa returns a string in the form "1.4 (Mesa 2.1)". Your code will parse this as (major, minor) = (1, 4), when the actual OpenGL version is 2.1 (1.4 is the server version, IIRC).

    GLEE falls into the same trap:
    Code:
    const char* version_string = (const char*)glGetString(GL_VERSION);
    int major = version_string[0] - '0'; /* reads fixed character positions */
    int minor = version_string[2] - '0';



  • nhaehnle
    replied
    Originally posted by BlackStar View Post
    It's not that bad: using glGetInteger(GL_MAJOR) will return an INVALID_ENUM error if it's not supported and you can automatically fall back to glGetString(GL_VERSION) (it will not blow up or anything).
    Okay, that's true.

    That's the problem: atoi won't do the right thing with current Mesa! It will return the GLX version instead of the OpenGL version. Even Carmack fell for this back in the original Doom 3 release (IIRC), while popular libraries like GLEE *still* don't get this right.
    Now I'm really curious. How does atoi not do the right thing? Maybe we have different interpretations of what the natural thing to do is. Here's what I just hacked up as a simple test program:
    Code:
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    
    static const char * version = "1.5 Mesa 7.6-devel";
    
    int main()
    {
      int major = atoi(version);
      int minor = 0;
      const char * next = strchr(version, '.');
      if (next) {
        minor = atoi(next+1);
      }
      printf("%i.%i\n", major, minor);
      return 0;
    }
    That definitely parses the string correctly.



  • bridgman
    replied
    Originally posted by V!NCENT View Post
    If, let's say, there is a Gallium3D driver for every card out there and someone makes an OpenCL state-tracker, then do all cards suddenly support OpenCL? Or does each driver need code for each state-tracker in order to use them?
    In principle, yes. In practice I expect there might be some tweaking required for each driver, in case (for example) the new state tracker used some Gallium3D API combinations which had not been exercised before.

    A state tracker is something that sits on top of the Gallium3D API and exposes a general purpose API to applications. The first state tracker is Mesa itself, which exposes an OpenGL API and which can run over either the classic Mesa HW driver API or over the Gallium3D API.

    Originally posted by V!NCENT View Post
    And is Gallium3D an API for applications so that an application only has to be written for a state-tracker so that it works on every graphics card that has a Gallium3D driver?
    Applications would not normally run over Gallium3D directly. Think of Gallium3D as a low level API which encapsulates HW details to simplify the writing of higher level drivers, ie an open and modern "hardware abstraction layer" API for GPUs. An application written for a specific state-tracker API should work on every card that has a fully implemented Gallium3D driver.
    Last edited by bridgman; 03 September 2009, 11:53 AM.

