GPGPU on R600 hardware, DRI2 ...

  • GPGPU on R600 hardware, DRI2 ...

    Hi everyone,

    I have a few questions about DRI2 and GPGPU on R600 hardware. (I hope this is the right place for this post.)

    How is synchronization of multiple accesses to the hardware handled?
    I think I read that there was a global lock in DRI1 for hardware access.
    But if that is the case, what happens if a process holding the lock doesn't release it?
    How does it work in a DRI2 environment?
    Is it possible that shaders from multiple users are running at the same time?

    How is the hardware context managed for multiple users?
    By hardware context I mean registers, memory, and so on.
    Every GPU user has its own state (for example, different values in the PGM_START_* registers on R600).
    Could a shader program access data (textures, ...) from a different GPU user?

    As far as I know, there is no open-source GPGPU stack available (at least for Radeon hardware).
    What would be necessary to do GPGPU on R600 Radeon hardware?
    1. Generate a device-specific shader program (using texture instructions for input and output data?).
    2. Upload the shader program and the data to the GPU using buffer objects (via TTM/libdrm?). **
    3. Create GPU commands which initialize the hardware (setting PGM_START_*, for example) and upload them via libdrm. **
    4. Start one or multiple instances of my shader program with parameters (parameters, because each instance should calculate something different). **
    5. Wait for completion. **
    6. Map the results into the process's address space and access the output.
    7. Release all buffers.

    ** how does this work?

    Apart from communicating with the kernel (via libdrm), is there a need to work with Mesa, the X server, GLX, ...?

    Sidenote:
    I've tried to read some of the Mesa code for the Radeon driver, but it confused me a bit (chip-specific vs. general code, DRI1, DRI2, GLX, classic Mesa vs. Gallium, winsys, ...).
    Is there documentation of the Mesa code somewhere?


    Btw: thanks to the phoronix.com guys for their work - I follow the Phoronix articles/news regularly.

  • #2
    Here's a first attempt at answers, with the caveat that my DRI knowledge may be out of date or wrong:

    Originally posted by qwertz View Post
    How is synchronization of multiple accesses to the hardware handled?
    I think I read that there was a global lock in DRI1 for hardware access.
    But if that is the case, what happens if a process holding the lock doesn't release it?
    How does it work in a DRI2 environment?
    Is it possible that shaders from multiple users are running at the same time?
    Yes, DRI1 had a global lock which was used to ensure that only one client was using the GPU at a time, and that client contexts did not get mixed up. IIRC the basic protocol was that one client program (call it A) could determine if another client had been given access since A last held the lock, and in that case A would be expected to reprogram all of the state information.
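
    As a rough illustration of that protocol (not the actual driver code): drmGetLock()/drmUnlock() are the real libdrm entry points for the DRI1 hardware lock, while the "who used the GPU last" check and the helper functions below are hypothetical stand-ins for what the classic drivers keep in their shared area.

    #include <xf86drm.h>

    /* Hypothetical helpers standing in for real driver code. */
    static void reemit_all_state(int fd) { /* re-program every GPU register we rely on */ }
    static void emit_commands(int fd)    { /* this client's actual rendering/compute   */ }

    /* Simplified: classic drivers keep the last lock owner in the shared SAREA. */
    static drm_context_t last_owner;

    static void submit_with_lock(int fd, drm_context_t ctx)
    {
        drmGetLock(fd, ctx, 0);        /* take the single global hardware lock     */

        if (last_owner != ctx) {       /* another client ran since we last held it */
            reemit_all_state(fd);      /* so our GPU state may be gone: re-emit    */
            last_owner = ctx;
        }

        emit_commands(fd);

        drmUnlock(fd, ctx);            /* release so other clients can run         */
    }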

    DRI2 removed that lock, with the short-term result being that each client reprograms the state each time it submits another buffer of commands, which is not very efficient. I think the direction is to move context management into the kernel driver so that the kernel can maintain the most recent state for each client, keep track of which client submitted the latest request, and swap state info if required, but I don't think any work is being done in that area right now.
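
    In other words, a DRI2-era client ends up doing something like the sketch below (all of the functions here are hypothetical placeholders, not real libdrm calls): no lock is taken, the kernel serializes submissions itself, and every command buffer simply begins with the full state setup again.

    struct cs;                                     /* opaque command stream (placeholder) */
    static void cs_emit_full_state(struct cs *cs) { /* emit every register we depend on   */ }
    static void cs_emit_work(struct cs *cs)       { /* the actual draw/compute commands   */ }
    static void cs_submit(struct cs *cs)          { /* hand the buffer to the kernel      */ }

    static void submit_dri2_style(struct cs *cs)
    {
        cs_emit_full_state(cs);   /* redundant most of the time, but always safe */
        cs_emit_work(cs);
        cs_submit(cs);            /* no lock: the kernel orders submissions      */
    }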

    Originally posted by qwertz View Post
    How is the hardware context managed for multiple users?
    By hardware context I mean registers, memory, and so on.
    Every GPU user has its own state (for example, different values in the PGM_START_* registers on R600).
    Could a shader program access data (textures, ...) from a different GPU user?
    The memory manager / command submission code in the kernel checks each command buffer as it is submitted to make sure that commands only access their own buffers, but there is always a tradeoff between runtime and thoroughness of checking, I guess.
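
    Conceptually the check looks something like this (hypothetical types and names, not the real radeon kernel code): every buffer handle referenced by the submitted command stream is resolved through the submitting client's own handle table, so a handle owned by another client simply doesn't resolve.

    struct buffer;                                        /* a GPU buffer object        */
    struct reloc  { unsigned handle; };                   /* buffer reference in the CS */
    struct client { struct buffer *(*lookup)(struct client *c, unsigned handle); };

    /* Reject the whole command buffer if any referenced handle does not
     * belong to the client that submitted it. */
    static int check_cs(struct client *cl, const struct reloc *relocs, int count)
    {
        for (int i = 0; i < count; i++)
            if (!cl->lookup(cl, relocs[i].handle))
                return -1;
        return 0;
    }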

    Originally posted by qwertz View Post
    As far as I know, there is no open-source GPGPU stack available (at least for Radeon hardware).
    What would be necessary to do GPGPU on R600 Radeon hardware?
    1. Generate a device-specific shader program (using texture instructions for input and output data?).
    2. Upload the shader program and the data to the GPU using buffer objects (via TTM/libdrm?). **
    3. Create GPU commands which initialize the hardware (setting PGM_START_*, for example) and upload them via libdrm. **
    4. Start one or multiple instances of my shader program with parameters (parameters, because each instance should calculate something different). **
    5. Wait for completion. **
    6. Map the results into the process's address space and access the output.
    7. Release all buffers.

    ** how does this work?
    That's pretty close. Normally you use a drawing command (e.g. "draw a rectangle"), and that drawing command then invokes your compute shader program on each pixel in the target rectangle, aka the results buffer. The shader program, in turn, can perform texture reads (this works better with texture filtering turned off) in order to obtain data.
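
    A hedged illustration of that trick through desktop GL (the thread itself is aiming lower, straight at libdrm, so treat this only as a picture of the technique): the input data lives in a texture with filtering disabled so every texel comes back exactly as it was uploaded, and the "compute kernel" is a fragment shader that runs once per covered pixel and writes its result into the render target.

    #include <GL/gl.h>

    /* One "kernel invocation" per drawn pixel: fetch a texel, do some math,
     * write the result into the results buffer (the render target). */
    static const char *frag_src =
        "uniform sampler2D data;\n"
        "void main() {\n"
        "    vec4 x = texture2D(data, gl_TexCoord[0].xy);\n"
        "    gl_FragColor = x * x;   /* the actual computation */\n"
        "}\n";

    /* Filtering off: texel values must not be interpolated on the way in. */
    static void make_data_texture_exact(GLuint tex)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    }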

    Originally posted by qwertz View Post
    Apart from communicating with the kernel (via libdrm), is there a need to work with Mesa, the X server, GLX, ...?
    You could probably run the compute client directly over the kernel driver and libdrm. Look in the mesa/r600demo project for a simple example.
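
    A minimal sketch of what the very first steps of such a libdrm-only client could look like, assuming the radeon KMS/GEM kernel interface (the ioctl and constants below come from radeon_drm.h; mesa/r600demo is the place to see the complete, working version):

    #include <stdio.h>
    #include <string.h>
    #include <xf86drm.h>
    #include <radeon_drm.h>   /* found via the libdrm include path */

    int main(void)
    {
        /* open the radeon DRM device */
        int fd = drmOpen("radeon", NULL);
        if (fd < 0) {
            fprintf(stderr, "cannot open DRM device\n");
            return 1;
        }

        /* ask the kernel memory manager for a buffer object, e.g. to hold
         * the shader program or the input/output data */
        struct drm_radeon_gem_create req;
        memset(&req, 0, sizeof(req));
        req.size           = 64 * 1024;              /* 64 KiB                 */
        req.alignment      = 4096;
        req.initial_domain = RADEON_GEM_DOMAIN_GTT;  /* GPU-visible system RAM */

        if (drmIoctl(fd, DRM_IOCTL_RADEON_GEM_CREATE, &req) == 0)
            printf("created buffer object, handle %u\n", req.handle);

        /* next steps (not shown): map the BO and copy shader/data in, build a
         * command stream that points PGM_START_* at it, submit it with the CS
         * ioctl, wait for completion, then map the results back */
        drmClose(fd);
        return 0;
    }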

    Originally posted by qwertz View Post
    Sidenote:
    I've tried to read some of the Mesa code for the Radeon driver, but it confused me a bit (chip-specific vs. general code, DRI1, DRI2, GLX, classic Mesa vs. Gallium, winsys, ...).
    Is there documentation of the Mesa code somewhere?
    Going forward, I think the plan is still to implement an OpenCL client running over the existing/WIP Gallium3D drivers (think of Gallium3D as a standard low level hardware driver that can support a variety of APIs).

    Gallium3D drivers are maintained in the Mesa tree since GL is the first and most important client (aka "state tracker") for Gallium3D today. Look in src/gallium/drivers and src/gallium/docs in the mesa/mesa project. Probably makes sense to get your head around Gallium3D first then move out from there.

    Gallium3D clients are in three different locations, I guess:

    - the common Mesa code works with both the "classic" Mesa HW drivers and (via adapter code) Gallium3D drivers; look in src/mesa in the mesa/mesa project

    - most of the other state trackers are in the src/gallium/state_trackers folder in the mesa/mesa project

    - the WIP OpenCL client is in the mesa/clover project

    (when I say "look in xx/yy project" I mean browse to http://cgit.freedesktop.org/xx/yy)


    • #3
      First of all, thanks for your answers.

      Originally posted by bridgman View Post
      That's pretty close. Normally you use a drawing command (e.g. "draw a rectangle"), and that drawing command then invokes your compute shader program on each pixel in the target rectangle, aka the results buffer. The shader program, in turn, can perform texture reads (this works better with texture filtering turned off) in order to obtain data.
      According to r600isa.pdf, compute programs should be vertex shaders, not pixel shaders. But I got the idea. Using a vertex shader means I have to send a vertex down the pipeline for every compute-program instance I want to start!?

      Is it possible to run only vertex shaders without further processing, i.e. with the pixel shaders disabled?
