
Thread: Former AMD Developer: OpenGL Is Broken

  1. #121
    Join Date
    Oct 2008
    Posts
    3,247

    Default

    Quote Originally Posted by Kraut View Post
Most smaller changes won't require rebuilding the shader binaries. Also, if we ever end up with mature OSS drivers, the frequency of changes that need a rebuild will drop dramatically.
    And I don't think YOUR driver-updating behavior is a good argument against shader binaries.
    The Mesa implementation checks the git SHA1 tag and automatically clears out anything that was cached against another version, no matter how small and insignificant the change was.

    The reason is that it's hard to tell whether a change is important without someone carefully reviewing it, and no one wants to do that for every single commit that goes into Mesa. So it always wipes the cache automatically, even when that isn't necessary.
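    A minimal sketch of that invalidation policy (the function name and layout are illustrative, not Mesa's actual code): the cache stores the driver build id it was written by, here the git SHA1, and any mismatch on startup discards everything.

    ```c
    #include <string.h>

    /* Hypothetical sketch of Mesa-style cache invalidation: the on-disk
     * shader cache stores the build id of the driver that wrote it (Mesa
     * uses the git SHA1 it was built from).  On startup the driver
     * compares the stored id with its own; any mismatch wipes the whole
     * cache, regardless of how small the change actually was. */

    /* Returns 1 if the cached binaries may be reused, 0 if they must be
     * discarded and recompiled from source. */
    int cache_is_valid(const char *stored_build_id,
                       const char *current_build_id)
    {
        return strcmp(stored_build_id, current_build_id) == 0;
    }
    ```

    The coarse string comparison is the whole point: it needs no per-commit review to be safe, at the cost of throwing away work after every update.
    
    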

    The cost of parsing GLSL is insignificant. I'd like to quote one of the AMD developers on that, but can't find his comment anymore.
    The cost is implementation-specific. The Mesa drivers are fairly slow at compiling. Some drivers are very slow with certain features that may appear in your shaders, and very fast otherwise. That's part of the problem: you can't assume your shader compile will be fast without testing it on multiple hardware and driver combinations.
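    Since the only reliable answer is to measure on each target, a small timing helper can be sketched like this. In a real application the callback would wrap glCompileShader()/glLinkProgram(); here it is kept generic (and NULL-safe) so the snippet stands alone without a GL context.

    ```c
    #include <time.h>

    /* Times an arbitrary callback in milliseconds using the monotonic
     * clock.  To profile a driver's compiler, pass a callback that
     * compiles and links your actual shaders, then compare the numbers
     * across the hardware/driver combinations you ship on. */
    double time_call_ms(void (*fn)(void *), void *arg)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (fn)
            fn(arg);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) * 1000.0
             + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    }
    ```
    
    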
    Last edited by smitty3268; 06-07-2014 at 09:13 PM.

  2. #122
    Join Date
    Jun 2010
    Posts
    172

    Default

    OpenGL is not broken, and if AMD has anything to complain about, it's their own implementation.

    But OpenGL is outdated in the same way Direct3D, OpenCL, Mantle and Metal are: they are all old-style APIs that work by sending thousands of small requests from the CPU to the GPU. Reducing the API overhead only lets you make some more API calls; it still doesn't allow us to utilize the GPUs efficiently.

    What we really need is a low-level, universal GPU programming language where we can implement the graphics pipeline ourselves, or do compute. A pipeline of 6-7 rigid shader types accessing predefined data is inefficient. "Bindless graphics" extensions with pointers and customized data structures are a step in the right direction. Features in CUDA like controlling threads, transferring data from the GPU, etc. are getting close.

    With a low-level language, driver development will be easier, and anyone can create their own framework on top of it. Heck, even Apple could create its own "Metal" on top of that...

  3. #123
    Join Date
    Jan 2013
    Posts
    53

    Default

    Quote Originally Posted by efikkan View Post
    OpenGL is not broken, and if AMD has anything to complain about, it's their own implementation.

    But OpenGL is outdated in the same way Direct3D, OpenCL, Mantle and Metal are: they are all old-style APIs that work by sending thousands of small requests from the CPU to the GPU. Reducing the API overhead only lets you make some more API calls; it still doesn't allow us to utilize the GPUs efficiently.

    What we really need is a low-level, universal GPU programming language where we can implement the graphics pipeline ourselves, or do compute. A pipeline of 6-7 rigid shader types accessing predefined data is inefficient. "Bindless graphics" extensions with pointers and customized data structures are a step in the right direction. Features in CUDA like controlling threads, transferring data from the GPU, etc. are getting close.

    With a low-level language, driver development will be easier, and anyone can create their own framework on top of it. Heck, even Apple could create its own "Metal" on top of that...
    The APIs don't specify how exactly the commands are sent to the GPUs.
    Most OpenGL/Direct3D implementations collect your draw calls and other commands; after some accumulation they hand this batch to a driver thread, which in turn assembles and submits data packets to the GPU command queues.
    There is a lot of validation going on in between, and it eats a lot of CPU time.

    We do utilize the GPUs, but with old-style APIs we waste a lot of CPU power on validation while failing to use multiple cores effectively. There is also a lot of latency that doesn't need to exist.

    Mantle actually lets you fill the command queues directly, with only a thin abstraction. The same goes for the DMA copy queue.
    There are no driver threads; Mantle runs only in the application thread(s) you call it from.
    There isn't much public information about Mantle, and it surely comes (or will come) with disadvantages. But calling it an old-style API makes you look like somebody who hasn't done his homework.
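    A rough sketch of what filling a command queue directly from the application thread could look like (purely illustrative; Mantle's real command-buffer format is not public): the application writes packed commands straight into a ring buffer that the GPU front-end consumes, with no driver thread in between.

    ```c
    #include <stdint.h>

    /* Hypothetical command-queue layout.  Opcodes and fields are made
     * up for illustration; the point is that the application thread
     * writes into the ring itself instead of handing work to a driver
     * thread for validation and repackaging. */

    #define CMD_RING_SIZE 64u  /* must be a power of two */

    typedef struct {
        uint32_t opcode;    /* e.g. draw, copy, barrier */
        uint32_t payload;   /* packed arguments */
    } gpu_cmd;

    typedef struct {
        gpu_cmd  ring[CMD_RING_SIZE];
        uint32_t head;      /* next slot the application writes */
        uint32_t tail;      /* next slot the GPU would consume */
    } cmd_queue;

    /* Returns 0 on success, -1 if the ring is full. */
    int cmd_queue_push(cmd_queue *q, uint32_t opcode, uint32_t payload)
    {
        if (q->head - q->tail == CMD_RING_SIZE)
            return -1;
        gpu_cmd *slot = &q->ring[q->head & (CMD_RING_SIZE - 1u)];
        slot->opcode = opcode;
        slot->payload = payload;
        q->head++;  /* on real hardware this would end in a doorbell write */
        return 0;
    }

    /* Small scenario: push two commands, report how many are pending. */
    int cmd_queue_demo(void)
    {
        cmd_queue q = {0};
        if (cmd_queue_push(&q, 1, 42) != 0) return -1;
        if (cmd_queue_push(&q, 2, 7) != 0) return -1;
        return (int)(q.head - q.tail);
    }
    ```

    Because nothing in the push path validates the command, the cost per call is a couple of stores: that is the trade Mantle makes, moving correctness responsibility to the application.
    
    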
