
Thread: OpenGL 4.5 Released With New Features

  1. #61
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,541

    Default

    Quote Originally Posted by log0 View Post
    Modern GPUs work by processing command buffers. Those buffers are filled by the driver using the CPU, but those are details....

    What they want is to be able to write these command buffers directly with some kind of common GPU command language -- a GPU instruction set, I guess.
    That's one of the features HSA includes -- the ability for GPUs and CPUs to queue and run commands using a common mechanism. That's one of the nice benefits of userspace queues -- they allow the GPU to self-dispatch without having to involve the CPU.

    Note that the GPU shader ISA doesn't actually go in the command buffers -- the code that runs on the shader core goes in a separate area of memory, then you queue commands to the GPU which (a) tell the GPU where to find the program code and (b) tell the GPU to dispatch that program on each element in an N-dimensional range of data.
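
    To make the split concrete, here is a minimal sketch in C (purely illustrative -- the record layout below is invented for this example and does not match any real driver's or HSA's packet format) of a command-buffer entry that only references separately stored shader code and describes the dispatch grid:

    Code:
        #include <stdint.h>

        /* Hypothetical dispatch record as it might sit in a command buffer.
         * The shader machine code itself lives elsewhere in GPU-visible
         * memory; the record only points at it. */
        struct dispatch_cmd {
            uint64_t code_object_addr;   /* (a) where the GPU finds the program code */
            uint64_t kernarg_addr;       /* pointer to the kernel's argument block   */
            uint32_t grid_size[3];       /* (b) N-dimensional range to dispatch over */
            uint32_t workgroup_size[3];  /* how the range is tiled into workgroups   */
        };

        /* The command buffer is then just an array of such records that the
         * GPU's command processor walks through. */
        struct command_buffer {
            struct dispatch_cmd cmds[256];
            uint32_t            count;
        };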

  2. #62
    Join Date
    Jul 2010
    Posts
    520

    Default

    Quote Originally Posted by zanny View Post
    None of these are really good options. You don't want a native hardware interpreter; you just want each manufacturer to make their GPUs, release the assembly documentation, and implement, say, an LLVM module to compile to it from some GPU-oriented IR code like GLSL. You could then do it at runtime or at compile time.

    Basically what the AMD LLVM module already is, except that instead of GLSL -> IR -> AMD ISA it would be <common/any> GPU language -> IR -> <vendor> ISA (I have never read into the AMD LLVM internals, so I don't know whether they use LLVM IR or their own).
    OpenCL already has a portable (LLVM-based) IR. Now you would need to somehow expose the fixed-function units (tessellation, rasterization, blending) to OpenCL. But then there are also things like Hi-Z buffers, vertex transform caches and god knows what else...
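
    As a rough illustration of that gap (ordinary OpenCL C, nothing vendor-specific): a compute kernel only expresses ALU work and memory accesses, so there is simply no place in this model to ask for the tessellator, the rasterizer or the blending hardware.

    Code:
        /* OpenCL C: plain data-parallel compute over a 1D range. */
        __kernel void scale(__global float *dst,
                            __global const float *src,
                            const float factor)
        {
            size_t i = get_global_id(0);   /* which element this work-item handles */
            dst[i] = src[i] * factor;      /* pure ALU + memory traffic; no        */
                                           /* rasterizer, blending or tessellation */
                                           /* is reachable from here               */
        }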

  3. #63
    Join Date
    Jul 2010
    Posts
    520

    Default

    Quote Originally Posted by bridgman View Post
    That's one of the features HSA includes -- the ability for GPUs and CPUs to queue and run commands using a common mechanism. That's one of the nice benefits of userspace queues -- they allow the GPU to self-dispatch without having to involve the CPU.

    Note that the GPU shader ISA doesn't actually go in the command buffers -- the code that runs on the shader core goes in a separate area of memory, then you queue commands to the GPU which (a) tell the GPU where to find the program code and (b) tell the GPU to dispatch that program on each element in an N-dimensional range of data.
    Yeah, the shaders as well as the (vertex and texture) data will be separate from the commands. I kinda skipped this part. The interesting bit for me is whether it is at all possible to get the GPU commands that tell the GPU what programs to run on what data standardised.

  4. #64
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,541

    Default

    Quote Originally Posted by log0 View Post
    Yeah, the shaders as well as the (vertex and texture) data will be separate from the commands. I kinda skipped this part. The interesting bit for me is whether it is at all possible to get the GPU commands that tell the GPU what programs to run on what data standardised.
    It will be standardized across HSA devices - look up the Architected Queuing Language (AQL).
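
    For reference, the dispatch packet AQL defines looks roughly like this -- a simplified sketch modeled on the HSA specification, with several fields trimmed and an illustrative type name, so consult the spec for the authoritative layout:

    Code:
        #include <stdint.h>

        /* Simplified AQL kernel dispatch packet (fields trimmed for brevity).
         * Every HSA device consumes packets in this one architected format,
         * which is what makes the dispatch side vendor-neutral. */
        typedef struct {
            uint16_t header;            /* packet type, fence scopes, barrier bit */
            uint16_t setup;             /* dimensionality of the dispatch (1-3D)  */
            uint16_t workgroup_size_x;  /* workgroup dimensions...                */
            uint16_t workgroup_size_y;
            uint16_t workgroup_size_z;
            uint32_t grid_size_x;       /* ...and the overall grid dimensions     */
            uint32_t grid_size_y;
            uint32_t grid_size_z;
            uint64_t kernel_object;     /* handle of the finalized kernel code    */
            uint64_t kernarg_address;   /* address of the kernel argument block   */
            uint64_t completion_signal; /* signal to notify when the work is done */
        } aql_kernel_dispatch_packet;   /* illustrative name, not the hsa.h type  */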

  5. #65
    Join Date
    Jul 2008
    Posts
    891

    Default

    So OK, Nvidia writes a driver and then Khronos defines the features of that driver as the new OpenGL version; no wonder Nvidia is always ahead of AMD.

    Maybe we should rename OpenGL to the Nvidia API, port Mantle to Linux and call it the AMD API. Then, if one vendor supports the other's vendor-specific API somewhat better than the other vendor supports theirs, we can give them some respect for that. But developers would know that there is no vendor-independent API and that they have to support more than one if they want the maximum from both vendors' hardware.

    And of course Nvidia saw these new specs and implemented them within a few hours. Next time, just release it the other way around and don't pretend that the one was made before the other when it's not true.

  6. #66
    Join Date
    Jan 2013
    Posts
    53

    Default

    Quote Originally Posted by blackiwid View Post
    So OK, Nvidia writes a driver and then Khronos defines the features of that driver as the new OpenGL version; no wonder Nvidia is always ahead of AMD.

    Maybe we should rename OpenGL to the Nvidia API, port Mantle to Linux and call it the AMD API. Then, if one vendor supports the other's vendor-specific API somewhat better than the other vendor supports theirs, we can give them some respect for that. But developers would know that there is no vendor-independent API and that they have to support more than one if they want the maximum from both vendors' hardware.

    And of course Nvidia saw these new specs and implemented them within a few hours. Next time, just release it the other way around and don't pretend that the one was made before the other when it's not true.
    You may want to take a look at the contributor lists (which include the company/organisation each contributor works for) in each extension specification.
