OpenGL 4.5 Released With New Features


  • #51
    If there were a new graphics API, how would it affect the free Mesa drivers? I mean, would they have a harder time keeping up, or might a simpler API actually make it easier to keep up?



    • #52
      Originally posted by zanny View Post
      You want GPU assembly. I do too. We already have it, though - Intel and AMD publish their ISAs. But Nvidia does not, and obfuscates their microcode, and makes it a pain in the ass to really know how the GPU even works. And all the ARM vendors are even worse.

      It is insane to think you have computers, which modern GPUs pretty much are - self-contained computers - where the instruction set is not public.

      Because GPUs should really be behaving like CPUs now. Instruction sets with extensions (preferably a shared common ISA, with each vendor creating and proposing extensions... like how OpenGL is run) that the community and companies can just build compilers for. And then we could write whatever higher-level languages we want on top of the GPU, and let those compete on their own.

      The problem right now really is a lack of competition. If nobody can see the ISA, nobody can emit shaders besides the proprietary driver, and thus nobody can create alternative implementations.
      Wait, are you saying that it's possible to run a computer using just a GPU, with no CPU? Like, BIOS, OS, and then Application?



      • #53
        Originally posted by profoundWHALE View Post
        Wait, are you saying that it's possible to run a computer using just a GPU, with no CPU? Like, BIOS, OS, and then Application?
        There's a huge gap between them. If a GPU can run an OS "by itself", it's probably a SoC.



        • #54
          Originally posted by zanny View Post
          You want GPU assembly. I do too. We already have it, though - Intel and AMD publish their ISAs. But Nvidia does not, and obfuscates their microcode, and makes it a pain in the ass to really know how the GPU even works. And all the ARM vendors are even worse.

          It is insane to think you have computers, which modern GPUs pretty much are - self-contained computers - where the instruction set is not public.
          Actually no, I'm talking about a low-level programming language, something like a superset of GLSL in the style of CUDA.

          We need to remove the bottleneck of going through the driver and let shader programs control the pipeline themselves, which is the next logical step in GPU programming. Most of the functionality is already present in the latest Nvidia GPUs.

          And BTW, Nvidia does supply GPU assembly.
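
          For what it's worth, core OpenGL 4.3 already exposes a piece of this: a compute shader can write draw parameters into a buffer and an indirect draw then consumes them, so the CPU never touches the values. A rough sketch, assuming a GL 4.3+ context and a loader like GLEW, with program and buffer creation elided:

          Code:
          #include <GL/glew.h>  /* assumes a GL 4.3+ context is already current */

          /* Sketch: the GPU decides what to draw. A compute shader fills a
           * DrawArraysIndirectCommand in indirectBuf; the indirect draw then
           * reads it straight from GPU memory. All three GLuint handles are
           * assumed to have been created elsewhere. */
          void gpu_driven_draw(GLuint cullProgram, GLuint drawProgram, GLuint indirectBuf)
          {
              /* compute pass: write {count, instanceCount, first, baseInstance} */
              glUseProgram(cullProgram);
              glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, indirectBuf);
              glDispatchCompute(1, 1, 1);
              glMemoryBarrier(GL_COMMAND_BARRIER_BIT);

              /* draw pass: parameters come from the buffer, not from API arguments */
              glUseProgram(drawProgram);
              glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuf);
              glDrawArraysIndirect(GL_TRIANGLES, 0);
          }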



          • #55
            Originally posted by efikkan View Post
            Actually no, I'm talking about a low-level programming language, something like a superset of GLSL in the style of CUDA.

            We need to remove the bottleneck of going through the driver and let shader programs control the pipeline themselves, which is the next logical step in GPU programming. Most of the functionality is already present in the latest Nvidia GPUs.

            And BTW, Nvidia does supply GPU assembly.
            It's a restricted subset exposed as a GL extension for writing arbitrary shaders - hardly control of the device. I mean documentation like a book on your ISA, the kind AMD and Intel publish.

            Also, having another standard high-level language that vendors implement through proprietary interfaces to obfuscated hardware ignores the broader problem of why GPUs are in such bad shape: nobody can compete with the status quo implementations on merit, because all these proprietary drivers mean you cannot implement your own API. If I told you that you couldn't program x86 in C and had to use Fortran instead, because that is our sanctioned language and the ISA is a trade secret so we supply the only (proprietary) compiler, you would think it insane.



            • #56
              Originally posted by rice_nine View Post
              There's a huge gap between them. If a GPU can run an OS "by itself", it's probably a SoC.
              Well, even most of our CPUs now are SoCs, so that's not the important thing.

              What I got from you guys talking is that I can run code that doesn't need to be processed first by a CPU - I can have a GPU do it all as long as I write it in assembly. The issue is that it's a huge pain to write things in assembly, and it's just easier if you have a CPU that compiles a language to assembly for the GPU to execute, correct? Or have the GPU natively execute a language like C.



              • #57
                Originally posted by profoundWHALE View Post
                Wait, are you saying that it's possible to run a computer using just a GPU, with no CPU? Like, BIOS, OS, and then Application?
                The BCM VideoCore 4 (e.g. in the RPi) actually includes code that sets up the ARM core during boot, so yeah, at least that GPU probably could.



                • #58
                  Originally posted by profoundWHALE View Post
                  Well, even most of our CPUs now are SoCs, so that's not the important thing.

                  What I got from you guys talking is that I can run code that doesn't need to be processed first by a CPU - I can have a GPU do it all as long as I write it in assembly. The issue is that it's a huge pain to write things in assembly, and it's just easier if you have a CPU that compiles a language to assembly for the GPU to execute, correct? Or have the GPU natively execute a language like C.
                  Modern GPUs work by processing command buffers. Those buffers are filled by the driver using the CPU, but that's a detail...

                  What they want is to be able to write these command buffers directly, with some kind of common GPU command language - a GPU instruction set, I guess.
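
                  Hypothetically, something like this - the opcodes and layout below are invented for illustration, since the real packet formats are per-vendor and mostly undocumented, but it is the kind of buffer a driver builds for every draw call:

                  Code:
                  #include <stdint.h>
                  #include <stddef.h>

                  /* Invented opcodes - real GPUs each have their own packet format. */
                  enum { CMD_SET_PIPELINE = 1, CMD_BIND_VERTEX_BUF = 2, CMD_DRAW = 3 };

                  typedef struct {
                      uint32_t words[64];
                      size_t   len;
                  } CmdBuffer;

                  static void emit(CmdBuffer *cb, uint32_t op, uint32_t arg)
                  {
                      cb->words[cb->len++] = op;
                      cb->words[cb->len++] = arg;
                  }

                  int main(void)
                  {
                      CmdBuffer cb = {0};
                      /* what the driver does on the CPU for a single draw call: */
                      emit(&cb, CMD_SET_PIPELINE, 7);          /* which shaders/state to use */
                      emit(&cb, CMD_BIND_VERTEX_BUF, 0x1000);  /* GPU address of vertex data */
                      emit(&cb, CMD_DRAW, 3);                  /* vertex count */
                      /* a real driver would now submit cb.words to the GPU ring buffer */
                      return 0;
                  }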



                  • #59
                    Originally posted by profoundWHALE View Post
                    Or, have the GPU natively execute a language like C.
                    None of these are good options, really. You don't want a native hardware interpreter; you just want each manufacturer to make their GPUs, release the assembly documentation, and implement, say, an LLVM module that compiles to it from some GPU-esque IR code like GLSL. You could do it at runtime or at compile time, then.

                    Basically, what the AMD LLVM module is, except instead of being GLSL -> IR -> AMD ISA, it would be <common/any> GPU language -> IR (I have never read into the AMD LLVM internals, and I don't know if they are using LLVM IR or their own) -> <vendor> ISA.
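
                    Shape-wise, hypothetically - none of these functions or types exist, they just mark where the boundary would sit between a shared frontend and per-vendor backends that published ISA docs would make possible:

                    Code:
                    #include <stdio.h>

                    typedef struct { const char *text;  } IRModule;  /* stand-in for LLVM IR or similar */
                    typedef struct { const char *bytes; } GpuBinary; /* stand-in for vendor machine code */

                    /* frontend: <common/any> GPU language -> vendor-neutral IR */
                    static IRModule frontend_lower(const char *src, const char *lang)
                    {
                        printf("lowering %s source to IR\n", lang);
                        return (IRModule){ src };
                    }

                    /* backend: IR -> <vendor> ISA; only writable by third parties
                     * if the ISA is publicly documented */
                    static GpuBinary backend_codegen(IRModule ir, const char *isa)
                    {
                        printf("emitting %s machine code\n", isa);
                        return (GpuBinary){ ir.text };
                    }

                    int main(void)
                    {
                        IRModule ir = frontend_lower("void main() {}", "GLSL");
                        backend_codegen(ir, "amdgcn");  /* could happen at runtime or at build time */
                        return 0;
                    }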



                    • #60
                      Originally posted by zanny View Post
                      It's a restricted subset exposed as a GL extension for writing arbitrary shaders - hardly control of the device. I mean documentation like a book on your ISA, the kind AMD and Intel publish.

                      Also, having another standard high-level language that vendors implement through proprietary interfaces to obfuscated hardware ignores the broader problem of why GPUs are in such bad shape: nobody can compete with the status quo implementations on merit, because all these proprietary drivers mean you cannot implement your own API. If I told you that you couldn't program x86 in C and had to use Fortran instead, because that is our sanctioned language and the ISA is a trade secret so we supply the only (proprietary) compiler, you would think it insane.
                      I can agree we would benefit from having more documentation on GPUs from Nvidia.

                      This point aside, like I've mentioned, we need a universal low-level shading language (not assembly) to replace the old OpenGL/Direct3D/Mantle/Glide/etc. style of manipulating graphics through a bunch of API calls to the driver. Doing lots of object manipulations in today's APIs requires an enormous number of API calls, and reducing the API overhead helps only a little. Being able to control the pipeline directly from the GPU would eliminate this bottleneck. A compact universal shading language and a small API would also greatly speed up support across different drivers, compared to the complex extension mechanism in OpenGL.
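
                      To make the overhead concrete, here is roughly the per-object pattern in today's GL - every iteration is several calls into the driver, and a large scene multiplies that by thousands per frame. The struct and names are made up for illustration; this assumes a current GL context and a loader like GLEW:

                      Code:
                      #include <GL/glew.h>  /* assumes a current GL context */

                      /* Illustrative only: the per-object state+draw pattern being criticized. */
                      typedef struct {
                          GLuint  vao;         /* vertex array object */
                          GLint   mvpLoc;      /* uniform location of the MVP matrix */
                          GLfloat mvp[16];
                          GLsizei indexCount;
                      } Object;

                      void draw_scene(GLuint program, const Object *objs, int n)
                      {
                          glUseProgram(program);
                          for (int i = 0; i < n; ++i) {
                              /* three driver round trips per object, every frame */
                              glBindVertexArray(objs[i].vao);
                              glUniformMatrix4fv(objs[i].mvpLoc, 1, GL_FALSE, objs[i].mvp);
                              glDrawElements(GL_TRIANGLES, objs[i].indexCount, GL_UNSIGNED_INT, 0);
                          }
                      }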
                      Last edited by efikkan; 12 August 2014, 04:28 PM.

