Why are graphics as complicated as they are?


  • Why are graphics as complicated as they are?

    I'm not talking about having in-kernel code and user-space code.

    Why do we have the CPU process SO MUCH of our graphics? Even when using OpenGL, a graphics API meant to talk to the GFX card, half the time the work just ends up on the CPU unless you specifically enable "hardware acceleration". Can anybody tell me why we don't use the GFX card for ALL graphics rendering and leave the CPU out of it? From what I've been told, GFX cards have low-power states that they could use to render desktops and stuff.

    This is a serious question that I just thought of at 12:30am, but feel free to rip it to shreds in the interest of educating me.

  • #2
    Originally posted by Daktyl198 View Post
    Even when using OpenGL, a graphics API meant to talk to the GFX card, half the time the work just ends up on the CPU unless you specifically enable "hardware acceleration".
    AFAIK this is simply not the case. There is a fair amount of CPU processing required to translate from HW-independent OpenGL API abstractions to a specific piece of GPU hardware, but normally all of the actual graphics processing/rendering requested via OpenGL *is* done on the GPU.
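
    To illustrate what that CPU-side translation involves, here is a toy sketch; the command words and encoding below are hypothetical, invented purely for illustration, since real command formats are hardware-specific:

    ```c
    #include <stdint.h>

    /* Hypothetical GPU command words -- real encodings are hardware-specific. */
    enum { CMD_SET_SHADER = 0x01, CMD_BIND_BUFFER = 0x02, CMD_DRAW = 0x03 };

    typedef struct {
        uint32_t words[256];
        int      n;
    } CommandBuffer;

    static void emit(CommandBuffer *cb, uint32_t w) { cb->words[cb->n++] = w; }

    /* Roughly what a driver does for one draw call: validate state, then
       encode hardware-specific packets. The CPU only builds and submits
       the packets; the GPU executes them and does the actual rendering. */
    void translate_draw(CommandBuffer *cb, uint32_t shader_id,
                        uint32_t vbo_id, uint32_t vertex_count)
    {
        emit(cb, CMD_SET_SHADER);  emit(cb, shader_id);
        emit(cb, CMD_BIND_BUFFER); emit(cb, vbo_id);
        emit(cb, CMD_DRAW);        emit(cb, vertex_count);
    }
    ```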

    The bigger issue here is that as graphics hardware evolves the optimal graphics API changes as well. That's one of the main reasons you see new APIs and new versions of existing APIs being introduced -- so that an application making full use of the new API mechanisms can operate with lower overhead on modern hardware.

    (the other main reason is to allow more complex and useful processing activities to be offloaded to the GPU, of course)
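
    As a concrete example of that kind of overhead reduction: instanced drawing (core since OpenGL 3.1) replaces thousands of CPU-side draw calls with one, letting the GPU do the iterating. A minimal sketch, assuming a bound VAO and shader; the set_instance_uniform helper is hypothetical:

    ```c
    /* Old style: the CPU pays validation/submission overhead 10000 times. */
    for (int i = 0; i < 10000; i++) {
        set_instance_uniform(i);             /* hypothetical helper */
        glDrawArrays(GL_TRIANGLES, 0, vertex_count);
    }

    /* Newer mechanism: one call, the GPU iterates, and the vertex shader
       reads gl_InstanceID to position each copy. */
    glDrawArraysInstanced(GL_TRIANGLES, 0, vertex_count, 10000);
    ```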



    • #3
      Originally posted by bridgman View Post
      AFAIK this is simply not the case. There is a fair amount of CPU processing required to translate from HW-independent OpenGL API abstractions to a specific piece of GPU hardware
      I don't get this either... why can't we map the API calls to the specific hardware once at boot, or through a function (to handle hot-swapping and other GPU changes), and store the result, so we don't spend precious CPU cycles translating all the time?

      Originally posted by bridgman View Post
      (the other main reason is to allow more complex and useful processing activities to be offloaded to the GPU, of course)
      90% of which happens when the desktop is not in view (games, full-screen videos, etc.) or doesn't take a lot of processing power (hardware-accelerated web browsers/applications), so the argument still stands that the GPU could be used instead of the CPU



      • #4
        Originally posted by Daktyl198 View Post
        I don't get this either... why can't we map the API calls to the specific hardware once at boot, or through a function (to handle hot-swapping and other GPU changes), and store the result, so we don't spend precious CPU cycles translating all the time?
        Because the hardware does not implement them; it works at a much lower level than that. The last time hardware implemented an API directly was Glide, and you know what happened to it.



        • #5
          But now we're returning to the days of Glide. AMD's Mantle is low-level programming, and even the upcoming DirectX 12 will introduce something similar.



          • #6
            Originally posted by Mereo View Post
            But now we're returning to the days of Glide. AMD's Mantle is low-level programming, and even the upcoming DirectX 12 will introduce something similar.
            DX will never be as low-level as Mantle if it wants to effectively and sanely support a more diverse set of hardware. Like curaga said, we know what happened to Glide. It was designed for a specific hardware architecture and simply wasn't flexible enough to serve as a generic graphics API in the long run. The way OpenGL is moving (and has already moved) to tackle overhead problems seems more sustainable, and will certainly be good enough to make Mantle pretty much irrelevant.

            I wonder what happens to your Mantle-based games in the future when GCN is dead and buried? Oh wait, I don't, because no developer hates money enough to make their games exclusively support Mantle.



            • #7
              Why do anything?

              I think the answer to your question is pretty much: "because people want the absolute best graphics possible". You don't have to use a GPU, just like you don't have to use the CPU. You get the best results when everything is maxed out, though.

              So, if you want top-end graphics you'll have to deal with complexity. If you want to write something that looks like Pong... well you can do that pretty easily with modern computer hardware.

              From a technical perspective, if your scene is pretty static, you can push most of the rendering to the GPU. The CPU needs to set stuff up correctly but then the GPU will just draw it. The problems start when your scene starts moving - then you'll need the CPU to do some additional smarts to squeeze that extra 50% from your GPU.
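
              A minimal sketch of that split, assuming a GL context already exists and `vertices`/`vertex_count` are defined elsewhere: the CPU sets things up once, and for a static scene each frame is then almost entirely GPU work:

              ```c
              /* Setup, once: copy the static geometry into GPU memory. */
              GLuint vbo;
              glGenBuffers(1, &vbo);
              glBindBuffer(GL_ARRAY_BUFFER, vbo);
              glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
              glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);
              glEnableVertexAttribArray(0);

              /* Per frame: the CPU's job shrinks to this; the GPU does the drawing. */
              glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
              glDrawArrays(GL_TRIANGLES, 0, vertex_count);
              ```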



              • #8
                Originally posted by Daktyl198 View Post
                I'm not talking about having in-kernel code and user-space code.

                Why do we have the CPU process SO MUCH of our graphics?
                (disclaimer: I'm not an expert)
                I'm guessing here
                you are thinking about absolute efficiency

                thing is, GPUs nowadays are just a bunch of compute units orchestrated by a control unit (there's more to it, of course)
                compute units are simple things
                with that kind of design, GPUs are not limited to doing just one specific kind of "rendering"
                (rendering 3D is just a bunch of mathematical transforms with some logic in the mix; see the sketch below)
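
                a minimal sketch of one such transform, assuming plain C and OpenGL's column-major matrix convention:

                ```c
                #include <stdio.h>

                /* Multiply a 4x4 column-major matrix by a homogeneous point.
                   This is the core operation of 3D rendering: model, view and
                   projection transforms are all just matrices like this. */
                static void transform_point(const float m[16], const float in[4], float out[4])
                {
                    for (int row = 0; row < 4; row++)
                        out[row] = m[0*4 + row] * in[0] + m[1*4 + row] * in[1] +
                                   m[2*4 + row] * in[2] + m[3*4 + row] * in[3];
                }

                int main(void)
                {
                    /* Translation by (2, 0, 0): identity with the offset in the 4th column. */
                    float translate[16] = {
                        1, 0, 0, 0,
                        0, 1, 0, 0,
                        0, 0, 1, 0,
                        2, 0, 0, 1,
                    };
                    float v[4] = { 1, 1, 1, 1 }, out[4];
                    transform_point(translate, v, out);
                    printf("(%.1f, %.1f, %.1f)\n", out[0], out[1], out[2]); /* prints (3.0, 1.0, 1.0) */
                    return 0;
                }
                ```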

                so a GPU driver is basically a state machine that tells the hardware (firmware, in this case) what should be done


                also, about the CPU's part in this:
                even in the case of something simple, like a desktop or a window with some buttons, you still need the logic behind it

                like when you move a window
                you have to calculate where it moved to
                and check, based on rules, things like: if you move it to an edge, do you flip to the next virtual desktop? (etc.)

                but that is simple
                in complex graphics, for example, you don't want the GPU to draw the whole huge world
                so you cull everything that isn't seen
                you do it on the CPU because you have to know in advance what you will be rendering (so you don't send textures, vertices, etc. when they're not needed); see the sketch below
                (this is also required for a desktop if you want lower GPU memory usage)
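
                a rough illustration of that CPU-side culling step (a minimal sketch; the bounding spheres and function names are my own, not from any real engine):

                ```c
                #include <stddef.h>

                /* A plane in the form ax + by + cz + d = 0, normal pointing inward. */
                typedef struct { float a, b, c, d; } Plane;
                typedef struct { float x, y, z, radius; } BoundingSphere;

                /* Return 1 if the sphere is at least partly inside all six frustum
                   planes, i.e. potentially visible and worth sending to the GPU. */
                static int sphere_visible(const Plane frustum[6], BoundingSphere s)
                {
                    for (int i = 0; i < 6; i++) {
                        float dist = frustum[i].a * s.x + frustum[i].b * s.y +
                                     frustum[i].c * s.z + frustum[i].d;
                        if (dist < -s.radius)
                            return 0; /* entirely behind this plane: cull it */
                    }
                    return 1;
                }

                /* CPU-side culling pass: only the visible objects get draw calls. */
                static size_t cull(const Plane frustum[6], const BoundingSphere *objs,
                                   size_t n, size_t *visible_indices)
                {
                    size_t count = 0;
                    for (size_t i = 0; i < n; i++)
                        if (sphere_visible(frustum, objs[i]))
                            visible_indices[count++] = i;
                    return count;
                }
                ```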


                still, I like the idea of directly controlling the GPU
                I read that in the future (or maybe even now, in new OpenGL) you will be able to get a pointer into GPU memory
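
                that has in fact landed in new OpenGL: ARB_buffer_storage (core in OpenGL 4.4) allows persistently mapped buffers, where the CPU keeps a pointer into buffer memory while the GPU uses it. A minimal sketch, assuming a 4.4 context and loaded entry points:

                ```c
                /* Assumes: an OpenGL 4.4 context is current and function pointers
                   are loaded (e.g. via GLEW or glad). Sketch only. */
                GLuint buf;
                glGenBuffers(1, &buf);
                glBindBuffer(GL_ARRAY_BUFFER, buf);

                /* Immutable storage that may stay mapped while the GPU reads it. */
                GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                                   GL_MAP_COHERENT_BIT;
                glBufferStorage(GL_ARRAY_BUFFER, 64 * 1024, NULL, flags);

                /* The pointer stays valid across draw calls -- no map/unmap per frame. */
                void *gpu_ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, 64 * 1024, flags);
                ```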

                also, I think GPUs are going in the direction of having a dedicated CPU (an ARM core or something) on board to control them
                imagine if you could write full-fledged programs to run on a massively parallel GPU (like semi Turing-complete shaders)

                in my eyes the future looks bright in the gpu department

                edit: in short, they are complicated so they don't become even more complicated, but they are slowly simplifying (in general design)
                Last edited by gens; 25 March 2014, 09:23 PM.



                • #9
                  just for fun

                  if you like banging your head against an invisible wall
                  try to do the aforementioned 3D transforms on paper
                  (wait till you get to the conclusion that quaternions are good... that's a shock and a half)
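
                  for reference, the standard identity behind that shock (textbook math, not from this thread): a unit quaternion $q = \cos\frac{\theta}{2} + \sin\frac{\theta}{2}(n_x i + n_y j + n_z k)$ rotates a vector $v$, treated as a pure quaternion, by angle $\theta$ about the unit axis $\hat{n}$:

                  $$v' = q\,v\,q^{-1}$$

                  and composing two rotations is just multiplying their quaternions, which is exactly why they feel like a relief after grinding through rotation matrices by hand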

                  don't forget to rasterize a textured triangle with anisotropic filtering (on paper ofc)


                  (i'd rather recommend raytracing, it is much simpler)
                  Last edited by gens; 25 March 2014, 09:39 PM.



                  • #10
                    As I would say on Slashdot...

                    Originally posted by gens View Post
                    just for fun

                    if you like banging your head against an invisible wall
                    try to do the aforementioned 3D transforms on paper
                    (wait till you get to the conclusion that quaternions are good... that's a shock and a half)

                    don't forget to rasterize a textured triangle with anisotropic filtering (on paper ofc)


                    (i'd rather recommend raytracing, it is much simpler)
                    Mod parent up!

