Why are graphics as complicated as they are?

  • Why are graphics as complicated as they are?

    I'm not talking about having in-kernel code and user-space code.

    Why do we have the CPU process SO MUCH of our graphics? Even when using OpenGL, a graphics API meant to talk to the GFX card, half the time it just sends that to the CPU unless specified to use "hardware acceleration". Can anybody tell me why we don't use the GFX card for ALL graphics rendering and leave the CPU out of it? From what I've been told, GFX cards have low-power states that they could use to render desktops and stuff.

    This is a serious question that I just thought of at 12:30am, but feel free to rip it to shreds in the interest of educating me.

  • #2
    Originally posted by Daktyl198 View Post
    Even when using OpenGL, a graphics API meant to talk to the GFX card, half the time it just sends that to the CPU unless specified to use "hardware acceleration".
    AFAIK this is simply not the case. There is a fair amount of CPU processing required to translate from HW-independent OpenGL API abstractions to a specific piece of GPU hardware, but normally all of the actual graphics processing/rendering requested via OpenGL *is* done on the GPU.

    The bigger issue here is that as graphics hardware evolves the optimal graphics API changes as well. That's one of the main reasons you see new APIs and new versions of existing APIs being introduced -- so that an application making full use of the new API mechanisms can operate with lower overhead on modern hardware.

    (the other main reason is to allow more complex and useful processing activities to be offloaded to the GPU, of course)
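    To make the translation point concrete, here is a toy sketch of the kind of CPU-side work bridgman describes: the driver turns an abstract, hardware-independent draw request into command words for one specific GPU's command buffer. Every opcode and name below is invented for illustration; real driver internals look nothing this simple.

```python
# Toy sketch (not real driver code): a CPU-side driver translating an
# abstract draw call into command words for one fictional GPU.
OPCODES = {"set_shader": 0x10, "set_vertex_buffer": 0x20, "draw": 0x30}

def translate_draw(shader_id, vbo_id, vertex_count):
    """Emit a list of (opcode, operand) command words for a fictional GPU."""
    cmds = []
    cmds.append((OPCODES["set_shader"], shader_id))      # bind pipeline state
    cmds.append((OPCODES["set_vertex_buffer"], vbo_id))  # point the GPU at the data
    cmds.append((OPCODES["draw"], vertex_count))         # kick off the draw
    return cmds

cmd_buffer = translate_draw(shader_id=3, vbo_id=7, vertex_count=36)
```

    The reason this can't be precomputed once at boot (as asked below) is that the operands change with every draw call the application makes; the translation is per-call work, not a one-time mapping.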

    Comment


    • #3
      Originally posted by bridgman View Post
      AFAIK this is simply not the case. There is a fair amount of CPU processing required to translate from HW-independent OpenGL API abstractions to a specific piece of GPU hardware
      I don't get this either... why can't we map the API calls to the specific hardware at boot or through a function (for hot-swapping/other GPU changes) and store the result, that way we don't spend precious CPU cycles translating all the time?

      Originally posted by bridgman View Post
      (the other main reason is to allow more complex and useful processing activities to be offloaded to the GPU, of course)
      90% of which happens when the desktop is not in view (games, full-screen videos, etc.) or doesn't take much processing power (hardware-accelerated web browsers/applications), so the argument still stands that the GPU could be used instead of the CPU

      Comment


      • #4
        Originally posted by Daktyl198 View Post
        I don't get this either... why can't we map the API calls to the specific hardware at boot or through a function (for hot-swapping/other GPU changes) and store the result, that way we don't spend precious CPU cycles translating all the time?
        Because the hardware doesn't implement them; it's much lower level. The last time hardware implemented an API directly was Glide, and you know what happened to it.

        Comment


        • #5
          But now we're going back to the days of Glide. AMD's Mantle is low-level programming, and even the upcoming DirectX 12 will introduce it.

          Comment


          • #6
            Originally posted by Mereo View Post
            But now, we're returning back to the days of Glide. AMD's Mantle is low level programming and even the next DirectX 12 will introduce it.
            DX will never be as low level as mantle if it wants to effectively and sanely support a more diverse set of hardware. Like curaga said, we know what happened to glide. It was designed for a specific hardware architecture and simply wasn't flexible enough to serve as a generic graphics api in the long run. The way OpenGL is moving (and has already moved) to tackle problems with overhead seems more sustainable and will certainly be good enough to make mantle pretty much irrelevant.

            I wonder what happens to your mantle-based games in the future when gcn is dead and buried? Oh wait, I don't, because no developer hates money enough to make their games exclusively support mantle.

            Comment


            • #7
              Why do anything?

              I think the answer to your question is pretty much: "because people want the absolute best graphics possible". You don't have to use a GPU, just like you don't have to use the CPU. You get the best results when everything is maxed out, though.

              So, if you want top-end graphics you'll have to deal with complexity. If you want to write something that looks like Pong... well you can do that pretty easily with modern computer hardware.

              From a technical perspective, if your scene is pretty static, you can push most of the rendering to the GPU. The CPU needs to set stuff up correctly but then the GPU will just draw it. The problems start when your scene starts moving - then you'll need the CPU to do some additional smarts to squeeze that extra 50% from your GPU.
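              The static-scene point can be sketched like this (a toy model, all names invented): the CPU pays the setup cost once, and after that each frame just resubmits the recorded work to the GPU. Only when the scene changes does the CPU get involved again.

```python
# Toy model of "CPU sets up, GPU draws": the command list is recorded once
# and resubmitted every frame until the scene is invalidated.
class Renderer:
    def __init__(self):
        self.recorded = None
        self.cpu_setups = 0   # counts how often the CPU had to rebuild state

    def frame(self, objects):
        if self.recorded is None:                       # first frame, or scene changed:
            self.cpu_setups += 1                        # CPU rebuilds the command list
            self.recorded = [("draw", o) for o in objects]
        return self.recorded                            # otherwise just resubmit to the GPU

    def invalidate(self):                               # call when the scene moves
        self.recorded = None

r = Renderer()
for _ in range(100):                                    # 100 frames of a static scene
    r.frame(["teapot", "floor"])                        # CPU only set up once
```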

              Comment


              • #8
                Originally posted by Daktyl198 View Post
                I'm not talking about having in-kernel code and user-space code.

                Why do we have the CPU process SO MUCH of our graphics?
                (disclaimer: i'm not an expert)
                i'm guessing here
                you are thinking about absolute efficiency

                thing is, GPUs are nowadays just a bunch of compute units orchestrated by a control unit (there's more, ofc)
                compute units are simple things
                with that kind of design GPUs are not limited to doing just one specific kind of "rendering"
                (rendering 3D is just a bunch of mathematical transforms with some logic in the mix)

                so a gpu driver is basically a state machine that tells the hardware (firmware in this case) what should be done


                also about the cpu part in it
                even in a case of something simple like a desktop or a window with some buttons or something you still need the logic behind it

                like when you move a window
                you have to calculate where it is moved
                check, based on rules, things like if you move it to an edge do you flip to the next virtual desktop (etc)
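The edge-flip rule just described could look something like this in a window manager (screen width, thresholds, and names all made up for illustration):

```python
# Minimal sketch of window-manager drag logic: dragging a window past the
# screen edge switches to the adjacent virtual desktop.
SCREEN_WIDTH = 1920

def handle_drag(window_x, desktop):
    """Return (new_x, new_desktop) after a window drag."""
    if window_x < 0:                  # dragged past the left edge
        return SCREEN_WIDTH - 1, desktop - 1
    if window_x >= SCREEN_WIDTH:      # dragged past the right edge
        return 0, desktop + 1
    return window_x, desktop          # normal move, same desktop
```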

                but that is simple
                in complex graphics for example you don't want the gpu to draw the whole huge world
                so you cull everything not seen
                you do it on the cpu because you have to know in advance what you will be rendering (to not send textures when not needed, vertices, etc.)
                (this is also required for a desktop if you want lower gpu memory usage)
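A minimal sketch of that CPU-side culling (testing against one frustum plane instead of the usual six, purely illustrative): only objects on the visible side get submitted to the GPU at all.

```python
# CPU-side visibility culling: test each object's bounding sphere against a
# plane and only keep the visible ones. A real culler tests all six frustum planes.
def visible(center, radius, plane_normal, plane_d):
    """True if a bounding sphere is on the visible side of a plane."""
    nx, ny, nz = plane_normal
    cx, cy, cz = center
    dist = nx * cx + ny * cy + nz * cz + plane_d   # signed distance to the plane
    return dist > -radius

objects = [
    ("tree",  (0.0, 0.0, -5.0), 1.0),   # in front of the camera (looking down -z)
    ("house", (0.0, 0.0,  3.0), 1.0),   # behind the camera
]
# Near plane facing down -z at z = -1: normal (0, 0, -1), d = -1
draw_list = [name for name, c, r in objects
             if visible(c, r, (0.0, 0.0, -1.0), -1.0)]   # only "tree" survives
```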


                still i like the idea of directly controlling the gpu
                i read something that in the future (or maybe even now in new opengl) you will be able to get a pointer in gpu memory

                also i think gpus are going in the direction of having a dedicated cpu (like ARM or something) on them that would control it
                imagine you could write full-fledged programs to run on a massively parallel gpu (like semi Turing-complete shaders)

                in my eyes the future looks bright in the gpu department

                edit: in short, they are complicated so they don't become even more complicated, but they are slowly simplifying (in general design)
                Last edited by gens; 03-25-2014, 09:23 PM.

                Comment


                • #9
                  just for fun

                  if you like banging your head against an invisible wall
                  try to do the aforementioned 3D transforms on paper
                  (wait till you get to the conclusion that quaternions are good... that's a shock and a half)

                  don't forget to rasterize a textured triangle with anisotropic filtering (on paper ofc)


                  (i'd rather recommend raytracing, it is much simpler)
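For anyone who would rather skip the paper: the quaternion rotation gens mentions, v' = q v q*, fits in a few lines. Rotating (1, 0, 0) by 90 degrees around the z axis should land on (0, 1, 0).

```python
# Rotate a 3D vector by a unit quaternion via the sandwich product v' = q v q*.
import math

def quat_mul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(v, axis, angle):
    """Rotate vector v by `angle` radians around a unit `axis`."""
    half = angle / 2.0
    s = math.sin(half)
    q = (math.cos(half), axis[0]*s, axis[1]*s, axis[2]*s)   # unit quaternion
    q_conj = (q[0], -q[1], -q[2], -q[3])
    w, x, y, z = quat_mul(quat_mul(q, (0.0, *v)), q_conj)
    return (x, y, z)

vx, vy, vz = rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)
```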
                  Last edited by gens; 03-25-2014, 09:39 PM.

                  Comment


                  • #10
                    As I would say on the slashdot...

                    Originally posted by gens View Post
                    just for fun

                    if you like banging your head against an invisible wall
                    try to do the aforementioned 3D transforms on paper
                    (wait till you get to the conclusion that quaternions are good... that's a shock and a half)

                    don't forget to rasterize a textured triangle with anisotropic filtering (on paper ofc)


                    (i'd rather recommend raytracing, it is much simpler)
                    Mod parent up!

                    Comment


                    • #11
                      I understand that a lot of computations are done on the CPU, then the results are sent to the GPU. I was talking more about stuff like this though:

                      Watching a 1080p video normally, and watching a 1080p video with "hardware acceleration".
                      I assume the first means that all decoding and graphics processing is done on the CPU (assuming a non-OGL rendering method) while the second means using the GPU for both operations. If this is true, why wouldn't the GPU be used in the first place? Since it's obviously made for tasks such as these, vs the CPU which (for the most part) is not.

                      Comment


                      • #12
                        Originally posted by Daktyl198 View Post
                        I understand that a lot of computations are done on the CPU, then the results are sent to the GPU. I was talking more about stuff like this though:

                        Watching a 1080p video normally, and watching a 1080p video with "hardware acceleration".
                        I assume the first means that all decoding and graphics processing is done on the CPU (assuming a non-OGL rendering method) while the second means using the GPU for both operations. If this is true, why wouldn't the GPU be used in the first place? Since it's obviously made for tasks such as these, vs the CPU which (for the most part) is not.
                        not every GPU can do it, and not everything related to video decoding is feasible to do on the GPU. it greatly depends on the capabilities of the given chip. also, GPUs have very strict support for certain codecs, while the CPU can handle anything decoder software can do.

                        there are dedicated SoCs on the market that can decode video on their own, mostly Realtek RTD chips, or whatever is in android devices and modern gaming consoles. those are very specific too, and making them handle any new codecs requires a firmware update or a total replacement of the hardware. basically this solution is not very flexible; also, certain codecs require lots of licensing (video patents).
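The fallback logic described here can be sketched like this (codec names illustrative, not tied to any real driver's support matrix): try the GPU's fixed-function decode block for codecs it supports, and fall back to the software decoder for everything else.

```python
# Hedged sketch of hardware-decode fallback: the GPU's fixed-function block
# handles a short, fixed list of codecs; software can decode anything.
HW_DECODABLE = {"h264", "mpeg2", "vc1"}   # what this (hypothetical) GPU supports

def pick_decoder(codec, hw_accel_requested=True):
    """Return which decoder a player would use for a given codec."""
    if hw_accel_requested and codec in HW_DECODABLE:
        return "gpu"    # dedicated decode block: low power, but inflexible
    return "cpu"        # software path: flexible, works for any codec
```

This is also why players ship both paths: a new codec (say, one newer than the GPU's decode block) simply falls through to the CPU.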
                        Last edited by yoshi314; 03-27-2014, 07:12 AM.

                        Comment


                        • #13
                          Yep, GPU hardware doesn't handle all video formats, and people preparing an OS can't assume that every piece of GPU hardware will have full driver support from day one either, so typically OSes ship with a full SW decode/render stack then drivers plug in acceleration options for the more commonly used formats.

                          Comment


                          • #14
                            I think what he means is: "Why are GPUs so specialized?" And as far as the CPU having to set things up, isn't that what the control units on the GPU are supposed to be doing?
                            I imagine a crapload of little CPUs when I think of a GPU and I understand that it's easier to make a single powerful processing core pretend it's multiple in terms of programming complexity. But I still can't figure out why we can't have a GPU do most things that the CPU can do. Rather, it does a few things a LOT better than the CPU.

                             What I mean is: isn't being able to upgrade the hardware via software worth the performance hit? [which I think could become on par with, or very close to, the hardware-based decoders]
                            Last edited by profoundWHALE; 03-27-2014, 07:51 PM.

                            Comment


                            • #15
                              There aren't a lot of specialized areas on a GPU these days -- texture filtering is the main one for regular graphics work, and dedicated video encode/decode processing is generally aimed at letting you perform specific operations without having to rely so much on the main GPU core *or* CPU cores, both of which draw more power. Most of a GPU these days is SIMD floating point processors, memory controllers or register files.

                              In general CPUs are optimized for single thread processing while GPUs are optimized for massively parallel processing (each element might be only 1/10th as fast as a typical CPU core but manages that with 1/200th the space and power) and stream processing (eg big delayed-write caches without logic to detect read-after-write hazards). CPUs devote a lot of logic to maintaining a simple, coherent programming model, while GPUs toss most of that out the window in exchange for much higher performance. Sports car vs. muscle car.

                              GPUs don't take over all the work normally done by a CPU (the inherently single-threaded part) because they would essentially have to become CPUs themselves in order to do that.
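A toy illustration of that split (not a benchmark, just the dependency structure): GPU-friendly work applies the same independent operation to every element, so thousands of simple cores can run it at once, while the serial part carries a dependency chain that no amount of parallel hardware can break up.

```python
# GPU-style: every element is independent, so a GPU runs these "threads"
# in parallel. CPU-style: each step needs the previous result, so it is
# inherently single-threaded no matter how many cores you have.
def gpu_style(pixels):
    return [min(p * 2, 255) for p in pixels]   # e.g. a brightness shader

def cpu_style(values):
    acc = 0
    out = []
    for v in values:
        acc = acc * 31 + v      # dependency chain: can't split across cores
        out.append(acc)
    return out

bright = gpu_style([10, 200, 60])   # -> [20, 255, 120]
chain = cpu_style([1, 2, 3])        # -> [1, 33, 1026]
```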
                              Last edited by bridgman; 03-27-2014, 08:55 PM.

                              Comment
