
Thread: Why are graphics as complicated as they are?

  1. #1
    Join Date
    Jul 2013
    Posts
    398

    Default Why are graphics as complicated as they are?

    I'm not talking about having in-kernel code and user-space code.

    Why do we have the CPU process SO MUCH of our graphics? Even with OpenGL, a graphics API meant to talk to the GFX card, half the time the work seems to end up on the CPU unless "hardware acceleration" is explicitly enabled. Can anybody tell me why we don't use the GFX card for ALL graphics rendering and leave the CPU out of it? From what I've been told, GFX cards have low-power states they could use to render desktops and the like.

    This is a serious question that I just thought of at 12:30am, but feel free to rip it to shreds in the interest of educating me.

  2. #2
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,514

    Default

    Quote Originally Posted by Daktyl198 View Post
    Even with OpenGL, a graphics API meant to talk to the GFX card, half the time the work seems to end up on the CPU unless "hardware acceleration" is explicitly enabled.
    AFAIK this is simply not the case. There is a fair amount of CPU processing required to translate from HW-independent OpenGL API abstractions to a specific piece of GPU hardware, but normally all of the actual graphics processing/rendering requested via OpenGL *is* done on the GPU.
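    To make that translation step concrete, here is a toy sketch (not real driver code -- GpuCommand, commandRing and the OP_* opcodes are all invented for illustration):

    Code:
    #include <cstdint>
    #include <vector>

    struct GpuCommand { uint32_t opcode; uint32_t args[4]; };
    std::vector<GpuCommand> commandRing;         // stand-in for a GPU ring buffer

    constexpr uint32_t OP_SET_PRIMITIVE = 0x10;  // hypothetical hardware opcodes
    constexpr uint32_t OP_DRAW          = 0x20;

    // roughly what a HW-independent call like
    // glDrawArrays(GL_TRIANGLES, first, count) might be turned into:
    void driverDrawArrays(uint32_t hwPrimType, uint32_t first, uint32_t count) {
        commandRing.push_back({OP_SET_PRIMITIVE, {hwPrimType, 0, 0, 0}});
        commandRing.push_back({OP_DRAW,          {first, count, 0, 0}});
        // the CPU only built two small packets; vertex processing,
        // rasterization and shading all still happen on the GPU
    }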

    The bigger issue here is that as graphics hardware evolves the optimal graphics API changes as well. That's one of the main reasons you see new APIs and new versions of existing APIs being introduced -- so that an application making full use of the new API mechanisms can operate with lower overhead on modern hardware.

    (the other main reason is to allow more complex and useful processing activities to be offloaded to the GPU, of course)

  3. #3
    Join Date
    Jul 2013
    Posts
    398

    Default

    Quote Originally Posted by bridgman View Post
    AFAIK this is simply not the case. There is a fair amount of CPU processing required to translate from HW-independent OpenGL API abstractions to a specific piece of GPU hardware
    I don't get this either... why can't we map the API calls to the specific hardware at boot, or through a function (to handle hot-swapping and other GPU changes), and store the result so we don't spend precious CPU cycles translating all the time?

    Quote Originally Posted by bridgman View Post
    (the other main reason is to allow more complex and useful processing activities to be offloaded to the GPU, of course)
    90% of which happens when the desktop is not in view (games, full-screen videos, etc.) or doesn't take much processing power (hardware-accelerated web browsers/applications), so the argument still stands that the GPU could be used instead of the CPU.

  4. #4
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    5,188

    Default

    Quote Originally Posted by Daktyl198 View Post
    I don't get this either... why can't we map the API calls to the specific hardware at boot, or through a function (to handle hot-swapping and other GPU changes), and store the result so we don't spend precious CPU cycles translating all the time?
    Because the hardware doesn't implement the API calls; it is much lower level than that. The last time hardware implemented them directly was Glide, and you know what happened to it.
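    To illustrate why (a hypothetical sketch -- none of these names are real): the packets emitted for one draw call depend on the runtime GL state, so there is no fixed mapping you could compute once at boot and store:

    Code:
    #include <cstdint>

    struct GlState {                 // tiny stand-in for the current GL state
        bool     blendEnabled;
        uint32_t boundTexture;
    };

    // the emitted packet differs for every state combination, and a real
    // state vector has astronomically many combinations -- you can only
    // encode the one you need right now, not precompute them all
    uint32_t encodeDraw(const GlState& s, uint32_t vertexCount) {
        uint32_t packet = 0x20000000u | (vertexCount & 0xFFFFu);
        if (s.blendEnabled) packet |= 0x00010000u;
        packet |= (s.boundTexture & 0xFFu) << 20;
        return packet;
    }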

  5. #5
    Join Date
    Nov 2009
    Posts
    15

    Default

    But now we're heading back toward the days of Glide: AMD's Mantle is low-level programming, and even the upcoming DirectX 12 will introduce the same approach.

  6. #6
    Join Date
    Sep 2013
    Posts
    125

    Default

    Quote Originally Posted by Mereo View Post
    But now we're heading back toward the days of Glide: AMD's Mantle is low-level programming, and even the upcoming DirectX 12 will introduce the same approach.
    DX will never be as low-level as Mantle if it wants to effectively and sanely support a more diverse set of hardware. Like curaga said, we know what happened to Glide: it was designed for one specific hardware architecture and simply wasn't flexible enough to serve as a generic graphics API in the long run. The way OpenGL is moving (and has already moved) to tackle its overhead problems seems more sustainable, and will likely be good enough to make Mantle pretty much irrelevant.
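    For the record, this is what that direction looks like: a sketch using GL 4.3's multi-draw-indirect (a real call; the sketch assumes a loaded 4.3 context and that the shaders, VAO and the indirectBuf buffer were set up earlier):

    Code:
    #include <vector>
    // assumes a GL 4.3 context plus a loader (GLEW/GLAD) providing gl*

    struct DrawArraysIndirectCommand {       // field layout fixed by the GL spec
        GLuint count, instanceCount, first, baseInstance;
    };

    void submitScene(GLuint indirectBuf) {
        std::vector<DrawArraysIndirectCommand> cmds(1000);
        // ... fill cmds[i] with each object's draw parameters ...
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuf);
        glBufferData(GL_DRAW_INDIRECT_BUFFER,
                     cmds.size() * sizeof(cmds[0]), cmds.data(), GL_DYNAMIC_DRAW);
        // one CPU call submits all 1000 draws; the GPU fetches the parameters
        glMultiDrawArraysIndirect(GL_TRIANGLES, nullptr, (GLsizei)cmds.size(), 0);
    }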

    I wonder what happens to your Mantle-based games in the future, when GCN is dead and buried? Oh wait, I don't, because no developer hates money enough to make their games support Mantle exclusively.

  7. #7
    Join Date
    Apr 2010
    Posts
    69

    Default Why do anything?

    I think the answer to your question is pretty much: "because people want the absolute best graphics possible". You don't have to use a GPU, just like you don't have to use the CPU. You get the best results when everything is maxed out, though.

    So, if you want top-end graphics you'll have to deal with complexity. If you want to write something that looks like Pong... well you can do that pretty easily with modern computer hardware.

    From a technical perspective, if your scene is mostly static, you can push most of the rendering to the GPU: the CPU sets everything up once and then the GPU just draws it. The problems start when your scene starts moving; then you need the CPU to do some additional work to squeeze the remaining performance out of your GPU.
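    To illustrate the static case (a minimal sketch; shader setup, window creation, and the windowShouldClose/swapBuffers helpers are assumed to come from your windowing library):

    Code:
    // assumes a GL 3.3+ context and a loader providing the gl* functions
    void runStaticScene(const float* verts, GLsizeiptr vertsBytes, GLsizei vertexCount) {
        // one-time setup: the CPU uploads the geometry once...
        GLuint vao, vbo;
        glGenVertexArrays(1, &vao);
        glGenBuffers(1, &vbo);
        glBindVertexArray(vao);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, vertsBytes, verts, GL_STATIC_DRAW);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
        glEnableVertexAttribArray(0);

        // ...then the per-frame CPU work for a static scene is nearly nothing
        while (!windowShouldClose()) {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glDrawArrays(GL_TRIANGLES, 0, vertexCount);
            swapBuffers();
        }
    }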

  8. #8
    Join Date
    May 2012
    Posts
    550

    Default

    Quote Originally Posted by Daktyl198 View Post
    I'm not talking about having in-kernel code and user-space code.

    Why do we have the CPU process SO MUCH of our graphics?
    (disclaimer: i'm not an expert)
    i'm guessing here, but
    you are thinking about absolute efficiency

    the thing is, gpus nowadays are just a bunch of compute units orchestrated by a control unit (there's more to it, of course)
    compute units are simple things
    with that kind of design, gpus are not limited to doing just one specific kind of "rendering"
    (rendering 3D is just a bunch of mathematical transforms with some logic in the mix)

    so a gpu driver is basically a state machine that tells the hardware (the firmware, in this case) what should be done
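    to make "a bunch of mathematical transforms" concrete, here is the per-vertex math written out on the cpu (plain textbook math, nothing gpu- or driver-specific):

    Code:
    #include <array>

    using Vec4 = std::array<float, 4>;
    using Mat4 = std::array<Vec4, 4>;   // row-major 4x4 matrix for this sketch

    // every vertex gets multiplied by a model-view-projection matrix;
    // a gpu does exactly this kind of arithmetic, in parallel, across
    // thousands of simple compute units
    Vec4 transform(const Mat4& mvp, const Vec4& v) {
        Vec4 out{};
        for (int row = 0; row < 4; ++row)
            for (int col = 0; col < 4; ++col)
                out[row] += mvp[row][col] * v[col];
        return out;   // next: divide by out[3], then rasterize
    }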


    also, about the cpu's part in this:
    even in the case of something simple like a desktop, or a window with some buttons, you still need the logic behind it

    like when you move a window:
    you have to calculate where it has moved to
    and check, based on rules, things like whether moving it to an edge should flip to the next virtual desktop (etc.)

    but that is the simple case
    in complex graphics, you don't want the gpu to draw the whole huge world
    so you cull everything that can't be seen
    you do the culling on the cpu because you have to know in advance what you will be rendering (so you don't send textures, vertices, etc. that aren't needed)
    (this is also required for a desktop if you want lower gpu memory usage)
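    a rough sketch of that cpu-side culling (the Plane and Sphere types are invented for illustration): test each object's bounding sphere against the six frustum planes and only submit the draws that pass:

    Code:
    // nx*x + ny*y + nz*z + d = 0, normal pointing into the frustum
    struct Plane  { float nx, ny, nz, d; };
    struct Sphere { float x, y, z, radius; };

    bool visible(const Sphere& s, const Plane frustum[6]) {
        for (int i = 0; i < 6; ++i) {
            float dist = frustum[i].nx * s.x + frustum[i].ny * s.y
                       + frustum[i].nz * s.z + frustum[i].d;
            if (dist < -s.radius) return false;   // fully outside this plane
        }
        return true;                              // possibly visible: submit it
    }
    // the draw loop skips objects that fail the test, so their textures
    // and vertices never have to be sent to the gpu that frame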


    still, i like the idea of directly controlling the gpu
    i read somewhere that in the future (or maybe even now, in new opengl) you will be able to get a pointer into gpu memory

    also, i think gpus are going in the direction of having a dedicated cpu (an ARM core or something) on board to control them
    imagine being able to write full-fledged programs to run on a massively parallel gpu (like semi Turing-complete shaders)
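    (the pointer-into-gpu-memory part is real, by the way: OpenGL 4.4's ARB_buffer_storage gives you persistently mapped buffers. a minimal sketch, assuming a 4.4 context and a loader providing the gl* functions:)

    Code:
    // returns a pointer the cpu can keep writing through for the buffer's
    // whole lifetime -- the gpu reads the same memory, no glBufferData copies
    float* mapPersistent(GLuint buf, GLsizeiptr size) {   // buf: from glGenBuffers
        glBindBuffer(GL_ARRAY_BUFFER, buf);
        GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT
                         | GL_MAP_COHERENT_BIT;
        glBufferStorage(GL_ARRAY_BUFFER, size, nullptr, flags);  // immutable storage
        return (float*)glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
    }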

    in my eyes the future looks bright in the gpu department

    edit: in short, they are complicated so they don't become even more complicated, but they are slowly getting simpler (in overall design)
    Last edited by gens; 03-25-2014 at 09:23 PM.

  9. #9
    Join Date
    May 2012
    Posts
    550

    Default

    just for fun

    if you like banging your head against an invisible wall,
    try doing the aforementioned 3D transforms on paper
    (wait till you reach the conclusion that quaternions are good... that's a shock and a half)

    don't forget to rasterize a textured triangle with anisotropic filtering (on paper, of course)


    (actually, i'd recommend raytracing instead, it is much simpler)
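    (for the curious, the quaternion trick alluded to above is standard textbook math, nothing from this thread: a rotation of a point v by angle \theta about a unit axis \hat{u} uses

        q = \cos(\theta/2) + (u_x i + u_y j + u_z k)\,\sin(\theta/2), \qquad p' = q\,p\,q^{-1}

    where v is embedded as the pure quaternion p = (0, v) and p' carries the rotated point)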
    Last edited by gens; 03-25-2014 at 09:39 PM.

  10. #10
    Join Date
    Feb 2008
    Location
    Israel, Tel-Aviv
    Posts
    73

    Default As I would say on the slashdot...

    Quote Originally Posted by gens View Post
    just for fun

    if you like banging your head against an invisible wall,
    try doing the aforementioned 3D transforms on paper
    (wait till you reach the conclusion that quaternions are good... that's a shock and a half)

    don't forget to rasterize a textured triangle with anisotropic filtering (on paper, of course)


    (actually, i'd recommend raytracing instead, it is much simpler)
    Mod parent up!
