
Thread: The Power Consumption & Efficiency Of Open-Source GPU Drivers

  1. #11
    Join Date
    Nov 2012
    Location
    France
    Posts
    625

    Default

    Capping your frame rate at roughly your refresh rate (maybe 61 FPS on a 60 Hz screen to reduce tearing) is the best idea: it's just as smooth and saves a huge amount of power (which in turn reduces noise, temperature, and hardware wear…). If only game developers set that as the default…

    Most “progamers” will say it sucks, but at least your graphics card doesn't burn for nothing.
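    For what it's worth, a minimal sketch of what such a cap boils down to, assuming POSIX clock_gettime/nanosleep and a 60 Hz target; render_frame() is a hypothetical placeholder, not any real engine's API:

    Code:
        /* Minimal frame-cap sketch: draw one frame, then sleep off whatever
         * is left of the frame budget instead of immediately drawing again.
         * Assumes POSIX clock_gettime/nanosleep and a 60 Hz target;
         * render_frame() is a hypothetical placeholder. */
        #include <time.h>
        #include <stdint.h>

        extern void render_frame(void);   /* hypothetical: draws one frame */

        #define TARGET_HZ 60
        static const int64_t FRAME_BUDGET_NS = 1000000000LL / TARGET_HZ;

        static int64_t now_ns(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
        }

        void capped_loop(void)
        {
            for (;;) {
                int64_t start = now_ns();
                render_frame();

                /* The sleep is where the power savings come from: the GPU
                 * idles instead of drawing frames nobody will ever see. */
                int64_t remaining = FRAME_BUDGET_NS - (now_ns() - start);
                if (remaining > 0) {
                    struct timespec ts = { 0, (long)remaining };
                    nanosleep(&ts, NULL);
                }
            }
        }

    Capping at 61 instead of 60, as suggested above, just means a slightly smaller budget per frame.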

  2. #12
    Join Date
    Dec 2010
    Location
    MA, USA
    Posts
    1,485

    Default

    Quote Originally Posted by Calinou View Post
    Capping your frame rate at roughly your refresh rate (maybe 61 FPS on a 60 Hz screen to reduce tearing) is the best idea: it's just as smooth and saves a huge amount of power (which in turn reduces noise, temperature, and hardware wear…). If only game developers set that as the default…

    Most “progamers” will say it sucks, but at least your graphics card doesn't burn for nothing.
    As others have pointed out, vsync can have detrimental effects on input latency. Also, there was a time when it actually had a serious negative impact on performance, though I haven't seen a game where that happens in a very long time, and even then it was only noticeable on low-end GPUs.

    As for GPUs burning up, that really only applies to a small handful of poorly designed models, such as the GTX 480. I'm not saying that GPU was crap, but it was a relatively lazy design and its TDP shows it. I don't feel so great about the R9 290X either, but at least that GPU won't kill itself.

  3. #13
    Join Date
    May 2012
    Posts
    677

    Default

    Quote Originally Posted by schmidtbag View Post
    As others have pointed out, vsync can have detrimental effects on input latency. Also, there was a time when it actually had a serious negative impact on performance, though I haven't seen a game where that happens in a very long time, and even then it was only noticeable on low-end GPUs.
    As others have pointed out something they read on teh internets…

    It depends on the engine, on the main loop to be more precise.

    Drawing takes a finite amount of time, and that time can be "predicted" (more or less). So if the engine adapts its timings, starting each frame as late as it safely can, latency can get pretty low even with vsync, while still leaving some headroom. (The problem being, of course, that if a deadline is missed, a frame gets dropped.) A rough sketch of the idea follows.
    More on loops: http://entropyinteractive.com/2011/0...the-game-loop/
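    A minimal sketch, assuming a fixed 60 Hz deadline; poll_input() and render_frame() are hypothetical placeholders, and the moving-average predictor is just one way to do the "prediction":

    Code:
        /* Adaptive-loop sketch: predict how long drawing takes, sleep until
         * just before the vsync deadline, then sample input and draw.
         * Reading input as late as possible is what keeps latency low.
         * Assumes a fixed 60 Hz deadline; poll_input()/render_frame() are
         * hypothetical placeholders. */
        #include <time.h>
        #include <stdint.h>

        extern void poll_input(void);
        extern void render_frame(void);

        #define PERIOD_NS   (1000000000LL / 60)
        #define HEADROOM_NS 2000000LL          /* 2 ms safety margin */

        static int64_t now_ns(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
        }

        void adaptive_loop(void)
        {
            int64_t deadline  = now_ns() + PERIOD_NS;
            int64_t predicted = PERIOD_NS / 2;   /* initial draw-time guess */

            for (;;) {
                /* Sleep until just before we must start drawing. */
                int64_t slack = deadline - predicted - HEADROOM_NS - now_ns();
                if (slack > 0) {
                    struct timespec ts = { slack / 1000000000LL,
                                           slack % 1000000000LL };
                    nanosleep(&ts, NULL);
                }

                poll_input();                    /* sampled late: fresh input */
                int64_t t0 = now_ns();
                render_frame();

                /* Exponential moving average as the "prediction". */
                predicted = (7 * predicted + (now_ns() - t0)) / 8;

                deadline += PERIOD_NS;
                if (now_ns() > deadline)              /* deadline missed:    */
                    deadline = now_ns() + PERIOD_NS;  /* frame drops, resync */
            }
        }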

    There is also latency added by the Xorg server (and a little from the whole input stack in the kernel), though I think that was reduced enough a while ago.

    Then there is the latency that double and triple buffering add, and so on and so forth.

    Anyway, I play CS 1.6 and Xonotic with vsync on. Turning it off would probably help me, but the counter-argument is the awful tearing when turning more than 90° in a fraction of a second. And hey, at least it's not hellishly hot in my room.

    PS: the Source engine, the Dota 2 version to be precise, drops to half the sync rate if too many frames are dropped. This also reduces jitter, and jitter is bad for humans (it may or may not be more apparent without vsync). A rough sketch of that fallback is below.
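    Not Valve's actual code, just the idea sketched with invented numbers (the window size and miss threshold are made up):

    Code:
        /* Half-rate fallback sketch: if too many recent frames missed the
         * vsync deadline, target every second refresh (60 Hz -> steady
         * 30 FPS) instead of stuttering between the two rates. Window size
         * and miss threshold are invented for illustration. */
        #define WINDOW_FRAMES 60   /* judge over the last 60 frames */
        #define MISS_LIMIT    10   /* tolerate up to 10 misses per window */

        int choose_sync_divisor(int misses_in_window, int current_divisor)
        {
            if (misses_in_window > MISS_LIMIT)
                return 2;          /* render every other vblank */
            if (misses_in_window == 0 && current_divisor == 2)
                return 1;          /* consistently on time again: full rate */
            return current_divisor;
        }

    A steady 30 FPS has even frame times, which is the jitter reduction mentioned above.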
    Last edited by gens; 07-27-2014 at 10:33 AM.

  4. #14
    Join Date
    Jan 2013
    Posts
    1,090

    Default

    Michael, I beg you: the next time you test closed-source drivers, please use the very same cards and the very same benchmarks! Please!

  5. #15

    Default

    Quote Originally Posted by brosis View Post
    Michael, I beg you: the next time you test closed-source drivers, please use the very same cards and the very same benchmarks! Please!
    I always do when explicitly running an open vs. closed comparison in a single article, as is the case next week.
