BioShock Infinite Runs Much Faster For RadeonSI On Mesa Git: ~40%

  • #61
    Originally posted by bridgman View Post

    OK, that's fair, but then we're not really talking about power efficiency at that point.

    I don't know what the proper term is, but it's not exactly power efficiency
    Nope, not really with regard to those tests.

    Maybe "workload efficiency", but ultimately something like that boils down to the driver. It's related to: https://en.wikipedia.org/wiki/CPU_power_dissipation

    More to the point of this thread, though, something is definitely not being handled efficiently with BioShock Infinite. The question still remains: are the GPUs using as much power as they would be if they weren't somehow capped? If they are using proportionately less energy to generate these low framerates, then it's not as big of an issue as it could be. If, however, the cards are working their hardest to produce these results, that is very bad indeed. I'm inclined to think it's the former, but I don't know. It's odd enough to me that there is some weird 60-80fps bottleneck at all.



    • #62
      Originally posted by Xen0sys View Post
      The question still remains: are the GPUs using as much power as they would be if they weren't somehow capped? If they are using proportionately less energy to generate these low framerates, then it's not as big of an issue as it could be. If, however, the cards are working their hardest to produce these results, that is very bad indeed. I'm inclined to think it's the former, but I don't know.
      In general, CPU power draw tends to be (small fixed amount + % utilization × large amount), i.e. it tracks utilization almost proportionally. Michael's test results seem to support this view: they don't report on utilization, but power consumption tends to track fps, which suggests that utilization, rather than some other aspect of "efficiency", is the main difference between the drivers.
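That power model can be sketched in a few lines. All of the numbers below are made-up illustrations, not measurements from Michael's tests or any real board:

```python
# Toy version of the power model described above:
# draw ≈ small fixed amount + utilization × large dynamic amount,
# so power tracks utilization (and hence fps, when the GPU is the
# limiter) almost proportionally. Values are invented for illustration.

IDLE_W = 15.0      # small fixed amount (leakage, fans, VRAM refresh)
DYNAMIC_W = 135.0  # large amount scaled by utilization

def gpu_power(utilization: float) -> float:
    """Estimate board power for a utilization fraction in [0.0, 1.0]."""
    return IDLE_W + utilization * DYNAMIC_W

# A driver that keeps the GPU half-stalled draws far less than one
# keeping it busy, consistent with power consumption tracking fps:
print(gpu_power(0.5))   # 82.5
print(gpu_power(0.95))  # 143.25
```

On this model, a capped-but-underutilized GPU drawing proportionately less power would match Xen0sys's "it's the former" guess.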

      Originally posted by Xen0sys View Post
      It's odd enough to me that there is some weird 60-80fps bottleneck at all.
      Utilization might be low because of pure driver CPU usage, but it's more often synchronization-related (app+driver is waiting for work item A to finish on the GPU before queueing work item B).

      In the latter case, either the app needs to change or the driver needs to lie to the app ("sure, that work is done, you can trust me") and then implement some invisible synchronization further down the stack, based on explicit knowledge of how the app is coded, to avoid chaos.

      Problem is that there is usually no standard for that behaviour, so every driver ends up doing it differently. The more of these scary things you do, the better your performance gets, but the more spectacular the bugs become when moving between drivers.
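The two behaviours described above can be contrasted with a toy timeline model. The per-item CPU and GPU costs are invented, and this is not how Mesa/RadeonSI actually schedules work; it only shows why waiting on item A before queueing item B caps utilization, and why an "acknowledge now, synchronize invisibly later" driver keeps the GPU fed:

```python
# Toy model (made-up timings, not a real driver) of an app that blocks
# until work item A finishes on the GPU before queueing work item B,
# versus a driver that reports completion immediately and resolves the
# dependency further down the stack so the GPU queue never drains.

GPU_MS = 4.0  # assumed GPU time per work item
CPU_MS = 1.0  # assumed CPU time to prepare/queue each item
ITEMS = 10

def blocking_frame():
    """App waits for each item before queueing the next: serialized."""
    elapsed = 0.0
    for _ in range(ITEMS):
        elapsed += CPU_MS  # CPU prepares the item while the GPU idles
        elapsed += GPU_MS  # GPU runs it while the CPU blocks on the result
    gpu_busy = ITEMS * GPU_MS
    return elapsed, gpu_busy / elapsed

def overlapped_frame():
    """Driver acks instantly; after the first item, CPU prep hides
    under GPU work and items run back to back."""
    elapsed = CPU_MS + ITEMS * GPU_MS
    gpu_busy = ITEMS * GPU_MS
    return elapsed, gpu_busy / elapsed

for name, (ms, util) in [("blocking", blocking_frame()),
                         ("overlapped", overlapped_frame())]:
    print(f"{name}: {ms:.0f} ms/frame, {util:.0%} GPU utilization")
```

In this sketch the blocking path takes 50 ms per frame at 80% utilization while the overlapped path takes 41 ms at ~98%, which is the sense in which low utilization, not raw inefficiency, can cap framerates.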

      Alternatively, apps could move to a newer API (Vulkan, DX12, etc.) where this kind of behaviour is part of the standard, so standards-compliant drivers can run really fast. But that's just crazy talk, right?
      Last edited by bridgman; 16 August 2016, 08:35 PM.



      • #63
        Originally posted by bridgman View Post
        Alternatively, apps could move to a newer API (Vulkan, DX12, etc.) where this kind of behaviour is part of the standard, so standards-compliant drivers can run really fast. But that's just crazy talk, right?
        Hopefully in the next 1-2 years that starts taking hold universally.
