AMD Radeon HD 6000 Series Open-Source Driver Becomes More Competitive

  • Vegemeister
    replied
    Where's the 2D?

    While there are some Linux gamers, most of us care more about scrolling PDF.js pages without dropping frames in maximized windows, and about driving 2, 3, or more monitors, than about demanding 3D OpenGL games. It would be nice to see the cairo-perf-trace benchmarks become part of all the GPU and graphics stack reviews.

    It doesn't matter how well Quake 3 runs if I can't get vsynced compositing on all screens.
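    For the curious, cairo-perf-trace replays recorded traces of real applications' cairo calls against whatever backend you point it at. A minimal timing harness might look like the sketch below; it assumes the cairo-perf-trace binary is on PATH and that *.trace files (e.g. from the cairo-traces repository) live under the hypothetical path shown.

        # Sketch: replay each cairo trace and report wall-clock time per trace.
        # Assumptions: cairo-perf-trace is installed and on PATH, and
        # TRACE_DIR (a hypothetical path) holds *.trace files.
        import glob
        import subprocess
        import time

        TRACE_DIR = "/usr/share/cairo-traces"

        for trace in sorted(glob.glob(TRACE_DIR + "/*.trace")):
            start = time.monotonic()
            subprocess.run(["cairo-perf-trace", trace], check=True)
            print("%s: %.2f s" % (trace, time.monotonic() - start))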

  • agd5f
    replied
    Originally posted by Ancurio View Post
    I assume most of the missing 50% performance in radeon is not due to "some secret magic performance unlocking code" that catalyst has,
    but rather the accumulated effect of dozens of small optimizations that would make radeon's code unclean if they were applied. Is that a fair assumption?
    Correct. Not only are there a number of 3D driver optimizations that could be done, there are also a lot of memory management optimizations that could be done to improve performance.
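    To give a feel for the kind of memory management optimization meant here, below is a conceptual sketch only, not actual radeon code: recycling GPU buffer objects by size bucket instead of paying a kernel round trip for every allocation. The allocate callback is a hypothetical stand-in for the real allocation path.

        from collections import defaultdict

        class BufferPool:
            """Toy buffer-object cache (illustration only, not radeon code)."""

            def __init__(self, allocate):
                self._allocate = allocate        # stand-in for a kernel allocation call
                self._free = defaultdict(list)   # size bucket -> idle buffers

            def _bucket(self, size):
                # Round the request up to a power of two so buffers are interchangeable.
                return 1 << max(size - 1, 0).bit_length()

            def get(self, size):
                bucket = self._bucket(size)
                if self._free[bucket]:
                    return self._free[bucket].pop()   # cheap: reuse an idle buffer
                return self._allocate(bucket)         # expensive: fresh allocation

            def put(self, buf, size):
                self._free[self._bucket(size)].append(buf)  # keep it for the next frame

    Real drivers complicate this with fencing (a buffer can't be reused while the GPU is still reading it), alignment, and placement domains, which is exactly where the "unclean" complexity creeps in.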

  • Ancurio
    replied
    Originally posted by bridgman View Post
    I think it's more likely that the hand-tweaking optimizations won't happen and the open source driver will stay clean.

    That's what we've been assuming anyways...
    I assume most of the missing 50% performance in radeon is not due to "some secret magic performance unlocking code" that catalyst has,
    but rather the accumulated effect of dozens of small optimizations that would make radeon's code unclean if they were applied. Is that a fair assumption?

  • Ericg
    replied
    Originally posted by bridgman View Post
    I think it's more likely that the hand-tweaking optimizations won't happen and the open source driver will stay clean.

    That's what we've been assuming anyways...
    Is the documentation / knowledge out there so that if a dev WANTED to start hand-tuning, they could? I'm all for the driver staying clean; in my book, understandable and maintainable code is better than hand-tuning the crap out of it and making a mess of the code for an extra few percentage points of performance. I'm just making sure that if someone really really REALLY wanted to, the information is out there, and Mesa / the kernel devs could then decide which path (performance or cleanliness) they want to walk.

  • Krejzi
    replied
    Why is the HD 6450's performance so different (terrible) from the others'? I happen to have a laptop with that card (hybrid setup), but in all cases the Intel card was WAY faster. It's a different story on Windows, though.

  • curaga
    replied
    Nobody else noticed how the 6950 beat Catalyst in Xonotic Ultra? 17% faster. And that's without SB (r600g's optimizing shader backend, which was still opt-in at the time).

  • bridgman
    replied
    Originally posted by Ericg View Post
    Is radeon then going to become a mess of if's and IFDEF's, Bridgman? All that hand-tuning to get every little ounce of performance out of every card, or are the devs thinking it's best to keep the code as clean as possible and just go for the 'middle of the road, good for most but not perfect for all' approach?
    I think it's more likely that the hand-tweaking optimizations won't happen and the open source driver will stay clean.

    That's what we've been assuming anyways...

  • Calinou
    replied
    Originally posted by krasnoglaz View Post
    I don't understand why test target for drivers are decade old shaderless games or opensource relatively light games like Xonotic. Why not Team Fortress 2 and Dota 2?
    No. Just no. Xonotic on Ultra is actually as demanding as TF2, if not more so. You could even play with the Ultimate setting and antialiasing if you wanted.
    Last edited by Calinou; 20 August 2013, 03:17 PM.

  • Ericg
    replied
    Originally posted by bridgman View Post
    It's not so much about focusing on mid-range GPUs, it's just that the mid-range GPUs have the least need for hand-tweaking optimization.

    Low end parts tend to run into memory bandwidth and "tiny shader core" bottlenecks (requiring a lot of complex heuristics), high end parts are so fast that they often get CPU limited before they get GPU limited (requiring a lot of tuning to reduce CPU overhead in the driver), while midrange parts tend to be more balanced and less likely to get badly bottlenecked in a single area.
    Is radeon then going to become a mess of if's and IFDEF's, Bridgman? All that hand-tuning to get every little ounce of performance out of every card, or are the devs thinking it's best to keep the code as clean as possible and just go for the 'middle of the road, good for most but not perfect for all' approach?

  • bridgman
    replied
    Originally posted by schmidtbag View Post
    Anyways, it's pretty exciting to see these test results. I find it interesting how, in terms of GPU performance, it forms a sort of sine wave, where the very low-end cards and the very high-end cards perform the worst. I get the impression the devs focus the most on the mainstream GPUs, since the low-end GPUs aren't good for gaming, and if you want your money's worth from the high-end parts, you're better off using catalyst.
    It's not so much about focusing on mid-range GPUs, it's just that the mid-range GPUs have the least need for hand-tweaking optimization.

    Low end parts tend to run into memory bandwidth and "tiny shader core" bottlenecks (requiring a lot of complex heuristics), high end parts are so fast that they often get CPU limited before they get GPU limited (requiring a lot of tuning to reduce CPU overhead in the driver), while midrange parts tend to be more balanced and less likely to get badly bottlenecked in a single area.
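    As a toy model of why that is, assume (as is typical) that CPU and GPU work overlap, so frame time is roughly max(cpu_ms, gpu_ms). All numbers below are invented purely for illustration:

        def fps(cpu_ms, gpu_ms):
            # Pipelined model: the slower side sets the frame time.
            return 1000.0 / max(cpu_ms, gpu_ms)

        gpu_ms = {"low-end": 40.0, "mid-range": 8.0, "high-end": 4.0}  # invented per-frame GPU cost
        cpu_ms = 10.0  # invented per-frame CPU cost of driver + app

        for name, g in gpu_ms.items():
            # Compare fps before and after halving the driver's CPU overhead.
            print("%s: %.0f -> %.0f fps" % (name, fps(cpu_ms, g), fps(cpu_ms / 2, g)))

        # low-end:   25 -> 25 fps   (GPU-bound: CPU tuning changes nothing)
        # mid-range: 100 -> 125 fps
        # high-end:  100 -> 200 fps (CPU-bound: halving CPU overhead doubles fps)

    This captures the high-end half of the argument; the low-end half (memory bandwidth and shader-core bottlenecks) needs different, heuristic-heavy tuning, as described above. It also lines up with the "sine wave" schmidtbag describes: the mid-range parts are the ones least starved on either side.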
