Originally posted by t.s.
For one, it only uses one of the two GPUs on the HD5970, a card that has two massive GPUs that are more or less separate (separate enough that they need special code in the drivers to access them both at once, and AFAIK the renderer has to be multi-threaded also).
So that cuts its utilization down to 50% immediately because it's only using one of the two GPUs.
Then, of the one GPU that is in use, it achieves between 10% and 40% utilization, depending on the application. These are estimates based on comparing r600g against Catalyst, assuming Catalyst is near-optimal: Catalyst probably extracts about 85% to 95% of each GPU's potential on most workloads. Due to unavoidable overhead it will never hit 100% on real applications, and it will be lower still if system RAM, the CPU, or the HDD is the bottleneck for a given workload.
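To put the two factors above together, here is a quick sketch of the arithmetic. The 10%-40% figures are the estimates from this post, not measured values:

```python
# Rough sketch of the utilization arithmetic (the 10%-40% figures are
# this post's estimates, not measurements).
GPUS_USED = 1                  # r600g drives only one of the HD5970's two GPUs
GPUS_TOTAL = 2
per_gpu_util = (0.10, 0.40)    # estimated share of the active GPU actually used

# Effective utilization of the whole card:
card_util = tuple(u * GPUS_USED / GPUS_TOTAL for u in per_gpu_util)
print(f"Effective HD5970 utilization: {card_util[0]:.0%} to {card_util[1]:.0%}")
# → Effective HD5970 utilization: 5% to 20%
```

So the driver is extracting somewhere around a twentieth to a fifth of what the card can actually do.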
It's this low utilization that allows the Ivy Bridge GPU to look so good. I figure the current Intel drivers are using at least 50% of the Ivy Bridge GPU, even in the worst case. And it only gets better if your cooling solution can handle overclocking, or if you run an application that hits only well-optimized paths in the driver.
I just didn't want you thinking that the Ivy Bridge hardware is so fast that it literally has more power than AMD's hardware. It doesn't, by a long shot, even if you're comparing it against their Fusion APUs. But when you use what you've got, you get much better results than when you wantonly waste it.
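A back-of-the-envelope comparison of *delivered* throughput illustrates the point. The peak single-precision figures below are approximate published specs (assumed: roughly 4640 GFLOPS for the HD5970 across both GPUs, roughly 294 GFLOPS for the Ivy Bridge HD 4000); the utilization percentages are the estimates from this post:

```python
# Back-of-the-envelope delivered-throughput comparison.
# Peak figures are approximate published specs (assumptions, not measurements);
# utilization factors are this post's estimates.
hd5970_peak = 4640.0   # GFLOPS, both GPUs combined (approximate)
ivb_peak = 294.0       # GFLOPS, HD 4000 (approximate)

hd5970_delivered = hd5970_peak * 0.5 * 0.10   # one of two GPUs, 10% utilized
ivb_delivered = ivb_peak * 0.5                # ~50% utilized

print(f"HD5970 delivered (worst case): {hd5970_delivered:.0f} GFLOPS")
print(f"Ivy Bridge delivered:          {ivb_delivered:.0f} GFLOPS")
# Even at its worst-case utilization the HD5970 still delivers more raw
# throughput; the benchmark gap comes from wasted hardware, not weak hardware.
```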
It's been a battle ever since AMD started their open source initiative: how do we open up the bottlenecks in the driver so that, ultimately, the only component that gets bottlenecked is the GPU (except at extremely high FPS, where the CPU always becomes the bottleneck)? When you've attained that, your driver is "ready" for prime time. AMD's drivers are a long, long, LONG way from that right now.