EXA testing was used. Sorry if that wasn't clear in the article.
It would be interesting to test XAA vs. EXA as well. I've noticed that with at least some Windows applications under Wine, EXA has horrible 2D performance: with EXA the CPU sits at 100% usage at 2.1GHz, while with XAA it's only around 10% with the CPU running at 796MHz. The X server (Xorg 1.5.2) takes nearly all the CPU time when EXA is in use.
To me it seems like either Wine is too aggressively optimized for XAA, or EXA has some hidden performance penalty for Wine's poor drawing code. This is surprising, because everything else visibly speeds up when I turn EXA on on my system.
I'm using an R200-series Mobility Radeon.
Maybe EXA just isn't good for R200, who knows. I experience the same performance drop you had, and on top of that EXA causes problems with Mesa and with native apps. Try ppracer with the FPS counter turned on and you'll see 0 (zero) FPS at the end of each level.
All of this works fine with XAA.
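For anyone who wants to try the comparison on their own box: the radeon driver lets you pick the acceleration architecture via the AccelMethod option in xorg.conf. A minimal sketch of the Device section (the Identifier string is just a placeholder):

```
Section "Device"
    Identifier "Configured Video Device"   # placeholder name
    Driver     "radeon"
    Option     "AccelMethod" "XAA"         # or "EXA"
EndSection
```

Restart X after changing it; the Xorg log should report which acceleration architecture was actually initialized.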
17 out of 28?? All that time was spent on an article to reach that determination? Who exactly are the intended beneficiaries of this info, when most of us can't even get 8.12 installed and working properly under Linux? Just have a look at your own forums.
This article would have hit the spot if the majority of us had the option of using either the proprietary or the open-source drivers with equal ease. Until we get there, a much better article would have been a comprehensive guide to getting 8.12 working with a particular kernel on a particular distro. Don't get me wrong, I appreciate articles like this, but at this point in time the 2D issue is a non sequitur.
Occasionally you'll see statements like "most stable driver ever" or "never had a problem with it, actually", but they just get overwhelmed by people writing five or more posts about their problems, and so you get the impression that fglrx doesn't work on most systems.
And honestly, if you've got an X600 with a PCI-AGP bridge (uhm... if those even existed for the X600 family, but you get the idea), it's quite probable that such chips aren't tested all that well (apart from the fact that it's simply too much maintenance work for most vendors).
Another point is that many people first try to generate rpm or deb packages, e.g. because they keep the system cleaner. I, for example, never really managed to get the driver running that way. On the other hand, since I've been using fglrx's automated installer, installation has been faster and more reliable than any other method (even the livna repos gave me problems at some point).
Xorg's log file should confirm that scaling is indeed enabled once you specify the DynamicClocks option in the Device section. It does not do that in my case and I have absolutely no idea what to do next :-(
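In case it helps anyone reading along, the option I mean goes into the Device section of xorg.conf, something like this (the Identifier is just a placeholder):

```
Section "Device"
    Identifier "Radeon"                  # placeholder name
    Driver     "radeon"
    Option     "DynamicClocks" "on"      # enable dynamic clock scaling
EndSection
```

After restarting X, the Xorg log should at least mention the option being picked up; if it doesn't appear in the log at all, the driver never saw it.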
Would these GTK test programs show meaningful/comparable numbers if ported to Windows? Is there any way to tell whether the Linux drivers are really squeezing all the performance out of the hardware, or whether the Windows drivers still have some unexplored tricks to leverage?