AMD Catalyst vs. Linux 3.7 + Mesa 9.1-devel Gallium3D Performance
Run the same tests on the same kernel. I'm betting the proprietary driver stomps all over it. You could run it on Linux 3.6.6 against the 3.7 trunk and see how much less impressive the open source driver looks, but you know that already.
I applaud the efforts that have been made with the radeon driver. It came from nothing to offering good stability and a good subset of the OpenGL requirements for modern 3D gaming. But the last time I tested it on my laptop with a **real** game (S.T.A.L.K.E.R.: SOC via Wine) I got something like 6 FPS (Catalyst gives more like 30 FPS). The lack of full shadows isn't too noticeable, but the lamentable framerate is...
Since I have a (very recent!!) legacy 4650M, I guess most distros are gradually going to stop supporting it (my Arch and Gentoo installs might limp on longer than the likes of Ubuntu, where xorg-server updates are the killer).
I only have an AMD GPU in my laptop because Nvidia had nothing decent when I bought it. While the Catalyst driver was very slowly getting better over the years, I'm now stuck on some stupid legacy driver (limiting my xorg-server and kernel updates).
Trouble is, I can't help looking at my truly ancient desktop GeForce 8800 GTX with the bleeding-edge 310.xx Nvidia beta driver, working like a champ and playing Black Mesa Source (via Wine) rather well!
I guess the explanation for such a consistent gap between the open source driver and fglrx is one single fundamental design difference between the two drivers, one that causes inefficiency across the kernel/Mesa/xf86-video-ati stack.
If TTM is the culprit, why aren't there any patches already (after 4 years!) addressing the buffer migration issue? AFAIK, that's what Chris W. did with SNA for the Xorg Intel driver.
I expect the performance gap is caused by a dozen or more design differences, not just one. Just a few off the top of my head:
- tiling (mostly done AFAIK)
- hyper-z (started)
- shader compiler (WIP)
- threading (command submission runs in a separate thread on r300g; not sure about r600g)
- memory manager heuristics (this is what would probably help with buffer migration)
- adaptive load balancing
- adaptive memory reconfiguration
BTW I don't think Marek is saying "we've known this for 4 years", I think he's saying "I just looked recently and the problem with these specific applications seems to be buffer migration".
If the developers believed that one single issue was responsible for most of the performance differences then I think it's pretty safe to assume they would be all over it, but I don't think that is the case.
Some apps (typically the very slowest) may have one single issue that contributes most of the slowdown RELATIVE TO OTHER APPLICATIONS, but that's not the same as saying one single issue accounts for most of the performance gap between the open source stack and the Catalyst stack.