Radeon Gallium3D Still Long Shot From Catalyst
-
I think it's very promising. The Xonotic benchmarks are very pleasing.
My guess from the benchmarks is that there is still some stuff falling back to software that is killing performance in certain cases. With some optimization to applications and by filling in some missing pieces in the drivers, we'll be golden. Once open source gets within about 70-80% of proprietary, I'd call it a success.
Comment
-
Originally posted by mattst88
Regrettably, this doesn't work. When you profile an apitrace replay, you find that a huge portion of the profile is simply apitrace parsing the multi-gigabyte trace file.
Just did a quick run with vdrift: about 2 min, 130 MB trace. Frame rate without tracing is about 22 fps, with tracing 17 fps, and retracing 15 fps (68% of the original). Are my results atypical?
As I see it, the slowdown would be the same for all benchmarked cards, and we are interested in the relative performance only.
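The "relative performance only" point can be sanity-checked with a quick sketch: if tracing costs roughly the same fraction of frame rate on every card, the ratio between cards survives the overhead. The 22 fps / 15 fps vdrift numbers are from this post; the second card's figures are hypothetical.

```python
# Sketch: a roughly multiplicative tracing overhead preserves the
# relative ranking between cards. The vdrift numbers (22 fps native,
# 15 fps retraced) are from this post; "card_b" is hypothetical.
native = {"card_a": 22.0, "card_b": 44.0}  # fps without tracing
overhead = 15.0 / 22.0                     # ~68% of native fps when retracing

retraced = {card: fps * overhead for card, fps in native.items()}

ratio_native = native["card_b"] / native["card_a"]
ratio_retraced = retraced["card_b"] / retraced["card_a"]

print(ratio_native, ratio_retraced)  # the two ratios are identical
```

Of course this assumes the overhead really is a uniform multiplier; if one card's driver is disproportionately hurt by the CPU-side replay work mattst88 describes, the ratio would shift.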
Comment
-
Originally posted by drag
I think it's very promising. The Xonotic benchmarks are very pleasing.
My guess from the benchmarks is that there is still some stuff falling back to software that is killing performance in certain cases. With some optimization to applications and by filling in some missing pieces in the drivers, we'll be golden. Once open source gets within about 70-80% of proprietary, I'd call it a success.
After adding HiZ and chasing down some further performance bottlenecks in the open source code, performance can be expected to reach perhaps 80% of the closed binary drivers. Since almost no one needs 200 fps, and the difference between 160 fps and 200 fps is all but imperceptible anyway, the performance issue with open source drivers will essentially be solved.
Comment
-
Bridgman, given that GCN moved to hardware scheduling, I assume the lack of an advanced compiler in Mesa becomes less of a bottleneck. How would you estimate the effect of that move?
E.g. do you see GCN cards getting to 80% of Catalyst where earlier generations get 70%, etc.?
Comment
-
Yeah, I don't have any real numbers, but from a pure shader-compiler POV my guess is that half the gap between the open source and proprietary drivers might go away with GCN.
For compute the impact will probably be even greater (since graphics is naturally short-vector work while compute is naturally scalar). We're also picking up some compiler improvements at the same time by using LLVM, so it could get interesting.
The bigger question is how much of the performance delta today comes from the shader compiler rather than things like HyperZ, since the impact of both increases with display resolution.
Last edited by bridgman; 24 March 2012, 12:03 PM.
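For concreteness, "half the gap might go away" works out like this. The 70% starting point is just the figure floated earlier in the thread, not a measurement:

```python
# Hypothetical arithmetic for "half the gap goes away":
# if the open driver sits at 70% of Catalyst, the gap is 30 points,
# and closing half of it lands at 85%.
open_pct = 70.0          # assumed pre-GCN open/closed ratio (from the thread)
gap = 100.0 - open_pct   # 30 points
after_gcn = open_pct + gap / 2
print(after_gcn)  # 85.0
```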
Comment
-
I've always wondered why ATI/AMD doesn't just hire an additional five developers for OSS development. I'd assume that it would take 6 months of training to get them to the point where they could produce something useful, but we'd see real results by the end of a year, and have a performant replacement for Catalyst in two years.
JB,
What's the deal with that? $750k buys a team for two years. Does the revenue from Linux-related sales not justify the cost? (Admittedly, I have no idea how much of AMD's revenue is generated via Linux-related sales, nor do I understand how your SD org is run.) I do know that disappointed customers are far less likely to make subsequent purchases, so this is probably something that should have been done a couple of years ago, when Gallium was coming about.
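The $750k figure squares with the five-developer suggestion above if you assume a fully loaded cost of about $75k per developer per year; that rate is my assumption, not anything quoted in the thread:

```python
# Hypothetical budget check: 5 developers for 2 years at an assumed
# fully loaded cost of $75k per developer per year.
developers = 5
years = 2
cost_per_dev_year = 75_000  # assumed rate, not from the post
total = developers * years * cost_per_dev_year
print(total)  # 750000
```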
On a slightly related note, I'm a bit disheartened to see everyone working so hard on legacy technology. I really thought we would all have 10-bit/channel monitors by now. I really thought we would all have ray tracing by now. I really thought 'everyone' would be able to play back a 1080p Main Profile H.264 file by now. Even if I had one of the dozen 10-bit/channel panels on the market, I doubt I'd be able to drive the thing with X/Mesa (I could be totally wrong). I don't want to diminish the efforts of everyone working on radeon, but when the next CG generation or innovation becomes mainstream, we're going to be back at the starting line again.
What a strange world we live in.
F
Comment