The Warsow benchmark alone is worth a party. Thanks, guys. You are the best.
ATI's Gallium3D Driver Is Still Playing Catch-Up
-
What about adding general apps instead of so many games? Blender, Wings, MeshLab... pick some big 3D models and check how many FPS the app can sustain while displaying them, or pick an FPS limit like 25 and check how many polygons it takes before the FPS counter drops below that. Or even check whether the drivers support all the features the apps request, or have to use software fallbacks (radeon and stipples, anyone?).
Some of those apps even include benchmarks already, and they can also be used for rendering tests, not just OpenGL. Generic tests (beyond this article) could include rendering workloads with Aqsis, POVRay...
Yeah, some of us don't focus on games.
Comment
-
-
Performance-wise, one of the most appealing traits of Gallium3D was (or is) the use of LLVM as a shader optimizer. But as far as I know, it isn't something that is being used for now.
I did read something about implementing it, its shortcomings, and things that had to be worked around to use it. So I wanted to ask, and everyone who cares about performance (I certainly do) should want to know: how is that being handled for r300g?
I see great potential for it. Clang (based on LLVM) compiles small programs (still larger than a shader, of course) in about 5 seconds, with output speed equal to or greater than a recent GCC. That seems feasible for shaders and the like. And I'd think LLVM is capable of better optimizations than ATI's FGLRX driver, isn't it?
So my question is: how is implementing LLVM being considered, and how could it affect performance? Bridgman and Marek, care to elaborate?
Comment
-
Originally posted by marek View Post
No, r300g doesn't support stippling, and it behaves as if stippling is always disabled. Do you need it? There are no software fallbacks (and we don't plan any), so if a feature works, it runs at full speed.
Please consider that games and a handful of minor apps aren't the only things drivers should work with; try "serious work" apps too, like 3D suites, CAD, and similar tools. My impression (or should I say experience?) is that games just go for eye candy, while work apps are the ones that really show the quality of drivers and cards: they cover all the fields (Blender has GLSL, for example) and can push a computer to its limit (they won't complain if your model has 1M polys; your computer and card will give out first). They also do things like selection operations that rarely or never happen in games. Etc, etc...
If you don't know how to use those programs, ask their communities; we will gladly help get the drivers to a usable level. Thanks for the work so far, in any case.
Comment
-
Originally posted by WillyThePimp View Post
So my question is: how is implementing LLVM being considered, and how could it affect performance? Bridgman and Marek, care to elaborate?
As MostAwesomeDude said, the existing shader compiler in the r300 and r300g drivers seems pretty good; it might not be as good as the one in fglrx, but my guess is that it's pretty close.
Right now there are no signs that the shader compiler is the bottleneck. The performance graphs indicate that the driver is CPU-limited quite a bit of the time, and as airlied said, there are also some GPU optimizations still to be done.
LLVM may turn out to be more useful for single-instruction architectures like the Intel and NVidia GPUs; not sure.
Comment
-
Originally posted by bugmenot View Post
Yes, it's needed by programs like Blender, which uses stippled lines to distinguish different kinds of things in the 3D view. I know there are software fallbacks in r300classic, and with stippling disabled the interface can't convey full information to the user, so it really matters. The same happens in other CAD programs.
Please consider that games and a handful of minor apps aren't the only things drivers should work with; try "serious work" apps too, like 3D suites, CAD, and similar tools. My impression (or should I say experience?) is that games just go for eye candy, while work apps are the ones that really show the quality of drivers and cards: they cover all the fields (Blender has GLSL, for example) and can push a computer to its limit (they won't complain if your model has 1M polys; your computer and card will give out first). They also do things like selection operations that rarely or never happen in games. Etc, etc...
If you don't know how to use those programs, ask their communities; we will gladly help get the drivers to a usable level. Thanks for the work so far, in any case.
The reason that we avoided stipples is a combination of not many apps using it and the fact that we don't have it working in HW yet. I'll put more effort towards it and see if we can get something working soon.
As far as large vert counts go, we do just fine on that front; if you've got a real-world problem there, let us know.
Comment