Gallium3D Gets New Geometry Shader Support
-
The problem is that geometry shaders come after vertex shaders in the pipeline, so you can't easily "preprocess the geometry shader work" and pass the rest to the GPU.
That was my reason for suggesting that GS be exposed by default when doing vertex processing on the CPU - that would give a "somewhat accelerated" compromise which might both perform OK and be easy to implement & use.
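To make that concrete, here is a minimal sketch of the compromise, assuming both the vertex and geometry stages are emulated on the CPU and only the final, expanded vertices are uploaded to the GPU. The toy wireframe-expanding "geometry shader" and every name in it are illustrative, not actual Mesa/Gallium3D code:
Code:
#include <stdlib.h>

struct vec4 { float x, y, z, w; };

/* CPU "vertex shader": a trivial pass-through, standing in for SW TCL. */
static struct vec4 run_vs(struct vec4 v) { return v; }

/* CPU "geometry shader": expand each triangle into its 3 edges
 * (6 line-list vertices), writing into system memory only. */
static struct vec4 *run_gs_wireframe(const struct vec4 *in, int n_tris)
{
    struct vec4 *out = malloc(sizeof(*out) * (size_t)n_tris * 6);
    for (int t = 0; t < n_tris; t++) {
        const struct vec4 *v = &in[t * 3];
        struct vec4 *o = &out[t * 6];
        for (int e = 0; e < 3; e++) {           /* edges 0-1, 1-2, 2-0 */
            o[e * 2 + 0] = run_vs(v[e]);
            o[e * 2 + 1] = run_vs(v[(e + 1) % 3]);
        }
    }
    return out;   /* the caller uploads this once (e.g. glBufferData)
                   * and draws it as GL_LINES; the GPU takes over here */
}
Since vertex processing is on the CPU anyway, the extra geometry pass adds no CPU/GPU round trip - the data still crosses the bus exactly once.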
-
Originally posted by bridgman View Post
Yep. In general you can shift the point where processing passes from CPU to GPU (albeit with a performance penalty) but going back and forth between CPU and GPU is almost always a Bad Thing.
PS: Sorry... pipeline... *ducks and runs*
Last edited by V!NCENT; 30 December 2009, 06:35 PM.
-
Yep. In general you can shift the point where processing passes from CPU to GPU (albeit with a performance penalty) but going back and forth between CPU and GPU is almost always a Bad Thing.
CPUs are fast when working on data in system memory; GPUs are fast when working on data in video memory. Asking the CPU to work on something in video memory results in truly awful delays, 10x-50x slower than you would expect from just doing the work on the CPU.
Mixing GPU and CPU processing is a bit less painful on IGPs with shared memory, because (a) CPU access to "video memory" is faster and (b) GPU access to "video memory" is slower than on GPUs with dedicated video memory, but this does not generalize to discrete GPUs at all.
It is certainly possible to write a driver which could do some back-and-forth processing efficiently, but it would require that the entire stack be designed up-front to deal with those cases, and would litter complexity all through the stack. In a proprietary driver this is sometimes possible, since you only have to support a single vendor and have access to future hardware plans, but for an open source driver this seems impractical.
So... forcing geometry shading on by default when doing vertex processing on the CPU ("SW TCL") could work, assuming the memory manager could be directed to keep vertex textures in system memory, but anything else is probably a non-starter.
The same applies to video processing, by the way - doing the front part of the decode stack on the CPU and the rest on the GPU works well, but anything else tends to be very slow.
Last edited by bridgman; 30 December 2009, 11:51 AM.
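As a rough illustration of the penalty, here is a minimal OpenGL sketch contrasting a synchronous readback (which drains the pipeline and stalls both sides) with one staged through a pixel buffer object; it assumes a GL 2.1-class context where the buffer-object entry points are available, and omits error handling:
Code:
#include <GL/gl.h>
#include <GL/glext.h>   /* GL_PIXEL_PACK_BUFFER on older headers */

/* Synchronous readback: the driver must drain the whole GPU pipeline
 * before it can copy, so the CPU and GPU both sit idle meanwhile. */
void readback_blocking(int w, int h, unsigned char *dst)
{
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, dst);
}

/* Staged readback: the copy lands in a buffer object that stays under
 * driver control; the CPU only blocks when it finally maps the buffer,
 * ideally a frame or two later. */
void readback_staged(GLuint pbo, int w, int h)
{
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, (void *)0);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}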
-
Originally posted by V!NCENT View Post
So can everything in Mesa/Gallium3D be mixed, software and hardware acceleration at the same time?
In practice: no, not if you need fast 3D.
Doing a single small step in software always incurs a performance penalty: all involved textures have to be moved to main memory, the rendering step has to be performed by the CPU, and then everything has to be moved back. The GPU cannot do any further work on those textures in the meantime and will probably sit idle for the duration.
Now remember that a GPU has a pretty long pipeline, and that you'll usually have to flush that pipeline before you can move a render target into CPU space, and you'll see that this is pretty much infeasible for many 3D operations. Doing a whole bunch of work in software can be fine, but tightly interleaving many software and hardware operations may easily end up slower than full software rendering.
Geometry or vertex shaders may work, since they're pretty early in the pipeline. Every drawing command starts at the CPU (in the application), is processed a little in the drivers (still on the CPU) and is then passed on to the GPU. That driver-side preprocessing can do some additional steps before handing the data to the GPU without incurring additional copies.
(Note that there are exceptions, e.g. vertex shaders with texture lookups when the texture was written to by the GPU beforehand - see the sketch below.)
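A minimal sketch of that exception, assuming plain OpenGL with the texture bound to GL_TEXTURE_2D: before a CPU-side vertex shader can sample a texture the GPU has just rendered to, the driver has to pull the texels back into system memory - exactly the round trip warned about above:
Code:
#include <GL/gl.h>
#include <stdlib.h>

float *fetch_texture_for_cpu_vs(int w, int h)
{
    float *texels = malloc(sizeof(float) * 4 * (size_t)w * (size_t)h);
    /* Stalls until the GPU finishes writing the texture, then copies
     * it across the bus into system memory. */
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, texels);
    return texels;   /* the CPU vertex shader can now do its lookups */
}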
-
Originally posted by smitty3268 View Post
2. AFAIK, the original decision to add support to all the drivers was reversed, because some of the other developers didn't want to advertise support for a feature that would be so slow on their cards since it would have to use a software fallback.
-
1. The whole reason they were talking about adding geometry shader support to all drivers was that the software support for them was already done. If the hardware doesn't support it, or no one has written the hardware support into a driver yet, it can automatically fall back to using a vertex shader plus shared routines in the draw module, as in the sketch below.
2. AFAIK, the original decision to add support to all the drivers was reversed, because some of the other developers didn't want to advertise support for a feature that would be so slow on their cards since it would have to use a software fallback.
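Roughly, that shared fallback decision might look like this. All names in it (hw_caps, bind_hw_geometry_shader, bind_draw_module_gs) are invented for illustration and are not the real Gallium3D interfaces:
Code:
#include <stdio.h>

struct hw_caps { int has_geometry_shaders; };

/* Hypothetical native path: the GPU runs the geometry shader. */
static void bind_hw_geometry_shader(const void *gs)
{
    (void)gs;
    printf("GS will run on the GPU\n");
}

/* Hypothetical software path: shared draw-module code runs the
 * geometry shader on the CPU and feeds expanded vertices onward. */
static void bind_draw_module_gs(const void *gs)
{
    (void)gs;
    printf("GS will run on the CPU via the shared draw module\n");
}

/* One decision point, shared by every driver: advertise the feature,
 * route it to hardware when possible and to software when not. */
void bind_geometry_shader(const struct hw_caps *caps, const void *gs)
{
    if (caps->has_geometry_shaders)
        bind_hw_geometry_shader(gs);
    else
        bind_draw_module_gs(gs);
}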
-
Well, even the latest graphics hardware has dedicated fixed-function parts; some of them are:
- rasterizer (comes before the pixel shader)
- blender and output merger (come after the pixel shader)
- tessellator (between the hull and domain shaders)
- texture units
The first three are not accessible in OpenCL. Also, in my experience, hardware interfaces appear to be designed tightly around the major 3D and compute APIs. You can't schedule the shader cores directly, nor implement any other kind of shader the hardware wasn't designed for.
~ Marek
Last edited by marek; 29 December 2009, 11:37 AM.
-
Originally posted by Eosie View Post
Originally posted by Remco View Post
If it gets implemented it won't be a bad thing. I think Eosie was just concerned that nobody would care enough, leaving you with a broken driver.
At any rate, can't every card which supports OpenCL also support any new kind of shader that Microsoft can come up with? I'm not completely sure, but isn't a modern graphics card just a ridiculously parallel pipelined processor without dedicated parts, making OpenGL and OpenCL just abstraction layers?
For cards that don't support OpenCL, I don't think implementing geometry shaders will be very useful. They won't be fast enough to run them at an acceptable framerate anyway. The same goes for any shader on cards that don't support GLSL. That will just kill the performance. That may be why nobody cares about implementing it.
Note that many IGPs don't run vertex shaders in hardware, yet they still manage acceptable performance for simple tasks. It's not too much of a stretch that geometry shaders might also perform adequately, given that the relevant hardware on the GPUs isn't terribly fast either.
In any case, something is better than nothing. For one, I'd rather have Unigine Tropics run at 1 fps than not run at all. One small step at a time!
-
Originally posted by BlackStar View Post
You most certainly did say "must" in your post:
I simply cannot see how a software fallback for geometry shaders could be a bad thing. As far as I can tell, this code can be shared between all drivers and the effort, non-trivial as it might be, will certainly help the OpenGL stack move forward as a whole (more so than, say, implementing geometry shaders for R600+).
At any rate, can't every card which supports OpenCL also support any new kind of shader that Microsoft can come up with? I'm not completely sure, but isn't a modern graphics card just a ridiculously parallel pipelined processor without dedicated parts, making OpenGL and OpenCL just abstraction layers?
For cards that don't support OpenCL, I don't think implementing geometry shaders will be very useful. They won't be fast enough to run them at an acceptable framerate anyway. The same goes for any shader on cards that don't support GLSL. That will just kill the performance. That may be why nobody cares about implementing it.
-
Originally posted by BlackStar View Post
Do you have a link for the developer discussion on this topic?
~ Marek