Nouveau Gallium3D Now Supports OpenGL 3.2, 3.3
Originally posted by imirkin View Post
It's unlikely that NV50 would get GL4.0 -- that requires tessellation shaders, which NV50-class cards just don't have. The proprietary driver also only goes up to GL3.3 (for nv50), but it does expose a bunch of GL4-era extensions that are possible to implement using the hardware available. (Nouveau is definitely behind in that regard, but working on it!)
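For context, an application can discover those GL4-era extensions individually instead of relying on the advertised GL version. A minimal sketch of that query on a core-profile context, assuming a current GL >= 3.0 context and some extension loader; the extension name checked in the usage comment is just an example:

/* Check for an individual extension on a GL >= 3.0 core context.
 * Assumes a loader (libepoxy, GLEW, ...) provides glGetStringi. */
#include <string.h>
#include <epoxy/gl.h>

static int has_extension(const char *name)
{
    GLint n = 0, i;
    glGetIntegerv(GL_NUM_EXTENSIONS, &n);
    for (i = 0; i < n; i++) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, i);
        if (ext && strcmp(ext, name) == 0)
            return 1;
    }
    return 0;
}

/* Usable even when the context only reports GL 3.3, e.g.: */
/* if (has_extension("GL_ARB_separate_shader_objects")) ... */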
Originally posted by werfu View Post
Couldn't tessellation be implemented manually using an OpenCL kernel? That's what I was thinking of. It would be slower than having it hardware accelerated, but it could still be possible in software.
I don't think you can reasonably take the output of a vertex shader, use it as input to a CL kernel, and then feed that output into a fragment shader. (I think that's the pipeline, right?) At least not with any kind of reasonable speed.
Drivers shouldn't fake hardware support when it can't be done fast. Otherwise, it's impossible for applications to tell whether or not something works well, and you end up with applications calling all sorts of fancy API calls that slow the application down to sub-1fps speeds. Whereas if the driver correctly states that something isn't implemented, the application can use an older technique or just skip the extra calls entirely, and you end up with an application running at reasonable speeds.
If your hardware driver doesn't support a new enough API, you should just use LLVMPipe from the start, because it's likely to be just as fast as a hardware driver falling back to software paths.
Last edited by smitty3268; 27 January 2014, 04:31 PM.
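That capability check is exactly what drives an application's own fallback path. A hypothetical sketch, reusing the has_extension() helper from above; the two render-path function names are made up for illustration, not real API:

/* App-side fallback: pick a rendering path based on what the driver
 * actually advertises, rather than assuming a fast implementation. */
if (has_extension("GL_ARB_tessellation_shader")) {
    draw_with_hw_tessellation();   /* hypothetical GL4-class path */
} else {
    draw_pre_subdivided_mesh();    /* hypothetical older technique: CPU-subdivided geometry */
}

As for the LLVMPipe suggestion, Mesa honors LIBGL_ALWAYS_SOFTWARE=1 to force the software rasterizer from the start.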
Originally posted by pali View Post
And will there be any new support for nv40 cards? Is GL_EXT_draw_buffers2 possible (which is needed by the Source engine)?
As for predictions on future development for the nv30 driver -- it's anyone's guess. I may look at it later, but right now I'm playing around with the nv50 driver and don't have an nv30/40 card plugged in. If you're at all interested in trying this yourself, join us in #nouveau on freenode and we'll give you some pointers. (No prior GPU/OpenGL/etc. experience necessary, but programming experience is.)
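For reference, GL_EXT_draw_buffers2 is the extension that lets blending and color write masks be set per draw buffer instead of globally, which is roughly what a Source-engine-style multi-render-target setup needs. A minimal sketch of the per-buffer calls the extension adds, assuming its function pointers are already loaded:

/* GL_EXT_draw_buffers2: per-draw-buffer blend enable and color mask.
 * Without it, glEnable(GL_BLEND) applies to every color attachment at once. */
glEnableIndexedEXT(GL_BLEND, 0);   /* blend into color attachment 0 */
glDisableIndexedEXT(GL_BLEND, 1);  /* but not into attachment 1 */
glColorMaskIndexedEXT(1, GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE); /* no alpha writes to attachment 1 */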
Originally posted by smitty3268 View Post
I don't think you can reasonably take the output of a vertex shader, use it as input to a CL kernel, and then feed that output into a fragment shader. (I think that's the pipeline, right?) At least not with any kind of reasonable speed.
Drivers shouldn't fake hardware support when it can't be done fast. Otherwise, it's impossible for applications to tell whether or not something works well, and you end up with applications calling all sorts of fancy API calls that slow the application down to sub-1fps speeds. Whereas if the driver correctly states that something isn't implemented, the application can use an older technique or just skip the extra calls entirely, and you end up with an application running at reasonable speeds.
If your hardware driver doesn't support a new enough API, you should just use LLVMPipe from the start, because it's likely to be just as fast as a hardware driver falling back to software paths.
Pipeline: Vertex Shader -> Tessellation (Control Shader -> Tessellator -> Evaluation Shader) -> Geometry Shader -> Rasterization -> Fragment Shader
Get the vertex shader result into a transform feedback buffer. Process the buffer with an OpenCL tessellation kernel (translating the tessellation shaders to OpenCL would be the hard part, I guess...). Then feed the buffer through a pass-through vertex shader to finally draw the geometry. A rough sketch of that round trip follows below.
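That round trip can at least be expressed with standard transform feedback plus cl_khr_gl_sharing buffer interop. A heavily abbreviated sketch, assuming a GL context, a CL context created with GL sharing enabled, a capture program whose output varying is named "tf_position", and a hypothetical tess_kernel; buffer allocation and error checking are omitted:

/* Step 1: capture vertex shader output via transform feedback. */
const char *varyings[] = { "tf_position" };           /* varying name is an assumption */
glTransformFeedbackVaryings(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog);                                  /* must relink after setting varyings */

glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tf_buf);
glEnable(GL_RASTERIZER_DISCARD);                      /* skip rasterization on this pass */
glBeginTransformFeedback(GL_TRIANGLES);
glDrawArrays(GL_TRIANGLES, 0, num_verts);
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);
glFinish();                                           /* crude GL -> CL sync */

/* Step 2: hand the buffers to OpenCL and run the tessellation kernel. */
cl_mem in_mem  = clCreateFromGLBuffer(cl_ctx, CL_MEM_READ_ONLY,  tf_buf,  &err);
cl_mem out_mem = clCreateFromGLBuffer(cl_ctx, CL_MEM_WRITE_ONLY, out_buf, &err);
cl_mem bufs[2] = { in_mem, out_mem };
clEnqueueAcquireGLObjects(queue, 2, bufs, 0, NULL, NULL);
clSetKernelArg(tess_kernel, 0, sizeof(cl_mem), &in_mem);
clSetKernelArg(tess_kernel, 1, sizeof(cl_mem), &out_mem);
size_t gws = num_verts;
clEnqueueNDRangeKernel(queue, tess_kernel, 1, NULL, &gws, NULL, 0, NULL, NULL);
clEnqueueReleaseGLObjects(queue, 2, bufs, 0, NULL, NULL);
clFinish(queue);                                      /* crude CL -> GL sync */

/* Step 3: draw the expanded geometry with a pass-through vertex shader. */
glBindBuffer(GL_ARRAY_BUFFER, out_buf);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
glDrawArrays(GL_TRIANGLES, 0, num_out_verts);

Even with zero-copy buffer sharing, the two full synchronization points per frame are exactly the kind of stall smitty3268 is warning about.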