
Thread: Nouveau Gallium3D Now Supports OpenGL 3.2, 3.3

  1. #11
    Join Date
    Feb 2011
    Location
    France
    Posts
    214

    Question

    Quote Originally Posted by agd5f View Post
    Testers wanted?

  2. #12
    Join Date
    Sep 2010
    Posts
    55

    Default

    Quote Originally Posted by imirkin View Post
    It's unlikely that NV50 would get GL4.0 -- that requires tessellation shaders, which NV50-class cards just don't have. The proprietary driver also only goes up to GL3.3 (for nv50), but exposes a bunch of GL4-era extensions that are possible to implement with the available hardware. (Nouveau is definitely behind in that regard, but working on it!)
    Couldn't tessellation be implemented manually using an OpenCL kernel? That's what I was thinking of. It would be slower than having it hardware-accelerated, but it should still be possible in software.

  3. #13
    Join Date
    Jan 2014
    Posts
    2

    Unhappy too late

    Just when I unmerged it.

  4. #14
    Join Date
    Aug 2011
    Posts
    75

    Default

    Quote Originally Posted by imirkin View Post
    It's unlikely that NV50 would get GL4.0 -- that requires tessellation shaders, which NV50-class cards just don't have. The proprietary driver also only goes up to GL3.3 (for nv50), but exposes a bunch of GL4-era extensions that are possible to implement with the available hardware. (Nouveau is definitely behind in that regard, but working on it!)
    And will there be any new support for nv40 cards? Is GL_EXT_draw_buffers2 possible (which is needed by the Source engine)?

  5. #15
    Join Date
    Oct 2008
    Posts
    3,173

    Default

    Quote Originally Posted by werfu View Post
    Couldn't tessellation be implemented manually using an OpenCL kernel? That's what I was thinking of. It would be slower than having it hardware-accelerated, but it should still be possible in software.
    I don't think you can reasonably take the output of a vertex shader, use it as input to a CL kernel, and then feed that output into a fragment shader. (I think that's the pipeline, right?) At least not with any kind of reasonable speed.

    Drivers shouldn't fake hardware support when it can't be done fast. Otherwise it's impossible for applications to tell whether or not something works well, and you end up with applications making all sorts of fancy API calls that slow the application down to sub-1fps speeds. If instead the driver correctly states that something isn't implemented, the application can use an older technique or just skip the extra calls entirely, and you end up with an application running at a reasonable speed.

    If your hardware driver doesn't support a new enough API, you should just use LLVMPipe from the start, because it's likely to give you speed just as good as a hardware driver falling back to software paths.
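    For example, instead of assuming a feature is fast, an application can ask the driver what it actually exposes and pick a code path from that. A minimal sketch (assuming a GL 3.x context where glGetStringi is available; the two draw_* helpers are hypothetical placeholders):

    Code:
    #define GL_GLEXT_PROTOTYPES
    #include <string.h>
    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Hypothetical application helpers standing in for real draw paths. */
    extern void draw_with_tessellation(void);
    extern void draw_pretessellated_mesh(void);

    /* Returns 1 if the driver advertises the named extension. */
    static int has_extension(const char *name)
    {
        GLint i, n = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &n);
        for (i = 0; i < n; i++) {
            const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, i);
            if (ext && strcmp(ext, name) == 0)
                return 1;
        }
        return 0;
    }

    static void draw_scene(void)
    {
        if (has_extension("GL_ARB_tessellation_shader"))
            draw_with_tessellation();   /* fast path on capable hardware */
        else
            draw_pretessellated_mesh(); /* precomputed fallback the driver can run at full speed */
    }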
    Last edited by smitty3268; 01-27-2014 at 03:31 PM.

  6. #16
    Join Date
    Aug 2013
    Posts
    29

    Default

    Quote Originally Posted by pali View Post
    And will there be any new support for nv40 cards? Is GL_EXT_draw_buffers2 possible (which is needed by the Source engine)?
    A quick glance at the code makes it seem like it already is -- except for the bit about enabling the extension. Try changing the return value in nv30_screen.c for PIPE_CAP_INDEP_BLEND_ENABLE (and only that one, so move it out of the long list of unsupported PIPE_CAPs) to be "eng3d->oclass >= NV40_3D_CLASS", as sketched below. No clue if that'll actually work though -- the code just looks like it might already support it, but perhaps there are corner cases where it doesn't.
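    Roughly like this (an untested sketch, not a verified patch -- the surrounding scaffolding is from memory and member names may be slightly off; only the suggested return expression is from the tip above):

    Code:
    /* src/gallium/drivers/nouveau/nv30/nv30_screen.c (sketch) */
    static int
    nv30_screen_get_param(struct pipe_screen *pscreen, enum pipe_cap param)
    {
        struct nv30_screen *screen = nv30_screen(pscreen);
        struct nouveau_object *eng3d = screen->eng3d;

        switch (param) {
        /* ... */
        case PIPE_CAP_INDEP_BLEND_ENABLE:
            /* Per-render-target blend enables are what GL_EXT_draw_buffers2
             * needs; the hardware support appears to exist on NV40 and up. */
            return eng3d->oclass >= NV40_3D_CLASS;
        /* ... with PIPE_CAP_INDEP_BLEND_ENABLE removed from the long
         * "return 0" list of unsupported caps below ... */
        default:
            return 0;
        }
    }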

    As for predictions on future development of the nv30 driver -- it's anyone's guess. I may look at it later, but right now I'm playing around with the nv50 driver and don't have an nv30/40 card plugged in. If you're at all interested in trying this yourself, join us in #nouveau on freenode and we'll give you some pointers. (No prior GPU/OpenGL/etc. experience necessary, but programming experience is.)

  7. #17
    Join Date
    Aug 2012
    Posts
    111

    Default

    Quote Originally Posted by imirkin View Post
    I posted my patchset 2 weeks earlier...
    There it is. I hadn't realized that it had been around for a bit already. My apologies.

  8. #18
    Join Date
    Nov 2012
    Location
    France
    Posts
    593

    Default

    Quote Originally Posted by Maxim Levitsky View Post
    How is it possible to write an article and not mention that, in essence, all NV50-class cards now have feature parity with the binary driver....
    No stable power management enabled by default, and no multi-card or SLI support.

  9. #19
    Join Date
    Jul 2010
    Posts
    503

    Default

    Quote Originally Posted by smitty3268 View Post
    I don't think you can reasonably take the output of a vertex shader, use it as input to a CL kernel, and then feed that output into a fragment shader. (I think that's the pipeline, right?) At least not with any kind of reasonable speed.

    Drivers shouldn't fake hardware support when it can't be done fast. Otherwise it's impossible for applications to tell whether or not something works well, and you end up with applications making all sorts of fancy API calls that slow the application down to sub-1fps speeds. If instead the driver correctly states that something isn't implemented, the application can use an older technique or just skip the extra calls entirely, and you end up with an application running at a reasonable speed.

    If your hardware driver doesn't support a new enough API, you should just use LLVMPipe from the start, because it's likely to give you speed just as good as a hardware driver falling back to software paths.
    Theoretically, I think it could be done by abusing transform feedback.

    Pipeline: Vertex Shader -> Tessellation (Control Shader -> Tessellator -> Evaluation Shader) -> Geometry Shader -> Rasterization -> Fragment Shader

    Capture the vertex shader output into a transform feedback buffer. Process the buffer with an OpenCL tessellation kernel (translating the tessellation shaders to OpenCL would be the hard part, I guess...). Feed the buffer through a pass-through vertex shader to finally draw the geometry.
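    Something like this on the GL side (a rough sketch only; run_cl_tessellation_kernel is a hypothetical helper standing in for the CL/GL buffer sharing and kernel dispatch, and all buffer/shader setup is omitted):

    Code:
    /* Pass 1: run the real vertex shader, capture its output via
     * transform feedback, and skip rasterization entirely. */
    glUseProgram(vertex_prog);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, captured_bo);
    glEnable(GL_RASTERIZER_DISCARD);
    glBeginTransformFeedback(GL_TRIANGLES);
    glDrawArrays(GL_TRIANGLES, 0, num_input_verts);
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);

    /* Pass 2: tessellate the captured vertices with an OpenCL kernel
     * (the control shader -> tessellator -> evaluation shader stages,
     * translated to CL), writing amplified geometry into tessellated_bo. */
    run_cl_tessellation_kernel(captured_bo, tessellated_bo, &num_out_verts);

    /* Pass 3: draw the tessellated geometry with a pass-through vertex
     * shader, as if it had come from a real hardware tessellator. */
    glUseProgram(passthrough_prog);
    glBindBuffer(GL_ARRAY_BUFFER, tessellated_bo);
    glDrawArrays(GL_TRIANGLES, 0, num_out_verts);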


  10. #20
    Join Date
    Apr 2013
    Posts
    221

    Default Without real interest

    Without reclocking/DPM and better performance this is not really important, but good work by the devs, and F*** you, NVIDIA.
