Nouveau Gallium3D Now Supports OpenGL 3.2, 3.3


  • #16
    Originally posted by pali View Post
    And will there be some new support for nv40 cards? Is GL_EXT_draw_buffers2 possible (which is needed by the Source engine)?
    A quick glance at the code makes it seem like it already is -- except for the bit about enabling the extension. Try changing the return value in nv30_screen.c for PIPE_CAP_INDEP_BLEND_ENABLE (and only that one, so move it out of the long list of unsupported PIPE_CAPs) to be "eng3d->oclass >= NV40_3D_CLASS". No clue whether that'll actually work though -- the code just looks like it might already support it, but perhaps there are corner cases where it doesn't.

    As for predictions on future development for the nv30 driver -- it's anyone's guess. I may look at it later, but right now I'm playing around with the nv50 driver and don't have an nv30/40 card plugged in. If you're at all interested in trying this yourself, join us in #nouveau on freenode and we'll give you some pointers. (No prior GPU/OpenGL/etc. experience necessary, but programming experience is.)

    Comment


    • #17
      Originally posted by imirkin View Post
      I posted my patchset 2 weeks earlier...
      There it is. I hadn't realized that it had been around for a bit already. My apologies.

      Comment


      • #18
        Originally posted by Maxim Levitsky View Post
        How is it possible to write an article and not mention the fact that, in essence, all NV50-class cards have feature parity with the binary driver now....
        No stable, enabled-by-default power management, and no multi-card or SLI support.

        Comment


        • #19
          Originally posted by smitty3268 View Post
          I don't think you can reasonably take the output of a vertex shader, use it as input to a CL kernel, and then feed that output into a fragment shader. (I think that's the pipeline, right?) At least not with any kind of reasonable speed.

          Drivers shouldn't fake hardware support when it can't be done fast. Otherwise, it's impossible for applications to tell whether or not something works well, and you end up with applications calling all sorts of fancy API calls that slow the application down to sub-1fps speeds. Whereas if the driver correctly states that something isn't implemented, the application can use an older technique or just skip the extra calls entirely, and you end up with an application running at reasonable speeds.

          If your hardware driver doesn't support a new enough API, you should just use llvmpipe from the start, because it's likely to give you speed just as good as a hardware driver falling back to software paths.
          Theoretically, I think it could be done by abusing transform feedback.

          Pipeline: Vertex Shader -> Tessellation(Control shader -> Tessellator -> Evaluation shader) -> Geometry Shader -> Rasterization -> Fragment Shader

          Get the vertex shader result into a transform feedback buffer. Process the buffer with an OpenCL tessellation kernel (translating the tessellation shaders to OpenCL will be the hard part, I guess...). Feed the buffer through a pass-through vertex shader to finally draw the geometry.

          Comment


          • #20
            Without real interest

            Without reclocking/DPM and better performance this is not really important, but good work by the devs, and F*** you nvidia

            Comment


            • #21
              Originally posted by werfu View Post
              Couldn't tessellation be implemented manually using an OpenCL kernel? That's what I was thinking of. It would be slower than having it hardware accelerated, but it could still be possible in software.
              You can emulate tessellation in the geometry shader. Dunno why you would want to...

              Comment


              • #22
                Originally posted by agd5f View Post
                Is there any major work missing? Or in other words: could you give us a quick rundown of what's hindering this code from being mainlined?

                Other than that, I just hope the code is ready soon. It's a bit of a shame seeing that all the others have it before r600g.

                Comment


                • #23
                  Originally posted by curaga View Post
                  You can emulate tessellation in the geometry shader. Dunno why you would want to...
                  Why so serious about tessellation? Yes, it looks good in benchmarks (Unigine), but in the games I have seen the visual difference with/without tessellation is not that great, while it is a major FPS killer, and when playing a game the last thing I do is look at the rocks on the road and say: "What rocks, maaaan!"

                  Comment


                  • #24
                    Originally posted by TAXI View Post
                      Is there any major work missing? Or in other words: could you give us a quick rundown of what's hindering this code from being mainlined?

                      Other than that, I just hope the code is ready soon. It's a bit of a shame seeing that all the others have it before r600g.
                    It has two main problems,

                    a) GPU hangs, with one or two tests left to track down

                    b) doesn't work yet with the r600g sb backend or the llvm backend; the first is a stopper, the second I'm not sure I care enough about yet.

                    With these fixed and the code cleaned up a bit, it should be fine to merge.

                    Dave.

                    Comment


                    • #25
                      Originally posted by Maxim Levitsky View Post
                      Boy, I am really disappointed in YKW....
                      How is it possible to write an article and not mention the fact that, in essence, all NV50-class cards have feature parity with the binary driver now....
                      I don't think that is true. They may support the same GL version, but I'd imagine nvidia's driver includes some extra extensions.
                      Regardless, you're right that it should've been mentioned if true.

                      Comment


                      • #26
                        Originally posted by airlied View Post
                        a) gpu hangs with one or two tests left to track down
                        Indeed, I launched Metro: Last Light and the GPU hung (without sb).

                        Comment


                        • #27
                          Originally posted by curaga View Post
                          You can emulate tessellation in the geometry shader. Dunno why you would want to...
                          Perhaps to allow some games that use tessellation shaders to run, albeit at a lower framerate given the software fallbacks needed to support an extension that's unimplemented or poorly implemented in the hardware. That would be a crutch until the user upgrades to a card that does implement the hardware necessary to support those GL4 extensions.

                          Comment


                          • #28
                            Originally posted by DeepDayze View Post
                            Perhaps to allow some games that use tessellation shaders to run, albeit at a lower framerate given the software fallbacks needed to support an extension that's unimplemented or poorly implemented in the hardware. That would be a crutch until the user upgrades to a card that does implement the hardware necessary to support those GL4 extensions.
                            Is there any game out there that actually requires tessellation shaders, though? I think Crysis is the one example people come up with, and it's very definitely an optional feature that can be turned off if your card doesn't support it. Better to just turn it off than have one section of the game run like molasses.

                            Comment


                            • #29
                              Originally posted by curaga View Post
                              You can emulate tessellation in the geometry shader. Dunno why you would want to...
                              Geometry shaders aren't as efficient as dedicated hardware tessellation units at high tessellation factors, and you will notice a huge FPS drop. They work best for generating simple quads (particle effects) or cube maps on the fly.

                              OpenCL is not any better in this regard because of buffer ping-pongs and stalls in the rendering pipeline (aside from dealing with buggy and inconsistent OpenCL implementations).

                              However, there is a dedicated hardware tessellation unit (limited to a 16x tessellation factor vs. 64x on OGL4-capable hardware) on the AMD Radeon HD 2/3/4K series, which are OGL3 cards (a TruForm successor?). It would be fantastic to have support for this extension in r600g: AMD_vertex_shader_tessellator

                              Comment


                              • #30
                                Originally posted by Maxim Levitsky View Post
                                Boy, I am really disappointed in YKW....
                                How is it possible to write an article and not mention the fact that, in essence, all NV50-class cards have feature parity with the binary driver now....
                                Because feature parity requires CUDA, OpenCL and video decode acceleration hardware support. OpenGL only covers the 3D side.

                                Comment
