AMD's Open-Source Radeon Driver After Four Years


  • #16
    Originally posted by drag View Post
    I think if you pay close attention to what you wrote, it may become a bit more obvious why this approach may be very suboptimal from a Linux standpoint.

    Anyway, the evidence speaks for itself. The Catalyst driver is much older and has a much bigger budget and workforce behind it, and yet it is still one of the worst graphics drivers available for Linux.
    It's my impression that AMD dropping linux fglrx support would likely lead to LESS linux support of the OSS driver - likely a complete abandonment of all Linux support - rather than more. But we'll never know for sure, because it's not going to happen.

    Comment


    • #17
      Originally posted by smitty3268 View Post
      It's my impression that AMD dropping linux fglrx support would likely lead to LESS linux support of the OSS driver - likely a complete abandonment of all Linux support - rather than more. But we'll never know for sure, because it's not going to happen.
      Unless the reason for dropping fglrx support is that the open source drivers are better in every way than the Catalyst drivers... Unfortunately, AMD/ATI has shown that they'll drop fglrx support for hardware long before the open source drivers are mature... They've done it in the past and I wouldn't put it past them to do it again...

      Comment


      • #18
        Originally posted by drag View Post
        yeah right. How competitive do you think their proprietary driver is against Nvidia's?
        According to Bridgman, they have many professional customers with lots of workstations, which is why they are doing the Catalyst drivers, not to make Linux nerds happy. And those customers don't care much about desktop effects, XVideo tearing, and the other problems Catalyst has, and they run stable distros where Catalyst is tested.

        (hint: They are not doing OSS driver support just to make Linux nerds happy.)
        Actually, they are doing it for the embedded market, as far as I know, because the big customers there asked for it. I'm sure Bridgman will correct me if I'm wrong.

        Personally, I've never even tried Catalyst, that's how much I care about it. OSS driver is the only thing that I'm interested in, and the more devs working on it, the better. But it's a pipe dream that AMD will give up on 20 years of optimisations and know-how in their Catalyst driver just so they can lose their entire workstation market, and move three guys over to the OSS team.

        Comment


        • #19
          Originally posted by BlackStar View Post
          The open-source drivers recently gained support for MLAA, an antialiasing post-processing filter. This is similar to techniques utilised in modern games which do not support MSAA (for various technical reasons that go beyond this discussion).

          In short, the open-source drivers have now reached feature parity with 9.3, are more stable, offer faster 2d, slightly slower but more featureful 3d (closer to GL 3.0 vs GL 2.1 in fglrx 9.3). Work is also underway for video acceleration. Soon, there'll be little reason to not use the open-source stack on older hardware.
          I'd never heard of MLAA before yesterday, so I did a little research on it. WOW! I had no idea it was such a cutting-edge feature. The technique was only proposed in, IIRC, 2007 by this guy working at Intel. Another article I read, by a developer on GoW3, described what was apparently the first feasible (fairly fast and good-quality) implementation of MLAA.
          I'm very (pleasantly) surprised this is coming to Linux so quickly. I also like the idea of MLAA: a nice, simple post-processing technique that can be applied to any scene (though, unlike BlackStar, I don't think its cost varies only with resolution, since it seems very much edge/color-contrast dependent).
          Last edited by liam; 08-17-2011, 05:31 PM.

          Comment


          • #20
            Originally posted by liam View Post
            I'd never heard of MLAA before yesterday, so I did a little research on it. WOW! I had no idea it was such a cutting-edge feature. The technique was only proposed in, IIRC, 2007 by this guy working at Intel. Another article I read, by a developer on GoW3, described what was apparently the first feasible (fairly fast and good-quality) implementation of MLAA.
            I'm very (pleasantly) surprised this is coming to Linux so quickly.
            "cutting edge feature"

            LOL, not really. It's a poor man's feature; real men get this: Rotated Sample Super-Sample AA (RS-SSAAx8).

            But this killer cutting-edge feature kills all hardware; that's why they do MLAA, the poor man's AA...

            Comment


            • #21
              Originally posted by Sidicas View Post
              I certainly DON'T think that everybody should move over to the Gallium3D drivers on older hardware, as for many people that'd be a loss of features (no MSAA, no Hyper-Z, less 3D performance, etc.)... Users shouldn't have to sacrifice features...
              I can tell you for sure that even though MSAA is missing, most of HyperZ is disabled by default (you can enable it, though), and performance isn't at 100% of Catalyst's level, the open r300g driver has many more features in total. There are over 50 more OpenGL extensions in r300g than in Catalyst 9.3 on the same hardware. That's a ton of features. See for yourself.

              Comment


              • #22
                Originally posted by Qaridarium View Post
                "cutting edge feature"

                LOL, not really. It's a poor man's feature; real men get this: Rotated Sample Super-Sample AA (RS-SSAAx8).

                But this killer cutting-edge feature kills all hardware; that's why they do MLAA, the poor man's AA...
                Cutting edge in that it is new, looks pretty much as good as MSAA and is apparently faster.
                http://www.youtube.com/watch?v=d31oi1OOKbM

                Comment


                • #23
                  Originally posted by Qaridarium View Post
                  "cutting edge feature"

                  LOL, not really. It's a poor man's feature; real men get this: Rotated Sample Super-Sample AA (RS-SSAAx8).

                  But this killer cutting-edge feature kills all hardware; that's why they do MLAA, the poor man's AA...
                  This is a strange definition of cutting edge. By that logic, this year's new smartphones and tablets can't be cutting edge because I can already buy a 3 GHz PC.

                  Comment


                  • #24
                    Originally posted by liam View Post
                    I'm very (pleasantly) surprised this is coming to Linux so quickly. I also like the idea of MLAA: a nice, simple post-processing technique that can be applied to any scene (though, unlike BlackStar, I don't think its cost varies only with resolution, since it seems very much edge/color-contrast dependent).
                    As far as I know, the filter has two passes: edge detection and blurring. The kernel for each pass gets executed exactly once for each pixel in the framebuffer, regardless of the number of edges in the scene (unlike MSAA, which gets executed per edge). Now, performance might be somewhat correlated with the number of edges if the shader uses branches (which I doubt), but even if that's the case... 12 times faster than MSAA 8x? That wouldn't happen if the number of edges affected MLAA significantly.

                    Comment


                    • #25
                      Originally posted by BlackStar View Post
                      As far as I know, the filter has two passes: edge detection and blurring. The kernel for each pass gets executed exactly once for each pixel in the framebuffer, regardless of the number of edges in the scene (unlike MSAA, which gets executed per edge). Now, performance might be somewhat correlated with the number of edges if the shader uses branches (which I doubt), but even if that's the case... 12 times faster than MSAA 8x? That wouldn't happen if the number of edges affected MLAA significantly.
                      There appear to be 3 passes, actually: Edge detection, blend weights, and smooth edges.

                      Also, I think the 2nd and 3rd passes are based on edges, or at least are optimized to a degree for non-edges. But I'm not very familiar with graphics programming.

                      The implementation for Mesa is in TGSI here: http://lists.freedesktop.org/archive...st/010629.html

                      which is based on the Jimenez MLAA code, which you can view in a more readable DX10 version here: https://github.com/iryoku/jimenez-ml...haders/MLAA.fx

                      Maybe someone smarter than me can make more sense out of it.

                      Comment


                      • #26
                        Originally posted by BlackStar View Post
                        As far as I know, the filter has two passes: edge detection and blurring. The kernel for each pass gets executed exactly once for each pixel in the framebuffer, regardless of the number of edges in the scene (unlike MSAA, which gets executed per edge). Now, performance might be somewhat correlated with the number of edges if the shader uses branches (which I doubt), but even if that's the case... 12 times faster than MSAA 8x? That wouldn't happen if the number of edges affected MLAA significantly.
                        Three passes, and only the first (edge detection) is executed for all pixels. The second and third pass are only executed for the edge-marked pixels (via the stencil), which brings a big speedup compared to running them on all pixels.

                        The second pass is what makes it so much better in quality than many of the other implementations; instead of a constant blur, it depends on the type of aliasing. I recommend the fxaa vs mlaa article @ digitalfoundry, you can see how much more fxaa blurs.

                        Comment


                        • #27
                          Originally posted by liam View Post
                          Cutting edge in that it is new, looks pretty much as good as MSAA and is apparently faster.
                          http://www.youtube.com/watch?v=d31oi1OOKbM

                          As a gamer on Nvidia hardware (under Windows), I've found that MSAA and CSAA fall short while FXAA and MLAA really nail it.
                          The major problem with MSAA and CSAA is that they don't anti-alias shader output very well (if at all)... So you get very bright pixels along the edges of shiny objects that aren't anti-aliased properly, because the shading happens at a different stage than MSAA/CSAA. At decent resolutions (1080p) and very high texture and lighting detail, you can see coarse, grainy pixels caused by the bright reflective lighting on the edges of shiny objects (like wet steps). MLAA and FXAA fix that, while you can crank MSAA and CSAA to the max all day long and it will do nothing about those annoying coarse white pixels along the edges of very shiny objects (which can really detract from the realism, IMO).
                          Last edited by Sidicas; 08-18-2011, 08:53 AM.

                          Comment


                          • #28
                            Originally posted by smitty3268 View Post
                            This is a strange definition of cutting edge. By that logic, this year's new smartphones and tablets can't be cutting edge because I can already buy a 3 GHz PC.
                            That isn't my point. My point is that RS-SSAAx8 beats MLAA in quality.

                            RS-SSAAx8 is the best of the best of the best!

                            Comment


                            • #29
                              Originally posted by curaga View Post
                              Three passes, and only the first (edge detection) is executed for all pixels. The second and third pass are only executed for the edge-marked pixels (via the stencil), which brings a big speedup compared to running them on all pixels.
                              I am missing some secret sauce here. How does the stencil get written? Via GL_ARB_shader_stencil_export (is that even supported in Mesa? It requires GLSL 1.40!)? And if so, why doesn't the loss of early Z-tests destroy performance?

                              The second pass is what makes it so much better in quality than many of the other implementations; instead of a constant blur, it depends on the type of aliasing. I recommend the fxaa vs mlaa article @ digitalfoundry, you can see how much more fxaa blurs.
                              Thanks, will do.

                              Edit: I missed the part where this is written in TGSI rather than GLSL. But still, the hardware limitations should be identical. Need to think about this some more.

                              Comment


                              • #30
                                Originally posted by BlackStar View Post
                                I am missing some secret sauce here. How does the stencil get written? Via GL_ARB_shader_stencil_export (is that even supported in Mesa? It requires GLSL 1.40!)? And if so, why doesn't the loss of early Z-tests destroy performance?
                                Via
                                glEnable(GL_STENCIL_TEST);                  // needed for stencil writes to take effect
                                glClear(GL_STENCIL_BUFFER_BIT);             // start with stencil = 0 everywhere
                                glStencilFunc(GL_ALWAYS, 1, ~0);            // every surviving fragment passes...
                                glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);  // ...and writes 1 into the stencil
                                // pass 1 here
                                Since pass 1 calls discard for non-edge pixels, the stencil is not marked for those, and early Z isn't lost this way. It's quite a smart optimization; my jaw dropped too when I first saw it.

                                Comment
