AMD's Open-Source Radeon Driver After Four Years


  • curaga replied:
    Originally posted by BlackStar:
    As far as I know, the filter has two passes: edge detection and blurring. The kernel for each pass gets executed exactly once for each pixel on the framebuffer, regardless of the amount of edges in the scene (unlike MSAA, which gets executed per edge). Now, performance might be somewhat correlated with the amount of edges if the shader uses branches (which I doubt), but even if this is the case... 12 times faster than MSAA 8x? This wouldn't happen if the amount of edges affected MLAA significantly.
    Three passes, actually, and only the first (edge detection) is executed for all pixels. The second and third passes are only executed for the edge-marked pixels (via the stencil), which brings a big speedup compared to running them on all pixels.

    The second pass is what makes it so much better in quality than many of the other implementations; instead of a constant blur, the blending depends on the type of aliasing. I recommend the FXAA vs. MLAA article at Digital Foundry; you can see how much more FXAA blurs.
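    To picture that stencil gating, here is a minimal CPU-side sketch in plain C++ (illustrative only, not the actual TGSI implementation; all names are made up): the first pass visits every pixel and marks detected edges into a mask, and the later passes pay only for the marked pixels.

```cpp
// CPU-side sketch of the stencil-gated pass structure; illustrative only.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int W = 8, H = 8;
    std::vector<float> luma(W * H, 0.0f);
    for (int y = 0; y < H; ++y)            // vertical step edge at x == 4
        for (int x = 4; x < W; ++x) luma[y * W + x] = 1.0f;

    // Pass 1 (edge detection): runs for EVERY pixel, writes the "stencil".
    std::vector<int> stencil(W * H, 0);
    const float threshold = 0.1f;
    for (int y = 1; y < H; ++y)
        for (int x = 1; x < W; ++x) {
            float c = luma[y * W + x];
            bool edge = std::fabs(c - luma[y * W + x - 1]) > threshold ||
                        std::fabs(c - luma[(y - 1) * W + x]) > threshold;
            stencil[y * W + x] = edge ? 1 : 0;
        }

    // Passes 2 (blend weights) and 3 (blending) are gated on the stencil,
    // so they only pay for the marked pixels -- the speedup described above.
    int marked = 0;
    for (int i = 0; i < W * H; ++i) marked += stencil[i];
    printf("passes 2/3 run on %d of %d pixels\n", marked, W * H);
}
```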



  • smitty3268 replied:
    Originally posted by BlackStar:
    As far as I know, the filter has two passes: edge detection and blurring. The kernel for each pass gets executed exactly once for each pixel on the framebuffer, regardless of the amount of edges in the scene (unlike MSAA, which gets executed per edge). Now, performance might be somewhat correlated with the amount of edges if the shader uses branches (which I doubt), but even if this is the case... 12 times faster than MSAA 8x? This wouldn't happen if the amount of edges affected MLAA significantly.
    There appear to be three passes, actually: edge detection, blend weights, and edge smoothing.

    Also, I think the 2nd and 3rd passes are based on edges, or are at least optimized to a degree for non-edges. But I'm not very familiar with graphics programming.

    The implementation for Mesa is in TGSI here: http://lists.freedesktop.org/archive...st/010629.html

    This is based on the Jimenez MLAA code, which you can view in a more readable DX10 version here: https://github.com/iryoku/jimenez-ml...haders/MLAA.fx

    Maybe someone smarter than me can make more sense out of it.
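    Until then, here is a heavily simplified C++ sketch of the "blend weights" idea: for each edge pixel, search how far the edge extends in both directions, then derive a blend weight from the pixel's position along that span. The real shader selects among several crossing patterns via a precomputed area texture; the linear weight below roughly matches a Z-shaped pattern, and every name here is illustrative, not from the actual code.

```cpp
// Simplified sketch of MLAA's blend-weight pass; not the real shader.
#include <cmath>
#include <cstdio>
#include <vector>

// Walk along a row of edge flags from x in direction dir (+1/-1) and
// return how many consecutive edge pixels continue the span.
static int walk(const std::vector<int>& edgeRow, int x, int dir, int maxDist) {
    int d = 0;
    while (d < maxDist) {
        int nx = x + (d + 1) * dir;
        if (nx < 0 || nx >= (int)edgeRow.size() || !edgeRow[nx]) break;
        ++d;
    }
    return d;
}

int main() {
    // 1 = pixel lies on a horizontal edge (output of the edge-detect pass).
    std::vector<int> edgeRow = {0, 1, 1, 1, 1, 1, 1, 0};
    const int maxDist = 8;                  // search radius cap (illustrative)

    for (int x = 0; x < (int)edgeRow.size(); ++x) {
        if (!edgeRow[x]) continue;
        int left  = walk(edgeRow, x, -1, maxDist);
        int right = walk(edgeRow, x, +1, maxDist);
        int span  = left + right + 1;
        float t = (left + 0.5f) / span;     // position along the span, 0..1
        // For a Z-shaped crossing pattern, the revectorized edge blends most
        // at the span ends and not at all in the middle:
        float weight = std::fabs(t - 0.5f);
        printf("x=%d span=%d blend=%.2f\n", x, span, weight);
    }
}
```

    This is also why the result depends on the type of aliasing rather than being a constant blur: the weight varies with the detected edge pattern and span, not with a fixed kernel.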



  • BlackStar replied:
    Originally posted by liam:
    I'm very (pleasantly) surprised this is coming to Linux so quickly. I also like the idea of MLAA. A nice, simple post-processing idea that can be applied to any scene (though, unlike BlackStar, I don't think it varies only with resolution, since it seems very much edge/color-contrast dependent).
    As far as I know, the filter has two passes: edge detection and blurring. The kernel for each pass gets executed exactly once for each pixel on the framebuffer, regardless of the amount of edges in the scene (unlike MSAA, which gets executed per edge). Now, performance might be somewhat correlated with the amount of edges if the shader uses branches (which I doubt), but even if this is the case... 12 times faster than MSAA 8x? This wouldn't happen if the amount of edges affected MLAA significantly.



  • smitty3268 replied:
    Originally posted by Qaridarium:
    "cutting edge feature"

    LOL??? Not really, it's a poor man's feature. Real men get this: Rotated-Sample Super-Sample AA (RS-SSAA x8).

    But this killer cutting-edge feature kills all hardware; that's why they do MLAA, the poor man's AA.
    This is a strange definition of cutting edge. By that logic, the new smartphones and tablets this year can't be cutting edge, because I can already buy a 3 GHz PC.



  • liam replied:
    Originally posted by Qaridarium:
    "cutting edge feature"

    LOL??? Not really, it's a poor man's feature. Real men get this: Rotated-Sample Super-Sample AA (RS-SSAA x8).

    But this killer cutting-edge feature kills all hardware; that's why they do MLAA, the poor man's AA.
    Cutting edge in that it is new, looks pretty much as good as MSAA, and is apparently faster.



  • marek replied:
    Originally posted by Sidicas:
    I certainly DON'T think that everybody should move over to Gallium3D drivers on older hardware, since for many people that'd be a loss of features (no MSAA, no Hyper-Z, less 3D performance, etc.)... Users shouldn't have to sacrifice features...
    I can tell you for sure that even though MSAA is missing, most of HyperZ is disabled by default (you can enable it, though), and the performance isn't at 100% of Catalyst's level, the open r300g driver has many more features in total. There are over 50 more OpenGL extensions in r300g than in Catalyst 9.3 on the same hardware. That's a ton of features. See for yourself.
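    "See for yourself" boils down to diffing the GL_* extension lists each driver reports. A minimal C++ sketch under the assumption that you've captured glxinfo output under each driver first (the file names below are made up):

```cpp
// Count extensions that r300g reports but Catalyst 9.3 does not.
// Inputs are plain-text dumps, e.g.: glxinfo > r300g.txt (per driver).
#include <fstream>
#include <iostream>
#include <set>
#include <string>

static std::set<std::string> loadExtensions(const char* path) {
    std::set<std::string> exts;
    std::ifstream in(path);
    std::string tok;
    while (in >> tok) {
        if (!tok.empty() && tok.back() == ',') tok.pop_back();  // strip commas
        if (tok.rfind("GL_", 0) == 0) exts.insert(tok);         // GL_* only
    }
    return exts;
}

int main() {
    std::set<std::string> a = loadExtensions("r300g.txt");         // hypothetical
    std::set<std::string> b = loadExtensions("catalyst-9.3.txt");  // hypothetical

    int onlyA = 0;
    for (const std::string& e : a)
        if (!b.count(e)) ++onlyA;

    std::cout << "r300g: " << a.size() << " extensions, "
              << onlyA << " not present in Catalyst 9.3\n";
}
```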



  • liam replied:
    Originally posted by BlackStar:
    The open-source drivers recently gained support for MLAA, an antialiasing post-processing filter. This is similar to techniques utilised in modern games which do not support MSAA (for various technical reasons that go beyond this discussion).

    In short, the open-source drivers have now reached feature parity with 9.3, are more stable, offer faster 2D, and slightly slower but more featureful 3D (closer to GL 3.0, vs. GL 2.1 in fglrx 9.3). Work is also underway for video acceleration. Soon there'll be little reason not to use the open-source stack on older hardware.
    I'd never heard of MLAA before yesterday, so I did a little research on it. WOW! I had no idea it was such a cutting-edge feature. The technique was only proposed in, IIRC, 2007 by this guy working at Intel. Another article I read was by a developer for GoW3 and described what was apparently the first feasible (fairly fast, good-quality) implementation of MLAA.
    I'm very (pleasantly) surprised this is coming to Linux so quickly. I also like the idea of MLAA. A nice, simple post-processing idea that can be applied to any scene (though, unlike BlackStar, I don't think it varies only with resolution, since it seems very much edge/color-contrast dependent).
    Last edited by liam; 17 August 2011, 05:31 PM.



  • pingufunkybeat replied:
    Originally posted by drag:
    Yeah right. How competitive do you think their proprietary driver is against Nvidia's?
    According to Bridgman, they have many professional customers with lots of workstations, which is why they are doing the Catalyst drivers, not to make Linux nerds happy. And these customers don't care much about desktop effects, XVideo tearing, and other problems Catalyst has, and they run stable distros where Catalyst is tested.

    (Hint: they are not doing OSS driver support just to make Linux nerds happy.)
    Actually, they are doing it for the embedded market, as far as I know, because the big customers there asked for it. I'm sure Bridgman will correct me if I'm wrong.

    Personally, I've never even tried Catalyst; that's how much I care about it. The OSS driver is the only thing I'm interested in, and the more devs working on it, the better. But it's a pipe dream that AMD will give up on 20 years of optimisations and know-how in their Catalyst driver just so they can lose their entire workstation market, and move three guys over to the OSS team.



  • Sidicas replied:
    Originally posted by smitty3268:
    It's my impression that AMD dropping Linux fglrx support would likely lead to LESS Linux support of the OSS driver - likely a complete abandonment of all Linux support - rather than more. But we'll never know for sure, because it's not going to happen.
    Unless the reason for dropping fglrx support is that the open-source drivers are better in every way than the Catalyst drivers... Unfortunately, AMD/ATI has shown that they'll drop fglrx support for hardware long before the open-source drivers are mature... They've done it in the past, and I wouldn't put it past them to do it again...



  • smitty3268 replied:
    Originally posted by drag:
    I think if you pay close attention to what you've written, it may become a bit more obvious why this approach may be very suboptimal from a Linux standpoint.

    Anyway, the evidence speaks for itself. The Catalyst driver is much older and has a much higher budget and workforce behind it, and yet it is still one of the worst graphics drivers available for Linux.
    It's my impression that AMD dropping Linux fglrx support would likely lead to LESS Linux support of the OSS driver - likely a complete abandonment of all Linux support - rather than more. But we'll never know for sure, because it's not going to happen.

