AA works by applying a blur to objects if they fit certain parameters. In particular it requires existing geometry in the screen buffer (depth buffer), so that when new geometry is rendered into it, AA can figure out where to blur and how strongly. With Def-Ren, though, you first render a depth buffer during the depth pass. Afterwards you render the geometry information (diffuse, normals and so forth) using the depth buffer as a quick-reject test. Hence you write the parameters only once per pixel, for the actual surface visible at that place. This means, though, that you only have the information of the current surface pixel at this point and no additional information, which AA requires to figure out anything meaningful. The strength of Def-Ren lies in the fact that overdraw is reduced to 1, whereas AA needs more than 1.
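To illustrate the overdraw point, here is a minimal sketch (plain Python, not actual renderer code; the scene and depth values are made up) of why a depth pre-pass caps shading work at one G-buffer write per pixel: the geometry pass only shades fragments whose depth equals the pre-pass result, while a naive single pass can shade the same pixel more than once depending on draw order.

```python
W = 4  # toy "screen" of 4 pixels, one scanline

# Hypothetical scene: two overlapping layers (smaller depth = closer).
layers = [
    [0.8, 0.8, 0.8, 0.8],  # far quad, covers everything
    [0.5, 0.5, 0.9, 0.9],  # nearer on the left, behind on the right
]

# Depth pre-pass: keep only the nearest depth per pixel.
depth_buffer = [min(layer[x] for layer in layers) for x in range(W)]

# Naive geometry pass (no pre-pass): every fragment passing the
# incremental depth test gets shaded, so overdraw can exceed 1.
naive_shaded = 0
zbuf = [float("inf")] * W
for layer in layers:
    for x in range(W):
        if layer[x] < zbuf[x]:
            zbuf[x] = layer[x]
            naive_shaded += 1  # expensive G-buffer write

# Geometry pass using the pre-pass as quick-reject (depth-equal test):
# only the surface actually visible at each pixel is shaded.
prepass_shaded = 0
for layer in layers:
    for x in range(W):
        if layer[x] == depth_buffer[x]:
            prepass_shaded += 1

print(naive_shaded, prepass_shaded)  # naive shades 6 fragments, pre-pass only 4
```

With the far quad drawn first, the naive pass shades 6 fragments for 4 pixels; with the pre-pass it is exactly 4, one per pixel, regardless of draw order.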
One solution different from the MSB approach is a post-processing shader that applies a blur based on comparing depths from the depth buffer in a small area (for example the 9 pixels around the pixel of interest). You essentially apply an edge-detection filter to the depth values and use the resulting value as the blur strength. It costs performance, but it blurs exactly where depth discontinuities arise. I experimented with that approach once, but for my taste it cuts performance too much; one could say, though, that it's better than nothing. Bigger problems arise once transparency comes into play, since you would have to apply this shader-AA after each render pass, which is not cheap. Maybe something could be done with a sort of "fake AA" shader: a one-time down-scale to a half-size image would give a blur for free, and that image could then be applied at varying strength depending on the depth discontinuity. Shaders have dFdx and dFdy instructions which could perhaps be misused for that. I never tried this one out, though.
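The depth-edge-detection blur above can be sketched like this, in plain Python rather than a fragment shader (the buffers, the 3x3 kernel and the edge-scale factor are all illustration values, not taken from any real implementation): the maximum depth difference in the 3x3 neighbourhood serves as the edge strength, which then mixes the sharp colour with a box-blurred one.

```python
def edge_strength(depth, x, y, scale=4.0):
    """Edge-detect on depth: max |d(center) - d(neighbour)| in a 3x3
    area, scaled and clamped to [0, 1] to serve as blur strength."""
    h, w = len(depth), len(depth[0])
    center = depth[y][x]
    m = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            nx = min(max(x + dx, 0), w - 1)  # clamp at buffer edges
            ny = min(max(y + dy, 0), h - 1)
            m = max(m, abs(center - depth[ny][nx]))
    return min(m * scale, 1.0)

def box_blur(color, x, y):
    """Cheap 3x3 box blur of a single-channel colour buffer."""
    h, w = len(color), len(color[0])
    acc = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            nx = min(max(x + dx, 0), w - 1)
            ny = min(max(y + dy, 0), h - 1)
            acc += color[ny][nx]
    return acc / 9.0

def depth_aware_aa(color, depth):
    """Blend sharp and blurred colour per pixel, weighted by the
    depth discontinuity; flat regions pass through untouched."""
    h, w = len(color), len(color[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            e = edge_strength(depth, x, y)
            out[y][x] = (1.0 - e) * color[y][x] + e * box_blur(color, x, y)
    return out
```

Running this over a buffer with a depth step leaves flat interior pixels unchanged and blends only along the discontinuity, which is exactly the behaviour (and the extra per-pixel cost) described above.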