Def-Ren is short for Deferred Rendering, sometimes also called Deferred Shading. It's a technique for rendering scenes, sort of the opposite of Forward Rendering. Def-Ren allows rendering scenes with parameters Forward Rendering can't handle. AA as an effect (Def-Ren is a render technique, not an effect) is designed for and works with Forward Rendering, but it doesn't work with Def-Ren since the rendering process is decoupled into separate passes (depth pass, geometry pass, lighting pass, post-process pass and so forth). Def-Ren, though, is the future. There exist hacks misusing multisample buffers to try to simulate AA, but in general they turn out even worse than AA itself. That's not really a problem, though, since as mentioned AA doesn't look good, so it won't be missed in the long run.
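Roughly, and with made-up function names (this is just an illustration, not any engine's real API), the decoupling looks like this:

```c
/* Sketch of a forward frame vs. a deferred frame.  Every function is a
 * stand-in stub -- only the pass ordering matters here. */

#include <stdio.h>

static void draw_and_shade_object(int i) { printf("shade object %d\n", i); }
static void depth_pass(void)        { puts("depth pass (depth only)"); }
static void geometry_pass(void)     { puts("geometry pass (fill G-buffer)"); }
static void lighting_pass(void)     { puts("lighting pass (read G-buffer)"); }
static void post_process_pass(void) { puts("post-process pass"); }

static void render_forward(int object_count)
{
    /* Shading happens while each object is rasterized, so a pixel can be
     * shaded and then overwritten by a nearer object (overdraw > 1). */
    for (int i = 0; i < object_count; ++i)
        draw_and_shade_object(i);
    post_process_pass();
}

static void render_deferred(void)
{
    /* The work is decoupled into separate passes over the scene/screen. */
    depth_pass();
    geometry_pass();     /* diffuse, normals, ... written once per pixel */
    lighting_pass();     /* full-screen pass computes lighting per pixel */
    post_process_pass(); /* tone mapping, blur-style AA attempts, ...    */
}

int main(void)
{
    render_forward(3);
    render_deferred();
    return 0;
}
```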
Unigine Is Working On A Strategy Game
-
Thanks for the explanation.
- Is there any other way, then, of getting rid of the jagged edges?
- Regardless of the rendering tech, the finished picture has to reach the screen at some point. I would think the AA techniques that work at that stage would still work?
Comment
-
AA works by applying a blur to objects if they fit certain parameters. In particular it needs existing geometry in the screen buffer (depth buffer), so that when you render new geometry into it, AA can figure out where to blur and how strongly. With Def-Ren, though, you first render a depth buffer during the depth pass. Afterwards you render the geometry information (diffuse, normals and so forth) using the depth buffer as a quick-reject test. Hence you write the parameters only once per pixel, for the surface actually visible at that spot. This means, though, that you only have the information of that one surface at that pixel and no additional information, which is exactly what AA needs to figure out anything meaningful. The strength of Def-Ren lies in the fact that overdraw is reduced to 1, whereas AA needs more than 1.
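To make the quick-reject point concrete, here's a toy CPU-side model of the two passes (made-up data layout, nothing from a real engine), just to show that the G-buffer ends up written exactly once per covered pixel:

```c
/* Toy CPU model of the depth pre-pass + geometry pass described above.
 * Hypothetical G-buffer layout, purely for illustration. */

#include <stdio.h>
#include <float.h>

#define WIDTH 4   /* pretend the screen is 4 pixels wide, 1 pixel high */

typedef struct { float depth; float diffuse; int written; } GBufferTexel;
typedef struct { int x; float depth; float diffuse; } Fragment;

int main(void)
{
    /* Two overlapping surfaces cover pixel 1: depth 0.3 (front), 0.7 (back). */
    Fragment frags[] = {
        { 0, 0.5f, 0.10f },
        { 1, 0.7f, 0.20f },   /* back surface              */
        { 1, 0.3f, 0.30f },   /* front surface, wins       */
        { 2, 0.9f, 0.40f },
    };
    const int nfrags = sizeof frags / sizeof frags[0];

    float depth_buffer[WIDTH];
    GBufferTexel gbuffer[WIDTH] = { { 0.0f, 0.0f, 0 } };
    for (int x = 0; x < WIDTH; ++x) depth_buffer[x] = FLT_MAX;

    /* Depth pass: record only the nearest depth per pixel. */
    for (int i = 0; i < nfrags; ++i)
        if (frags[i].depth < depth_buffer[frags[i].x])
            depth_buffer[frags[i].x] = frags[i].depth;

    /* Geometry pass: the depth buffer acts as a quick-reject test, so the
     * G-buffer gets written once per pixel, for the frontmost surface only. */
    int writes = 0;
    for (int i = 0; i < nfrags; ++i) {
        if (frags[i].depth <= depth_buffer[frags[i].x]) {
            gbuffer[frags[i].x].depth   = frags[i].depth;
            gbuffer[frags[i].x].diffuse = frags[i].diffuse;
            gbuffer[frags[i].x].written = 1;
            ++writes;
        }
    }

    printf("G-buffer writes: %d (one per covered pixel, overdraw = 1)\n", writes);
    for (int x = 0; x < WIDTH; ++x)
        if (gbuffer[x].written)
            printf("pixel %d: depth %.2f diffuse %.2f\n",
                   x, gbuffer[x].depth, gbuffer[x].diffuse);
    return 0;
}
```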
One solution different from the multisample-buffer approach is a post-processing shader that applies a blur by comparing depths from the depth buffer in a small area (the 3x3 neighbourhood around the pixel of interest, for example). You basically apply an edge-detection filter to the depth values and use the resulting value as blur strength. It costs performance but only blurs where depth discontinuities show up. I experimented with that approach once, but for my taste it costs too much performance. One can say, though, that it's better than nothing. Bigger problems arise once you get into transparency, because then you would have to apply this shader-AA after each render pass, which is not that cheap. Maybe something could also be done with a sort of "fake AA" shader. It would involve a one-time down-scale (half-size image), which gives you a blur for free, and applying that image with a strength depending on the depth discontinuity. Shaders have dFdx and dFdy instructions which could perhaps be misused for that. Never tried that one out, though.
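For what it's worth, here is a rough CPU-side sketch of that depth-edge blur idea. In a real renderer this would be a post-processing fragment shader reading the depth buffer; the scale constant and the pre-blurred image here are made up for the example:

```c
/* Depth-discontinuity blur sketch: edge-detect the depth buffer and use the
 * result as the blend factor between the sharp and the blurred image. */

#include <stdio.h>
#include <math.h>

#define W 6
#define H 1

/* Edge strength at (x, y): largest depth difference to the neighbours in the
 * 3x3 area, scaled into [0, 1].  Big jumps in depth mean a silhouette edge,
 * which is where the blur should kick in. */
static float edge_strength(const float *depth, int w, int h, int x, int y)
{
    const float scale = 10.0f;          /* arbitrary tuning constant */
    float center = depth[y * w + x];
    float strongest = 0.0f;

    for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            float d = fabsf(depth[ny * w + nx] - center);
            if (d > strongest) strongest = d;
        }
    }
    float s = strongest * scale;
    return s > 1.0f ? 1.0f : s;
}

int main(void)
{
    /* A depth row with a discontinuity between columns 2 and 3. */
    float depth[W * H] = { 0.20f, 0.21f, 0.22f, 0.80f, 0.81f, 0.82f };

    /* Sharp color and a pre-blurred copy (e.g. from a half-size downscale). */
    float sharp[W]   = { 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 0.0f };
    float blurred[W] = { 1.0f, 0.9f, 0.7f, 0.3f, 0.1f, 0.0f };

    for (int x = 0; x < W; ++x) {
        float e = edge_strength(depth, W, H, x, 0);
        float out = sharp[x] * (1.0f - e) + blurred[x] * e; /* blend by edge */
        printf("pixel %d: edge %.2f -> color %.2f\n", x, e, out);
    }
    return 0;
}
```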
Comment
-
Yesterday I compared Unigine Heaven 2.1 on Win7 and Linux: DX11 and OpenGL 4 with tessellation on. ATI 10-6 driver, HD 5670, i5-680 (3.6 GHz) for all runs:
                                               FPS   Scores   Min FPS   Max FPS
Win7, DX11:                                   17.0      427       3.6      46.3
Win7, OpenGL 4:                               12.3      310       3.6      44.8
Linux (squeeze, 2.6.35rc4 kernel), OpenGL 4:  14.8      373       4.8      45.8
So what is ATI trying to tell us? Tessellation on Windows is SLOWER than on Linux using OpenGL? Usually the Windows drivers are faster, as you see with DX11... I would like to compare with an Nvidia DX11 card, but I don't have one.
Comment
-
Originally posted by curaga: Typo: sah -> said
On topic: dat pics
Looking good as always. If the mechanics are good, and it runs on FOSS drivers, I'm buying
Comment
-
Originally posted by Kano: So what is ATI trying to tell us? Tessellation on Windows is SLOWER than on Linux using OpenGL?
It can be. There were instances where OpenGL on MacOS or Linux outpaced some of the DX10 stuff back when it rolled out and comparable interfaces were available. What determines the actual speed of something is what you're doing inside the state engine and what that does to the instruction streams you send to the GPU.
Comment