Def-Ren is short for Deferred Rendering, sometimes called Deferred Shading. It is a technique for rendering scenes, sort of the opposite of Forward Rendering. Def-Ren makes it possible to render scenes whose parameters Forward Rendering can not handle. AA as an effect (Def-Ren is a render technique, not an effect) is designed for and works with Forward Rendering, but it doesn't work with Def-Ren since the rendering process is decoupled (depth pass, geometry pass, lighting pass, post-processing pass and so forth). Def-Ren, though, is the future. There exist hacks misusing MSBs (Multi-Sampling Buffers) to try to simulate AA, but in general they look even worse than AA itself. That's not really a problem, though, since, as mentioned, AA doesn't look good anyway, so it won't be missed in the long run.
Thanks for the explanation.
- is there any other way, then, of getting rid of the jagged edges?
- regardless of the rendering tech, the picture has to reach the screen at some point. I would believe those AA techniques that work at that stage would still work?
AA works by applying a blur to objects if they fit certain parameters. This especially requires existing geometry in the screen buffer (depth buffer), so that when you render new geometry into it, AA can try to figure out where to blur and how strongly. With Def-Ren, though, you first render a depth buffer during the depth pass. Afterwards you render the geometry information (diffuse, normals and so forth) using the depth buffer as a quick-reject test. Hence you write the parameters only once per pixel, for the actual surface hitting that place. This means, though, that you only have the information of the current surface pixel at that point and no additional information, which AA requires to figure out anything meaningful. The strength of Def-Ren lies in the fact that overdraw is reduced to 1, whereas AA needs >1.
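A minimal CPU-side sketch of that quick-reject idea, with hypothetical names and a 1D "screen" for brevity; in a real engine both passes run on the GPU with the depth test set to GL_LESS for the pre-pass and GL_EQUAL for the geometry pass:

```c
#define W 4
#define H 1

/* Depth pre-pass: keep only the nearest depth per pixel. */
static void depth_prepass(float zbuf[W * H], const float *frag_depth,
                          const int *frag_px, int count) {
    for (int i = 0; i < count; i++) {
        int p = frag_px[i];
        if (frag_depth[i] < zbuf[p]) zbuf[p] = frag_depth[i];
    }
}

/* Geometry pass: write the G-buffer parameters only for the fragment
 * whose depth matches the pre-pass result, so the expensive parameter
 * write happens once per covered pixel (overdraw of 1). */
static int geometry_pass(const float zbuf[W * H], float gbuf[W * H],
                         const float *frag_depth, const float *frag_diffuse,
                         const int *frag_px, int count) {
    int writes = 0;
    for (int i = 0; i < count; i++) {
        int p = frag_px[i];
        if (frag_depth[i] == zbuf[p]) { /* GL_EQUAL against the pre-pass */
            gbuf[p] = frag_diffuse[i];
            writes++;
        }
    }
    return writes;
}
```

Even though pixel 0 is covered by two fragments, only the nearest one gets its parameters written, which is exactly why the per-pixel edge information that AA needs is gone afterwards.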
One solution different from the MSB approach is a post-processing shader that applies a blur by comparing depths from the depth buffer in a small area (the 9 pixels around the pixel of interest, for example). You essentially apply an edge-detection filter to the depth values and use the resulting value as the blur strength. It costs performance but blurs where depth discontinuities arise. I experimented with that approach once, but for my taste it costs too much performance. One can say, though, that it's better than nothing. Bigger problems arise if you go into transparency, since you would have to apply this shader-AA after each render pass, which is not that cheap. Maybe, though, something could be done with a sort of "fake AA" shader. It would involve a one-time down-scale (half-size image), which would give a blur for free, and applying that image at different strengths depending on the depth discontinuity. Shaders have dFdx and dFdy instructions which could perhaps be misused for that. Never tried this one out, though.
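The depth edge-detection part can be sketched on the CPU like this (a hypothetical helper, assuming a linearized depth buffer and an assumed threshold; in practice this would be a fragment shader sampling the 3x3 neighborhood):

```c
#include <math.h>

/* Returns a blur strength in [0,1] for pixel (x,y): take the largest
 * absolute depth difference against the neighbors in the 3x3 area and
 * scale it by a threshold, clamping the result. Flat regions give 0,
 * depth discontinuities give values up to 1. */
static float blur_strength(const float *depth, int w, int h, int x, int y,
                           float threshold) {
    float center = depth[y * w + x];
    float max_diff = 0.0f;
    for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            float d = fabsf(depth[ny * w + nx] - center);
            if (d > max_diff) max_diff = d;
        }
    }
    float s = max_diff / threshold;
    return s > 1.0f ? 1.0f : s;
}
```

The performance cost mentioned above comes from the 9 depth samples per pixel plus the blur taps driven by this value, which is why running it after every render pass gets expensive.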
Supersampling AA would be the best choice here: render everything at 7680x4320, then downsample the final output to 1920x1080. No jaggies, but you need a *monster GPU* to get playable FPS.
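The downsample step is just a box filter over the high-resolution render; a sketch with hypothetical names (7680x4320 to 1920x1080 would use a factor of 4 per axis):

```c
/* Average each factor x factor block of the source into one destination
 * pixel. Source is sw x sh, destination is (sw/factor) x (sh/factor);
 * single-channel here for brevity, a real image would do this per channel. */
static void downsample(const float *src, int sw, int sh, int factor,
                       float *dst) {
    int dw = sw / factor, dh = sh / factor;
    for (int y = 0; y < dh; y++) {
        for (int x = 0; x < dw; x++) {
            float sum = 0.0f;
            for (int sy = 0; sy < factor; sy++)
                for (int sx = 0; sx < factor; sx++)
                    sum += src[(y * factor + sy) * sw + (x * factor + sx)];
            dst[y * dw + x] = sum / (float)(factor * factor);
        }
    }
}
```

Each output pixel blends factor*factor real shaded samples, which is where the smooth edges come from, and also why the GPU has to shade 16x the pixels at factor 4.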
I only see a minimal amount of jaggies in that screenshot. On a 130+ DPI screen, there's no need for AA. Smaller screen with high DPI = win
OpenGL 4 + tessellation running great on Linux or on winshit? Sorry, I had to ask!
Originally Posted by d2kx
Yesterday I compared Unigine Heaven 2.1 on Win7 and Linux: DX11 and OpenGL 4 with tessellation on; ATI 10-6 driver, HD 5670, i5-680 (3.6 GHz) in all cases:
Win7 DX 11:
Min FPS: 3.6
Max FPS: 46.3
Win7 OpenGL 4:
Min FPS: 3.6
Max FPS: 44.8
Linux (squeeze+2.6.35rc4 kernel) + OpenGL 4:
Min FPS: 4.8
Max FPS: 45.8
So what is ATI trying to tell us? Tessellation on Win is SLOWER than on Linux using OpenGL? Usually the Win drivers are faster, as you can see with DX11... I would like to compare with an Nvidia DX11 card, but I don't have one.
The tested resolution was 1280x1024 fullscreen in all cases with default settings (tessellation normal).
The only FOSS drivers this is probably going to run on are the ones everyone's gritching about for their mediocre performance, sometime within the next 2 years, that is... Unigine's stuff brutalizes the most powerful hardware at its max settings, and it doesn't appear to be doing many idiot things with the hardware.
Originally Posted by curaga
Originally Posted by Kano
It can be. There were instances where OpenGL on MacOS or Linux outpaced some of the DX10 stuff back when it rolled out and comparable interfaces were made available. What you're doing inside the state engine, and what it does to your instruction streams to the GPU, is what determines the actual speed of something.