Unigine Heaven On Linux In A Month Or Two

  • #41
    I tried making a 3D animation once that used tessellation (specifically Kai Kostock's custom build of Blender) to show an ocean scene that went from right by the camera to hundreds of km away. It worked well, with one major problem: you would see the "pops" between frames where something went from lower to higher resolution (or the other way around).

    Oh, for the record, I'm actually no fan of death-match shooters (more power to those who are). My first comment was more just pointing out that I'd buy a game that looked like that even if they didn't actually add a huge amount (like a 100-page story covering 50 characters, Final Fantasy style).

    Any new game that looked that good on Linux would be a good thing. I'd love to see a platformer myself, but I'd still buy a FPS deathmatch shooter.



    • #42
      Originally posted by mtippett View Post
      You deal with a simpler model in your engine. Collision, transforms, movement, etc are all done on a simple < 10k model.
      Nobody has worked that way for ages anyway. You don't interact with a full mesh in a physics simulation, since triangle meshes are bad for physics. For movement you don't work with the meshes themselves either; you work with analytic bodies. So point denied.
      You need to carry in a lot of cases less content on the host side, particularly with procedural tesselation
      To support tessellation you need to bring in extra geometry information. You just trade one type of added information for another. No real gain in any of this.
      You push *way* less geometry around. The GPU renders millions of triangles, your engine pushes 100's.
      For static geometry you push this data around exactly once. Wow, what a great gain. For dynamic content it does apply, but there we run into another set of big problems with tessellation.
      If you have shadows (like Unigine does), your shader based light model can interact with the actual tesselated geometry.
      It's called deferred rendering; it has been known for a long time and works by separating geometry production from lighting. Lighting/shading is already separate from geometry creation, so point denied. Besides, techniques to produce self-shadowing already exist. Another point is that shadow shaders are typically vertex bound: increasing geometry burns your card without delivering much result.
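
      In a nutshell the split looks like this (a minimal CPU toy sketch with made-up struct and function names, just to illustrate the two passes; real engines of course do both on the GPU):

      #include <algorithm>
      #include <vector>

      // Illustrative G-buffer texel: per-pixel attributes written by a geometry-only pass.
      struct GBufferTexel { float nx, ny, nz; float r, g, b; };       // normal + albedo
      struct DirectionalLight { float dx, dy, dz; float r, g, b; };

      // Lighting pass: consumes only the G-buffer, never the scene geometry,
      // so its cost is independent of how many triangles produced those pixels.
      std::vector<float> lightingPass(const std::vector<GBufferTexel>& gbuf,
                                      const DirectionalLight& light) {
          std::vector<float> shaded;
          shaded.reserve(gbuf.size() * 3);
          for (const GBufferTexel& t : gbuf) {
              float ndotl = std::max(0.0f, t.nx * light.dx + t.ny * light.dy + t.nz * light.dz);
              shaded.push_back(t.r * light.r * ndotl);
              shaded.push_back(t.g * light.g * ndotl);
              shaded.push_back(t.b * light.b * ndotl);
          }
          return shaded;
      }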

      There are situations where tessellation can be interesting, but splattering it all across the screen is a waste, especially if the rest ends up looking worse because the render time is spent there instead.



      • #43
        Originally posted by Mr_Alien_Overlord View Post
        It worked well, with one major problem. You would see the "pops" between frames where something went from lower to higher resolution (or the other way around).
        That's why you don't switch from one tessellation level to the next immediately. The Heaven benchmark smoothly blends the levels together so you don't get popping.
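
        Conceptually it comes down to deriving a continuous (fractional) tessellation factor from the camera distance instead of snapping between integer LOD levels. A rough C++ sketch of that idea (the distance thresholds and maximum factor are made-up numbers, not Unigine's actual values):

        #include <algorithm>

        // Distance-based tessellation factor. Because the result is fractional and
        // changes smoothly with distance (no rounding), neighbouring frames get
        // nearly identical factors, so new vertices fade in instead of popping.
        float tessellationFactor(float distanceToCamera,
                                 float nearDist  = 5.0f,    // full detail inside this range (assumed)
                                 float farDist   = 200.0f,  // minimum detail beyond this range (assumed)
                                 float maxFactor = 64.0f) {
            float t = (distanceToCamera - nearDist) / (farDist - nearDist);
            t = std::clamp(t, 0.0f, 1.0f);
            return 1.0f + (maxFactor - 1.0f) * (1.0f - t);
        }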

        Originally posted by Dragonlord View Post
        To support tessellation you need to bring in extra geometry information. You just go from one type of added information to another. No real gain in all this.
        Where in the pipeline you add the extra detail matters though. You already need a normal map in most games, and that's usually all you need in the way of extra information to tessellate. You also save a lot of bandwidth and vertex shader work by tessellating the geometry after the vertex shader. For skinning for instance this is a huge win since you only have to animate the low res mesh, regardless of actual detail level on screen. It does add hardware load, of course, but calling it "no real gain" is silly.

        Originally posted by Dragonlord View Post
        For static geometry you push this data around exactly once. Wow, what a great gain.
        Nonsense. You pay for it every time you draw the geometry. It's cheaper to expand the data after the vertex shader stage than reading all that geometry from memory every time. ATI hardware is already generally bandwidth bound, so stuff like this helps.
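
        Quick back-of-the-envelope illustration of that bandwidth point (the mesh sizes and the 32-byte vertex layout are arbitrary assumptions, purely to show the ratio):

        #include <cstdio>

        int main() {
            // Assumed layout: position + normal + UV = 8 floats = 32 bytes per vertex.
            const double bytesPerVertex  = 32.0;
            const double highResVertices = 1000000.0;  // fully modelled detail (made up)
            const double lowResVertices  = 10000.0;    // base mesh expanded by tessellation (made up)

            // Per-draw vertex fetch traffic, ignoring caches and index reuse.
            const double highResMB = highResVertices * bytesPerVertex / 1e6;
            const double lowResMB  = lowResVertices  * bytesPerVertex / 1e6;

            std::printf("high-res mesh fetch : %.1f MB per draw\n", highResMB);
            std::printf("tessellated base    : %.2f MB per draw\n", lowResMB);
            std::printf("reduction           : %.0fx less vertex data read\n", highResMB / lowResMB);
            return 0;
        }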

        Originally posted by Dragonlord View Post
        It's called Deferred Rendering, known since a long time and works by separating geometry production from lighting. Lighting/Shading is already separate from geometry creation so point denied.
        Parallax mapping, which you'll need to produce results similar to tessellation even in a deferred renderer, can't produce results nearly as good as properly tessellated geometry unless you go completely overboard with normal map samples. You'll have warping and heavy aliasing and it won't play nice with multi-sampling either. Tessellation is just better.

        "Points denied". Am I doing it right?



        • #44
          Originally posted by wien View Post
          Where in the pipeline you add the extra detail matters though. You already need a normal map in most games, and that's usually all you need in the way of extra information to tessellate. You also save a lot of bandwidth and vertex shader work by tessellating the geometry after the vertex shader. For skinning for instance this is a huge win since you only have to animate the low res mesh, regardless of actual detail level on screen. It does add hardware load, of course, but calling it "no real gain" is silly.
          For the actual rendering it doesn't matter. Tessellation work is not free. So it doesn't really matter whether you have a vertex shader that transforms geometry or a tessellation step afterwards that has to tap into the texture units and manipulate triangles to produce new triangles.

          Originally posted by wien View Post
          Nonsense. You pay for it every time you draw the geometry. It's cheaper to expand the data after the vertex shader stage than reading all that geometry from memory every time. ATI hardware is already generally bandwidth bound, so stuff like this helps.
          Maybe it's cheaper to fuss with the geometry after the vertex stage, but you are fussing with a lot more geometry than you would otherwise. Many small things can be as expensive as a few heavy ones.

          Originally posted by wien View Post
          Parallax mapping, which you'll need to produce results similar to tessellation even in a deferred renderer, can't produce results nearly as good as properly tessellated geometry unless you go completely overboard with normal map samples. You'll have warping and heavy aliasing and it won't play nice with multi-sampling either. Tessellation is just better.
          You contradict yourself. First you say that tessellation is based on the normal map information and now you say the resolution is too low to do a proper normal mapping. The only difference between the two techniques is that with tessellation objects actually stick out of the surface instead of just looking like they do (from a steep angle). But this sticking out causes a lot of other problems, since objects align with the non-tessellated geometry while rendering is done with the tessellated one. Clipping and penetration are the ugly results of this. But in general, if the geometry and normal maps are done properly, the difference from full tessellation is not huge, but the speed difference is.



          • #45
            Originally posted by Dragonlord View Post
            You contradict yourself. First you say that tessellation is based on the normal map information and now you say the resolution is too low to do a proper normal mapping.
            I said nothing about resolution. I have no idea where you're getting that from.

            I said you need to do a huge number of texture samples per pixel to have accurate parallax mapping. You're basically ray-tracing the height/normal map, and that needs a fair number of samples to be accurate. With tessellation you do one sample per post-tessellation vertex in the domain shader (or whatever it's called, I forget) and displace the vertex based on that; see the sketch below.

            And that's even ignoring the aliasing problems you get with parallax mapping, especially around silhouettes. Tessellated geometry benefits from MSAA just like any other geometry and as such looks much better. With parallax mapping you basically have to supersample every height-map lookup, and you still can't do anything about the silhouettes (if you're even handling them properly, which is unusual today).
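
            To make the sample-count difference concrete, a rough CPU-side sketch of the two approaches (the procedural height function, displacement scale and step count are placeholder assumptions, not any engine's real code):

            #include <array>
            #include <cmath>
            #include <cstddef>

            // Stand-in for a height-map texture fetch.
            float sampleHeight(float u, float v) {
                return 0.5f + 0.5f * std::sin(u * 20.0f) * std::cos(v * 20.0f);
            }

            // Tessellation path: ONE height sample per generated vertex, which is then
            // pushed out along its normal in the evaluation/domain stage.
            std::array<float, 3> displaceVertex(std::array<float, 3> pos,
                                                std::array<float, 3> normal,
                                                float u, float v, float scale) {
                const float h = sampleHeight(u, v);  // single fetch per vertex
                return { pos[0] + normal[0] * h * scale,
                         pos[1] + normal[1] * h * scale,
                         pos[2] + normal[2] * h * scale };
            }

            // Parallax/relief path: march the view ray through the height field PER PIXEL,
            // which needs many fetches to stay accurate at steep angles.
            float parallaxDepth(float u, float v, float viewDu, float viewDv,
                                std::size_t steps = 32) {  // 32 is an arbitrary choice
                float depth = 0.0f;
                const float stepSize = 1.0f / static_cast<float>(steps);
                for (std::size_t i = 0; i < steps; ++i) {
                    if (sampleHeight(u, v) >= 1.0f - depth) break;  // ray reached the surface
                    u += viewDu * stepSize;
                    v += viewDv * stepSize;
                    depth += stepSize;
                }
                return depth;
            }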

            Originally posted by Dragonlord View Post
            The only difference between the two techniques is that with tessellation objects actually stick out of the surface instead of just looking like they do (from a steep angle). But this sticking out causes a lot of other problems, since objects align with the non-tessellated geometry while rendering is done with the tessellated one. Clipping and penetration are the ugly results of this.
            That depends entirely on how you build your collision mesh. It's already usual to have a separate low-res collision mesh so it's not exactly difficult to take displacement into consideration when building that.
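
            For instance, something along these lines when the collision mesh is built offline (made-up types and function names, not any particular engine's API):

            #include <vector>

            struct CollisionVertex { float px, py, pz; float nx, ny, nz; float u, v; };

            // Apply the SAME displacement the tessellation shader applies, so physics and
            // rendering agree on where the surface is and nothing clips into displaced geometry.
            // heightFn stands in for sampling the displacement/height map.
            std::vector<CollisionVertex> buildCollisionMesh(const std::vector<CollisionVertex>& baseMesh,
                                                            float (*heightFn)(float u, float v),
                                                            float displacementScale) {
                std::vector<CollisionVertex> displaced = baseMesh;
                for (CollisionVertex& vert : displaced) {
                    const float h = heightFn(vert.u, vert.v) * displacementScale;
                    vert.px += vert.nx * h;
                    vert.py += vert.ny * h;
                    vert.pz += vert.nz * h;
                }
                return displaced;
            }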

            As for speed, yes parallax mapping the way it's implemented in most engines these days is probably faster, but it also looks considerably worse. Apples and oranges. Tessellation provides much better quality for a minor performance hit (comparatively speaking.)



            • #46
              Why are there such major bugs in the driver that a tech demo can't run and therefore won't be released? Why doesn't the target audience of FGLRX (which is not Joe Sixpack, but professional CAD users) complain? Or are they all using QuadroFX cards?

              Btw: two days ago I ran the Tropics and Sanctuary demos in fullscreen and windowed mode with Compiz enabled under Catalyst 9.10 and did not have any problems.



              • #47
                Originally posted by Hasenpfote View Post
                Why are there such major bugs in the driver that a tech demo can't run and therefore won't be released? Why doesn't the target audience of FGLRX (which is not Joe Sixpack, but professional CAD users) complain? Or are they all using QuadroFX cards?

                Btw: two days ago I ran the Tropics and Sanctuary demos in fullscreen and windowed mode with Compiz enabled under Catalyst 9.10 and did not have any problems.
                The problems are with the new DX11 features that AMD is creating OpenGL extensions for, not anything that current applications use.



                • #48
                  Originally posted by smitty3268 View Post
                  The problems are with the new DX11 features that AMD is creating OpenGL extensions for, not anything that current applications use.
                  Thank you for the clarification!



                  • #49
                    Originally posted by Hasenpfote View Post
                    Why are there such major bugs in the driver that a tech demo can't run and therefore won't be released? Why doesn't the target audience of FGLRX (which is not Joe Sixpack, but professional CAD users) complain? Or are they all using QuadroFX cards?
                    Actually, the FireGL crowd is the very reason we HAVE the drivers in the first place. It's just a happy happenstance that they work well everywhere else anyway.

                    Having said this, much of the CAD space doesn't even remotely USE the same driver paths that game-centric rendering uses. Most CAD programs use immediate-mode rendering calls, while most modern game engines (but not all...) use vertex-array/VBO and shader-based rendering. Most modern cards use shaders all the way through, but the basic paths are only really exercised by CAD programs and may not show the same issues that something like Unigine will expose in a given driver. AMD isn't testing against that kind of workload as much because it is still focused on the CAD customers it provides the drivers for, not so much on gaming, since we aren't perceived to be a significant market segment to them at the moment.

                    It's sort of a chicken-and-egg problem, and one I hope Gallium3D will come along quickly enough to make less of an issue: show them there IS a market and that they need to worry more about top-end performance, while it helps the community maintain about 70% or so of peak as an out-of-the-box experience to goad their sales upward.
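
                    For anyone wondering what "different driver paths" means in practice, the two styles look roughly like this (a sketch only: it assumes an already-created GL context, a linked shader program, and that the GL 1.5+ entry points are available through an extension loader, all omitted here):

                    #define GL_GLEXT_PROTOTYPES
                    #include <GL/gl.h>
                    #include <GL/glext.h>

                    // Classic CAD-style immediate mode: every vertex is a separate driver call.
                    void drawTriangleImmediate() {
                        glBegin(GL_TRIANGLES);
                        glVertex3f(0.0f, 0.0f, 0.0f);
                        glVertex3f(1.0f, 0.0f, 0.0f);
                        glVertex3f(0.0f, 1.0f, 0.0f);
                        glEnd();
                    }

                    // Game-engine style: upload the vertices into a VBO, then draw with a shader
                    // program. (Buffer creation normally happens at load time, not every frame.)
                    void drawTriangleVBO(GLuint program) {
                        static const GLfloat verts[] = {
                            0.0f, 0.0f, 0.0f,
                            1.0f, 0.0f, 0.0f,
                            0.0f, 1.0f, 0.0f,
                        };
                        GLuint vbo = 0;
                        glGenBuffers(1, &vbo);
                        glBindBuffer(GL_ARRAY_BUFFER, vbo);
                        glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

                        glUseProgram(program);
                        glEnableVertexAttribArray(0);
                        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void*)0);
                        glDrawArrays(GL_TRIANGLES, 0, 3);
                    }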



                    • #50
                      Originally posted by smitty3268 View Post
                      The problems are with the new DX11 features that AMD is creating OpenGL extensions for, not anything that current applications use.
                      Which wouldn't surprise me. Fortunately, the code core is identical except at the edges where it ties into Linux, so it should take little time once the issues are sorted out on that end.

