Linux and Mac are screwed


  • Originally posted by V!NCENT View Post
    The latest one came around after Vista was released, because NT6.x is in it as well.


    There are four things I have to say about that:
    -New research keeps coming: ERPT combines Monte Carlo with Metropolis Light Transport (so it still converges to the correct result) and produces good-looking frames much faster. There are of course some artifacts in the first few updates, but filters can fix that, and there are papers on that too. Research is actually still going strong. (A toy sketch of the idea follows below this list.)
    -The Atomontage engine shows that you can use voxels on GPUs by converting them to triangles first, in real time.
    -Intel research shows that on devices with less screen real estate you can get ray tracing working more easily, because there are fewer pixels to trace and the result doesn't look that bad on a small screen.
    -Shader units.
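
    Roughly, the energy-redistribution idea in a toy 1D form (my own illustrative C++ sketch, not the paper's actual algorithm; f() stands in for the path contribution and the bins stand in for pixels):

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    // Stand-in "path contribution": a bright spot in the middle of the domain.
    double f(double x) { return std::exp(-10.0 * (x - 0.5) * (x - 0.5)); }

    int main() {
        const int kBins = 32, kSamples = 20000, kChainLen = 16;
        std::vector<double> image(kBins, 0.0);
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> uni(0.0, 1.0);

        for (int i = 0; i < kSamples; ++i) {
            double x = uni(rng);             // ordinary Monte Carlo sample
            double energy = f(x) / kSamples; // its share of the total image energy
            // Redistribute that energy along a short Metropolis mutation chain:
            for (int s = 0; s < kChainLen; ++s) {
                image[std::min(kBins - 1, (int)(x * kBins))] += energy / kChainLen;
                double y = x + 0.05 * (uni(rng) - 0.5);       // small symmetric mutation
                if (y < 0.0 || y > 1.0) continue;             // outside the domain: reject, stay put
                if (uni(rng) < std::min(1.0, f(y) / f(x)))    // Metropolis acceptance test
                    x = y;
            }
        }
        for (int b = 0; b < kBins; ++b) std::printf("%2d %.4f\n", b, image[b]);
    }
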
    I should probably clarify what I meant - rasterisation vs ray tracing, hardware wise, rasterisation wins out. That's why it's being used. As a data structure for generating what the raster is applied to, however, voxels do have a lot going for them. Excellent at volumetric work, if you can get around the box effect.

    John Carmack is really smart, yet all his tricks rely on the same principle: the trick is in the eye of the beholder. For example, smooth side-scrolling wasn't possible on PCs because they didn't have the power to update all the pixels, so with Commander Keen he proved it was possible to only update the pixels that you need to see. The same goes for Wolf3D all the way up to megatexture. He eliminates calculations rather than speeding up algorithms and code. Like with lossy audio codecs: strip away what can't be heard/seen easily, or at all.
    He'll either eliminate things, or use modern hardware to do things previously not permitted, or just look at things a little differently. Megatexture I'll get on to in a moment...
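
    A hedged sketch of that "only update what changed" principle in its simplest modern form (the real Keen trick worked with EGA tiles and hardware scrolling, not a plain framebuffer like this):

    #include <cstdint>
    #include <cstring>
    #include <vector>

    constexpr int W = 320, H = 200, TILE = 8;   // 8x8 tiles divide 320x200 evenly

    // Copy only the tiles that differ from the previous frame to the "screen".
    void presentDirtyTiles(const std::vector<uint32_t>& frame,
                           std::vector<uint32_t>& previous,
                           std::vector<uint32_t>& screen) {
        for (int ty = 0; ty < H; ty += TILE) {
            for (int tx = 0; tx < W; tx += TILE) {
                bool dirty = false;
                for (int y = ty; y < ty + TILE && !dirty; ++y)
                    dirty = std::memcmp(&frame[y * W + tx], &previous[y * W + tx],
                                        TILE * sizeof(uint32_t)) != 0;
                if (!dirty) continue;                    // this tile didn't change: skip the copy
                for (int y = ty; y < ty + TILE; ++y)
                    std::memcpy(&screen[y * W + tx], &frame[y * W + tx],
                                TILE * sizeof(uint32_t));
            }
        }
        previous = frame;                                // remember this frame for next time
    }
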


    With ray tracing his trick is indeed culling. Very smart culling, like with sparse voxel octrees: reduce the geometry to the number of voxels that matches the number of pixels it takes up on the screen. That way you get perfect geometry everywhere, but with something like the speed of the Wolfenstein 3D engine's ray casting, only in full 3D.
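
    A rough sketch of that cut-off (hypothetical, simplified node layout, not id's; the point is just that descent stops once a node projects to about one pixel):

    #include <cstdint>

    struct OctreeNode {                  // hypothetical node layout
        uint32_t    averagedColor;       // pre-filtered color of everything below this node
        OctreeNode* children[8];         // null where space is empty (that's the "sparse" part)
    };

    // nodeSize: world-space edge length; distance: eye-to-node distance;
    // pixelAngle: rough angular size of one pixel on screen.
    uint32_t sampleVoxel(const OctreeNode* node, float nodeSize, float distance,
                         float pixelAngle, int childAlongRay /* picked by the ray traversal */) {
        if (!node) return 0;                            // empty space, nothing hit
        bool isLeaf = true;
        for (auto* c : node->children) if (c) { isLeaf = false; break; }
        float projectedSize = nodeSize / distance;      // ~ angular size of this node
        if (isLeaf || projectedSize <= pixelAngle)
            return node->averagedColor;                 // one voxel ~ one pixel: stop descending
        // A real traversal re-picks the child for every step of the ray; this sketch
        // just follows one fixed child index to keep the idea readable.
        return sampleVoxel(node->children[childAlongRay], nodeSize * 0.5f, distance,
                           pixelAngle, childAlongRay);
    }
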

    This is possible with the megatexture technique. His megatexture streams pixels for textures. Voxels are 3D pixels, so he applies that kind of streaming to get awesome image quality, detail and diversity, while limiting the number of rays that need to be traced.
    I'm wondering if he'll be using a sparse voxel octree for ray traced rendering, or more just as an advanced LOD system for traditional rasterisation methods. I got the feeling that he meant the latter, which makes a fair bit of sense. Current techniques are seeing more and more decoupling of geometry and final image, so thinking of geometry as just triangles may no longer be required (especially with the latest graphics card power). I can see it fitting in well with the latest megatexture technology. Just a case of figuring out what detail levels are needed on the screen, and making sure the graphics card has them loaded.
    As far as I'm aware, the ET:QW megatexture was a vastly simplified version of picking out what's needed on screen - I do something similar for terrain rendering, I believe - but Rage is quite a good deal more advanced (probably uses a texture atlas setup - what fun with texture filtering that is).
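
    A hedged sketch of that "which detail levels are needed on screen" step (made-up tile size and distance-to-mip rule; a real engine derives this from screen-space texture derivatives):

    #include <algorithm>
    #include <cmath>
    #include <set>
    #include <tuple>

    struct TileRequest {
        int x, y, mip;
        bool operator<(const TileRequest& o) const {
            return std::tie(x, y, mip) < std::tie(o.x, o.y, o.mip);
        }
    };

    // worldTileSize: metres covered by one texture page (assumed value).
    // Returns the set of (tile, mip) pages a streaming thread should make resident.
    std::set<TileRequest> collectVisibleTiles(float camX, float camZ, float viewRadius,
                                              float worldTileSize, int maxMip) {
        std::set<TileRequest> needed;
        int r  = static_cast<int>(viewRadius / worldTileSize) + 1;
        int cx = static_cast<int>(camX / worldTileSize);
        int cz = static_cast<int>(camZ / worldTileSize);
        for (int y = cz - r; y <= cz + r; ++y) {
            for (int x = cx - r; x <= cx + r; ++x) {
                float dx = (x + 0.5f) * worldTileSize - camX;
                float dz = (y + 0.5f) * worldTileSize - camZ;
                float dist = std::sqrt(dx * dx + dz * dz);
                // Farther tiles only need coarser data: one mip level per doubling of distance.
                int mip = std::min(maxMip,
                                   static_cast<int>(std::log2(1.0f + dist / worldTileSize)));
                needed.insert({x, y, mip});
            }
        }
        return needed;
    }
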


    Then he creates an information tree structure of all the bounces. Probably while doing that he'll stream the colors of all the pixels from the HDD directly to graphics RAM, but that is not certain (my own speculation). Stage two (not speculation) is having shaders 'blit' the colors according to the tree and blend them. This will not be done by the CPU, so the CPU can update the world and calculate physics (Carmack lols at GPU shader physics calculations, according to interviews), and the process starts all over again.

    Carmack's trick is not in speeding up the search through the voxel data, but in reducing the amount of voxel data through streaming in the first place. Then he can create the tree and find colors at breakneck speeds.

    The only way that the HDD speed can keep up (or rather the other way around) is by having multiple compressed files that are streamed, still compressed, into CPU RAM, decompressed there, and at some point recompressed (if texture/geometry tiles change) and sent back to the HDD to be stored.
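
    What that round trip could look like in the simplest possible form - a sketch with stubbed-out disk I/O and compression, not id's actual pipeline:

    #include <cstddef>
    #include <cstdint>
    #include <list>
    #include <unordered_map>
    #include <vector>

    // Trivial stubs standing in for real disk I/O and a real codec.
    std::vector<uint8_t> readCompressedTile(int) { return std::vector<uint8_t>(128 * 128, 0); }
    void writeCompressedTile(int, const std::vector<uint8_t>&) { /* would write to the HDD here */ }
    std::vector<uint8_t> decompress(const std::vector<uint8_t>& in) { return in; } // identity stand-in
    std::vector<uint8_t> compress(const std::vector<uint8_t>& in) { return in; }   // identity stand-in

    class TileCache {
    public:
        explicit TileCache(std::size_t maxTiles) : maxTiles_(maxTiles) {}

        // Return a decompressed tile, loading it from disk on a miss.
        std::vector<uint8_t>& get(int tileId) {
            auto it = tiles_.find(tileId);
            if (it != tiles_.end()) return it->second.pixels;     // already resident in RAM
            if (tiles_.size() >= maxTiles_) evictOldest();        // make room first
            order_.push_back(tileId);
            Entry& e = tiles_[tileId];
            e.pixels = decompress(readCompressedTile(tileId));    // HDD -> RAM, decompressed
            return e.pixels;
        }

        void markModified(int tileId) { tiles_[tileId].dirty = true; }

    private:
        struct Entry { std::vector<uint8_t> pixels; bool dirty = false; };

        void evictOldest() {                      // FIFO here for brevity; a real cache tracks LRU
            int victim = order_.front(); order_.pop_front();
            Entry& e = tiles_[victim];
            if (e.dirty)                          // changed tiles get recompressed and stored again
                writeCompressedTile(victim, compress(e.pixels));
            tiles_.erase(victim);
        }

        std::size_t maxTiles_;
        std::list<int> order_;
        std::unordered_map<int, Entry> tiles_;
    };
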

    Given the time it will take for id Tech 6 to be finished (judging by how long all his previous work took), hardware will be powerful enough by then to handle good-looking Monte Carlo rendering and unlimited detail at 30-60fps.
    There was a SIGGRAPH paper by Jon Olick, but I can't access it now. Sadly, I only remember it now and never actually went through it properly in the first place. Data streaming and proper data structures, if designed properly, can also take advantage of parallel processing (tree structures are good at that).
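
    On the "tree structures are good at parallel processing" point, the simplest version is handing independent subtrees to separate threads; a sketch:

    #include <array>
    #include <future>
    #include <memory>

    struct Node {
        double value = 0.0;
        std::array<std::unique_ptr<Node>, 8> children;   // sparse: most entries stay null
    };

    // Plain sequential walk of one subtree.
    double subtreeSum(const Node* n) {
        if (!n) return 0.0;
        double total = n->value;
        for (const auto& c : n->children) total += subtreeSum(c.get());
        return total;
    }

    // The eight top-level subtrees never touch each other's data,
    // so each one can be walked on its own thread.
    double parallelSubtreeSum(const Node& root) {
        std::array<std::future<double>, 8> jobs;
        for (int i = 0; i < 8; ++i)
            jobs[i] = std::async(std::launch::async, subtreeSum, root.children[i].get());
        double total = root.value;
        for (auto& j : jobs) total += j.get();
        return total;
    }
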

    Comment


    • Originally posted by mirv View Post
      I should probably clarify what I meant - rasterisation vs ray tracing, hardware wise, rasterisation wins out. That's why it's being used. As a data structure for generating what the raster is applied to, however, voxels do have a lot going for them. Excellent at volumetric work, if you can get around the box effect.
      You're of course correct that triangle rendering is faster in terms of hardware acceleration today. But if people can get voxel ray tracing working, it would of course be very sexeh.

      The box effect can be eliminated by blurring/texture filtering when a voxel is larger than a pixel on the screen. Like this: http://www.youtube.com/watch?v=_CCZIBDt1uM

      Another advantage is that triangles can't really match the level of detail of millions of voxels in a single space; even with culling, triangle tiling doesn't look that nice.

      I'm wondering if he'll be using a sparse voxel octree for ray traced rendering, or more just as an advanced LOD system for traditional rasterisation methods.
      Carmack doesn't really care for special effects; he wants to enable designers and take work away from them. He said that in a Doom 3 engine interview. The way that artists create detail in Rage (as shown by YouTube videos) is by making the detail on the fly.

      It would be great to have a good LOD culling/streaming technique for triangles, but that can't be used to create content in the world on-the-fly. Even in Crysis, the world editor is used for creating voxelized terrain on-the-fly. Crysis also has some way to smooth all the rough voxels so they look nice while being very large.

      I got the feeling that he meant the latter, which makes a fair bit of sense. Current techniques are seeing more and more decoupling of geometry and final image, so thinking of geometry as just triangles may no longer be required (especially with the latest graphics card power).
      So you mean voxel data being triangle-ized?

      As far as I'm aware, the ET:QW megatexture was a vastly simplified version of picking out what's needed on screen - I do something similar for terrain rendering, I believe - but Rage is quite a good deal more advanced (probably uses a texture atlas setup - what fun with texture filtering that is).
      It is using one very large 'atlas' texture, as far as my knowledge about atlas textures goes. It is divided into two files: a diffuse data file and a normal map file. The file structure is broken down into tiles of 128x128 for fast and easy loading, but it's a large, single texture.
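
      Addressing a texture split into 128x128 tiles is just integer math; a minimal sketch (the diffuse and normal-map files would simply be two arrays with the same layout):

      #include <cstdio>

      constexpr int kTileSize = 128;

      struct TexelAddress { int tileX, tileY, inTileX, inTileY; };

      // Map a global texel coordinate in the huge virtual texture to
      // (which 128x128 tile, offset inside that tile), so only that tile needs loading.
      TexelAddress locate(int texelX, int texelY) {
          return { texelX / kTileSize, texelY / kTileSize,
                   texelX % kTileSize, texelY % kTileSize };
      }

      int main() {
          TexelAddress a = locate(70000, 12345);    // some texel deep inside the megatexture
          std::printf("tile (%d, %d), offset (%d, %d)\n", a.tileX, a.tileY, a.inTileX, a.inTileY);
      }
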

      You might be correct in that ET:QW doesn't have the 'Google maps' zooming feature for more detail. That indeed is in Rage.

      There was a SIGGRAPH paper by Jon Olick, but I can't access it now. Sadly, I only remember it now and never actually went through it properly in the first place. Data streaming and proper data structures, if designed properly, can also take advantage of parallel processing (tree structures are good at that).
      There have indeed been a whole lot of papers published on speeding up the trees.

      I found an awful lot of graphics papers, freely available on a Belgian university website here: http://graphics.cs.kuleuven.be/index.php/publications
      Really interesting publications, even though the last ones are from 2010. Really worth checking out!

      PS: And here's a paper on implementing perfect ray-tracing at breakneck speeds: http://www.cs.columbia.edu/~batty/misc/ERPT-report.pdf
      PS2: And here is a video that demonstrates the speed difference with normal path tracing: http://www.youtube.com/watch?v=c7wTaW46gzA
      Last edited by V!NCENT; 17 July 2011, 04:52 AM.

      Comment


      • Originally posted by V!NCENT View Post
        You're of course correct that triangle rendering is faster in terms of hardware acceleration today. But if people can get voxel ray tracing working, it would of course be very sexeh.

        The box effect can be eliminated by blurring/texture filtering when a voxel is larger than a pixel on the screen. Like this: http://www.youtube.com/watch?v=_CCZIBDt1uM
        I was watching that last night - looks like some interesting work. Not sure the filtering it currently applies is useful for near-focus items, but that's why people research these things. Voxel-based editing is definitely very nice for terrain systems (many games used to do it that way - Comanche 3 comes to mind).

        Another advantage is that triangles can't really match the level of detail of millions of voxels in a single space; even with culling, triangle tiling doesn't look that nice.
        Which is where LOD streaming comes into play for voxels. Triangle-based model representation has (or had, with current detail levels) the benefit of a smaller memory footprint, and can typically be dumped onto the graphics card without the need for constant streaming updates. Memory bandwidths have increased to the point where high levels of data streaming are becoming viable, however. Pity I couldn't do more of my own testing with these things (that darned day job), but I've always wanted to try voxel-based LOD streaming with Stanford's Lucy. Maybe next year.

        Carmack doesn't really care for special effects; he wants to enable designers and take work away from them. He said that in a Doom 3 engine interview. The way that artists create detail in Rage (as shown by YouTube videos) is by making the detail on the fly.

        It would be great to have a good LOD culling/streaming technique for triangles, but that can't be used to create content in the world on-the-fly. Even in Crysis, the world editor is used for creating voxelized terrain on-the-fly. Crysis also has some way to smooth all the rough voxels so they look nice while being very large.
        I remember Carmack stressing that megatexturing was more about allowing artists to not be constrained by hardware limitations than anything revolutionary from a graphics perspective. Which is pretty awesome actually.
        Yep, agree with voxels for editing there - they have a lot going for them when it comes to sculpting out a world.

        So you mean voxel data being triangle-ized?
        Sure, why not? Either as boxes, or as vertex points over which you can generate a mesh hull. Triangles are just easy for graphics hardware to handle, but there's absolutely no reason to treat model storage and processing the same way. Things are already moving that way with tessellation, and volume-based editing has proven to be effective.
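
        The "boxes" option in its most naive form: walk a solid/empty grid and emit faces only where a filled cell borders an empty one (my own sketch; marching cubes or a tessellated hull would give the smoother variant):

        #include <cstdio>
        #include <vector>

        struct Face { int x, y, z, axis, dir; };  // cell position, face normal axis (0..2), -1 or +1

        // Turn a solid/empty voxel grid into boundary faces; each face could later
        // be split into two triangles for the GPU.
        std::vector<Face> extractBoundaryFaces(const std::vector<bool>& solid,
                                               int nx, int ny, int nz) {
            auto at = [&](int x, int y, int z) {
                if (x < 0 || y < 0 || z < 0 || x >= nx || y >= ny || z >= nz)
                    return false;                               // outside the grid counts as empty
                return static_cast<bool>(solid[(z * ny + y) * nx + x]);
            };
            std::vector<Face> faces;
            for (int z = 0; z < nz; ++z)
                for (int y = 0; y < ny; ++y)
                    for (int x = 0; x < nx; ++x) {
                        if (!at(x, y, z)) continue;             // only solid cells have surface
                        const int n[6][3] = { {1,0,0}, {-1,0,0}, {0,1,0},
                                              {0,-1,0}, {0,0,1}, {0,0,-1} };
                        for (int i = 0; i < 6; ++i)
                            if (!at(x + n[i][0], y + n[i][1], z + n[i][2]))  // empty neighbour -> visible face
                                faces.push_back({x, y, z, i / 2, (i % 2) ? -1 : 1});
                    }
            return faces;
        }

        int main() {
            std::vector<bool> grid(4 * 4 * 4, true);        // a solid 4x4x4 block
            auto faces = extractBoundaryFaces(grid, 4, 4, 4);
            std::printf("%zu boundary faces (expected 96)\n", faces.size()); // 6 sides * 16 cells
        }
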

        It is using one very large 'atlas' texture, as far as my knowledge about atlas textures goes. It is divided into two files: a diffuse data file and a normal map file. The file structure is broken down into tiles of 128x128 for fast and easy loading, but it's a large, single texture.

        You might be correct in that ET:QW doesn't have the 'Google maps' zooming feature for more detail. That indeed is in Rage.
        I think I read somewhere that an atlas texture wasn't used - at least not in the generic sense. It's a single large image in system memory (or on the hard drive, wherever), but you stream the local tiles into the video card as required - the point of using it with heightmap-based terrain, though, was that all the tiles were adjacent. You can then play with UV coords, and take advantage of texture wrapping, to move the "tile page" around, and any texture filtering is automatically handled. Use a few mip levels, combine it with UV "depth" in a fragment shader - it's really not difficult. Single pass, no alpha blending, so it's also quite fast. The downsides (and from what I've observed in ET:QW, all of this happens, which is why I think I'm close to the mark) are that it can really only be done for heightmap-based terrain, and it doesn't handle zooming (since the high-res tiles loaded are only the ones local to the player's location).
        I'm pretty sure Rage uses the next step of full texture atlases - but the details are in sorting out which tiles are needed, and what to do when you start running out of texture atlas space. It's for this step that I can really see a voxel-based scene setup being very, very useful.
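
        A hedged CPU-side sketch of that wrapping "tile page" idea (the upload call is a placeholder, not a real GL function; UV wrapping on the GPU hides the seam):

        #include <cstdio>

        constexpr int T = 8;                              // the page is T x T resident tiles

        // Placeholder for the real texture-subregion upload (e.g. a glTexSubImage2D call).
        void uploadTile(int slotX, int slotY, int worldTileX, int worldTileY) {
            std::printf("upload world tile (%d,%d) into page slot (%d,%d)\n",
                        worldTileX, worldTileY, slotX, slotY);
        }

        struct TilePage {
            int centerX = 0, centerY = 0;                 // world tile the page is centred on
            bool filled = false;                          // nothing resident until the first recenter

            static int wrap(int v) { return ((v % T) + T) % T; }  // torus addressing into the page

            void recenter(int newX, int newY) {
                for (int y = newY - T / 2; y < newY + T / 2; ++y)
                    for (int x = newX - T / 2; x < newX + T / 2; ++x) {
                        bool wasResident = filled &&
                                           x >= centerX - T / 2 && x < centerX + T / 2 &&
                                           y >= centerY - T / 2 && y < centerY + T / 2;
                        if (!wasResident)                 // only newly exposed tiles get uploaded
                            uploadTile(wrap(x), wrap(y), x, y);
                    }
                centerX = newX; centerY = newY; filled = true;
            }
        };

        int main() {
            TilePage page;
            page.recenter(0, 0);                          // initial fill: everything uploads
            page.recenter(1, 0);                          // one step east: only one column uploads
        }
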

        There have indeed been a whole lot of papers published on speeding up the trees.

        I found an awful lot of graphics papers, freely available on a Belgian university website here: http://graphics.cs.kuleuven.be/index.php/publications
        Really interesting publications, even though the last ones are from 2010. Really worth checking out!

        PS: And here's a paper on implementing perfect ray-tracing at breakneck speeds: http://www.cs.columbia.edu/~batty/misc/ERPT-report.pdf
        Cheers for the links.

        Comment


        • Originally posted by V!NCENT View Post
          You're of course correct that triangle rendering is faster in terms of hardware acceleration today. But if people can get voxel ray tracing working, it would of course be very sexeh.
          Still going on about real time ray tracing? You do realize that for every hardware advance that makes ray tracing more feasible for real time applications, rasterization speeds increase as well with comparable results, so other than geek street cred why would anyone use it?

          Comment


          • Perfect shadows and reflections in every case, without tricks?

            Comment


            • Originally posted by yogi_berra View Post
              Still going on about real time ray tracing? You do realize that for every hardware advance that makes ray tracing more feasible for real time applications, rasterization speeds increase as well with comparable results, so other than geek street cred why would anyone use it?
              Well, no harm in research. Much of what's being discussed about what Carmack is doing he had looked into a long time ago, but only recently has it become fully viable. Voxels went out of favour, and are coming back (typically for editing purposes) - they might be good for ray tracing, but are good at other things too (which I've noted previously).
              So there are uses for it, even if it likely won't be used in games directly anytime soon.

              Comment


              • Originally posted by curaga View Post
                Perfect shadows and reflections in every case, without tricks?
                Real shadows aren't perfect.

                    Rasterized reflections are easy without tricks; did you mean refractions, which are slightly more difficult?

                Comment


                • Originally posted by mirv View Post
                  Voxels went out of favour, and are coming back (typically for editing purposes) - they might be good for ray tracing, but are good at other things too (which I've noted previously).
                  So there are uses for it, even if it likely won't be used in games directly anytime soon.
                    Unlimited geometry and decent frame rates on average hardware are interesting. Ray tracing in real time, not so much.

                  Comment


                  • Originally posted by yogi_berra View Post
                    Real shadows aren't perfect.

                    Rasterized reflections are easy without tricks; did you mean refractions, which are slightly more difficult?
                    No, I meant reflections. Consider many mirroring objects: the raster tricks can usually only handle one level of reflection.

                    Comment


                    • Originally posted by yogi_berra View Post
                      You do realize that for every hardware advance that makes ray tracing more feasible for real time applications, rasterization speeds increase as well with comparable results, so other than geek street cred why would anyone use it?
                      Because:

                      Comment
