Linux and Mac are screwed


  • Originally posted by V!NCENT View Post
    The latest one came around after Vista was released, because NT6.x is in it as well.


    There are four things I have to say about that:
    -New research keeps going: ERPT combines Monte Carlo with Metropolis Light Transport (so it's fully correct) and produces good-looking frames much faster. There are of course some artifacts in the first few updates, but filters can fix that; there are papers on that too. Research is still going strong.
    -The Atomontage engine shows that you can use voxels on GPUs by converting them to triangles first. This is real-time.
    -Intel research shows that, because of the smaller screen real estate, you can get ray tracing working there more easily: fewer pixels, and the artifacts aren't really that awful on small screens.
    -Shader units.
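A quick aside on that ERPT point: it works by applying Metropolis-style mutations to Monte Carlo samples, keeping a proposed path with probability proportional to how much light it carries. Here is a minimal sketch of that accept/reject rule in Python, using an arbitrary 1-D target density as a stand-in for a light-path distribution (the function names and constants are mine, purely illustrative):

```python
import math
import random

def metropolis_sample(log_pdf, x0, steps, step_size=1.0, seed=42):
    """Bare-bones Metropolis sampler: mutate the current state slightly
    and accept the mutation with probability min(1, p(x') / p(x))."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(steps):
        proposal = x + rng.uniform(-step_size, step_size)
        # Accept/reject on the density ratio (done in log space).
        if math.log(rng.random() + 1e-300) < log_pdf(proposal) - log_pdf(x):
            x = proposal
        samples.append(x)
    return samples

# Stand-in target: a standard normal. ERPT applies the same rule to light paths.
samples = metropolis_sample(lambda x: -0.5 * x * x, x0=0.0, steps=20000)
mean = sum(samples) / len(samples)
```

The point of the mutation strategy is that once one bright path is found, its neighbours get explored cheaply, which is where the speed-up over plain path tracing comes from.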
    I should probably clarify what I meant - rasterisation vs ray tracing, hardware-wise rasterisation wins out. That's why it's being used. As a data structure for generating what the raster is applied to, however, voxels do have a lot going for them. Excellent at volumetric work, if you can get around the box effect.

    John Carmack is really smart, yet all his tricks rely on the same principle: the trick is in the eye of the beholder. For example, side scrolling wasn't possible on PCs because they didn't have the power to update all the pixels, so with Commander Keen he proved it was possible to update only the pixels you need to see. The same goes from Wolf3D all the way to megatexture. He eliminates calculations rather than speeding up algorithms and code. Like with lossy audio codecs: strip away what can't be heard/seen easily, or at all.
    He'll either eliminate things, use modern hardware to do things previously not possible, or just look at things a little differently. Megatexture I'll get to in a moment...


    With ray tracing his trick is indeed culling. Very smart culling, as with sparse voxel octrees: reduce geometry to the number of voxels that matches the number of pixels it takes up on the screen. That way you get perfect geometry everywhere, but at Wolfenstein 3D-engine speeds - except it's path tracing, in full 3D.
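The culling rule described here (descend the octree only until a voxel covers about one pixel) can be sketched in a few lines. This is my own illustrative math, not id's code; a 60-degree vertical FOV and a pinhole projection are assumed:

```python
import math

def octree_lod_level(root_size, distance, screen_height_px, fov_deg=60.0):
    """Pick the sparse-voxel-octree depth at which one voxel projects to
    roughly one pixel, so geometry detail matches screen resolution."""
    # Size of one pixel, measured in world units at the given distance.
    pixel_world_size = (2.0 * distance * math.tan(math.radians(fov_deg) / 2.0)
                        / screen_height_px)
    if pixel_world_size >= root_size:
        return 0  # the whole octree fits inside one pixel
    # Each level halves the voxel edge: stop when root_size / 2**level <= pixel size.
    return math.ceil(math.log2(root_size / pixel_world_size))
```

Everything below the returned level can be skipped entirely: a distant object never needs its fine voxels loaded, which is the whole "perfect geometry everywhere, cheaply" argument.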

    This is possible with the megatexture technique. Megatexture uses pixels for textures; voxels are 3D pixels, so he applies the same kind of streaming to get great image quality, detail and diversity, while limiting the number of rays to trace.
    I'm wondering if he'll be using a sparse voxel octree for ray traced rendering, or more just as an advanced LOD system for traditional rasterisation methods. I got the feeling he meant the latter, which makes a fair bit of sense. Current techniques are seeing more and more decoupling of geometry and final image, so thinking of geometry as just triangles may no longer be required (especially with the latest graphics card power). I can see it fitting in well with the latest megatexture technology. Just a case of figuring out what detail levels are needed on screen, and making sure the graphics card has them loaded.
    As far as I'm aware, the ET:QW megatexture was a vastly simplified version of picking out what's needed on screen - I do something similar for terrain rendering, I believe - but Rage is a good deal more advanced (probably uses a texture atlas setup - what fun with texture filtering that is).


    Then he creates an information tree structure of all the bounces. While doing that he'll probably stream the colors of all the pixels from the HDD directly to graphics RAM, but that is not certain (my own speculation). Stage two (not speculation) is having shaders 'blit' the colors according to the tree and blend them. This will not be done by the CPU, so the CPU can update the world and calculate physics (Carmack laughs at GPU shader physics calculations, according to interviews), and the process starts all over again.

    Carmack's trick is not in speeding up the voxel data search, but in reducing the amount of voxel data through streaming in the first place. Then he can create the tree and find colors at breakneck speed.

    The only way HDD speed can keep up (or rather the other way around) is by having multiple compressed files that stream, still compressed, into CPU RAM, are decompressed there, and at some point are recompressed (if texture/geometry tiles change) and sent back to the HDD for storage.
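That compressed round trip can be sketched as a small cache: tiles sit compressed on disk, get decompressed into RAM on first touch, and are recompressed and written back only when modified. A hypothetical sketch (zlib standing in for whatever codec id actually uses):

```python
import zlib

class TileCache:
    """Sketch of the streaming scheme above: tiles live compressed on
    'disk', are decompressed into RAM on demand, and are recompressed
    and written back only if they were modified."""

    def __init__(self, disk):
        self.disk = disk   # tile_id -> compressed bytes
        self.ram = {}      # tile_id -> [bytearray, dirty flag]

    def load(self, tile_id):
        if tile_id not in self.ram:
            data = bytearray(zlib.decompress(self.disk[tile_id]))
            self.ram[tile_id] = [data, False]
        return self.ram[tile_id][0]

    def modify(self, tile_id, offset, value):
        self.load(tile_id)[offset] = value
        self.ram[tile_id][1] = True

    def evict(self, tile_id):
        data, dirty = self.ram.pop(tile_id)
        if dirty:  # recompression cost is only paid for changed tiles
            self.disk[tile_id] = zlib.compress(bytes(data))
```

The dirty flag is the point: unchanged tiles are evicted for free, so the CPU only recompresses the texture/geometry tiles that actually changed.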

    Given the time it will take for id Tech 6 to finish (judging by how long all his previous work took), hardware will be powerful enough to handle good-looking Monte Carlo calculations and unlimited detail at 30-60fps.
    There was a siggraph paper by Jon Olick, but I can't access it now. Sadly, I only remember it now and never actually went through it properly in the first place. Data streaming and proper data structures, if designed properly, can also take advantage of parallel processing (tree structures are good at that).



    • Originally posted by mirv View Post
      I should probably clarify what I meant - rasterisation vs ray tracing, hardware-wise rasterisation wins out. That's why it's being used. As a data structure for generating what the raster is applied to, however, voxels do have a lot going for them. Excellent at volumetric work, if you can get around the box effect.
      You're of course correct that triangle rendering is faster in terms of acceleration today. But if people can get voxel ray tracing working, it would of course be very sexeh

      The box effect can be eliminated by blurring/texture filtering when a voxel is larger than a pixel on the screen. Like this: http://www.youtube.com/watch?v=_CCZIBDt1uM

      Another advantage is that triangles can't really match the level of detail of millions of voxels in a single space; even with culling, triangle tiling doesn't look that nice.

      I'm wondering if he'll be using a sparse voxel octree for ray traced rendering, or more just as an advanced LOD system for traditional rasterisation methods.
      Carmack doesn't really care for special effects; he wants to enable designers and take work away from them. He said that in a Doom 3 engine interview. The way artists create detail in Rage (as shown in YouTube videos) is by making the detail on the fly.

      It would be great to have a good LOD culling/streaming technique for triangles, but that can't be used to create content in the world on-the-fly. Even in Crysis, the world editor is used for creating voxelized terrain on-the-fly. Crysis also has some way to smoothen all the rough voxels so they look nice while being very large.

      I got the feeling that he meant the latter, which makes a fair bit of sense. Current techniques are seeing more and more decoupling of geometry and final image, so thinking of geometry as just triangles may no longer be required (especially with the latest graphics card power).
      So you mean voxel data being triangle-ized?

      As far as I'm aware, the ET:QW megatexture was a vastly simplified version of picking out what's needed on screen - I do something similar for terrain rendering, I believe - but Rage is quite a good deal more advanced (probably uses a texture atlas setup - what fun with texture filtering that is).
      It is using one very large 'atlas' texture, as far as my knowledge of atlas textures goes. It is divided into two files: a diffuse data file and a normal map file. The file structure is broken down into tiles of 128×128 for fast and easy loading, but it's one large, single texture.
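For what it's worth, the tile addressing for such a layout is simple enough to sketch. Assuming row-major 128×128 tiles over one huge virtual texture (the layout described above; the function itself is hypothetical, not id's code):

```python
TILE = 128  # tile edge in texels, as described above

def tile_for_texel(x, y, virtual_width):
    """Map a texel in the single huge 'megatexture' to the 128x128 tile
    that has to be streamed in, assuming tiles are stored row-major.
    Returns (tile index, (local x, local y) inside that tile)."""
    tiles_per_row = virtual_width // TILE
    tile_x, tile_y = x // TILE, y // TILE
    tile_index = tile_y * tiles_per_row + tile_x
    # Local coordinates inside the tile, for sampling after the load.
    return tile_index, (x % TILE, y % TILE)
```

Because the divisions are by a power of two, the lookup reduces to bit shifts and masks, which is why small fixed tile sizes make the streaming bookkeeping so cheap.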

      You might be correct in that ET:QW doesn't have the 'Google maps' zooming feature for more detail. That indeed is in Rage.

      There was a siggraph paper by Jon Olick, but I can't access it now. Sadly, I only remember it now and never actually went through it properly in the first place. Data streaming and proper data structures, if designed properly, can also take advantage of parallel processing (tree structures are good at that).
      There has indeed been a whole lot of papers published on speeding up the trees.

      I found an awful lot of graphics papers, freely available on a Belgian university website here: http://graphics.cs.kuleuven.be/index.php/publications
      Really interesting publications, even though the latest ones are from 2010. Really worth checking out!

      PS: And here's a paper on implementing perfect ray-tracing at breakneck speeds: http://www.cs.columbia.edu/~batty/misc/ERPT-report.pdf
      PS2: And here is a video that demonstrates the speed difference with normal path tracing: http://www.youtube.com/watch?v=c7wTaW46gzA
      Last edited by V!NCENT; 07-17-2011, 04:52 AM.



      • Originally posted by V!NCENT View Post
        You're of course correct that triangle rendering is faster in terms of acceleration today. But if people can get voxel ray tracing working, it would of course be very sexeh

        The box effect can be eliminated by blurring/texture filtering when a voxel is larger than a pixel on the screen. Like this: http://www.youtube.com/watch?v=_CCZIBDt1uM
        I was watching that last night - looks like some interesting work. Not sure the current filtering applied is useful for near-focus items, but that's why people research these things. Voxel based editing is definitely very nice for terrain systems (many games used to do it that way - Comanche 3 comes to mind).

        Another advantage is that triangles can't really match the level of detail of millions of voxels in a single space; even with culling, triangle tiling doesn't look that nice.
        Which is where LOD streaming comes into play for voxels. Triangle based model representation has (or had, with current detail levels) the benefit of a smaller memory footprint, and can typically be dumped onto the graphics card without the need for constant streaming updates. Memory bandwidths have increased to the point where high levels of data streaming are becoming viable, however. Pity I couldn't do more of my own testing with these things (that darned day job), but I've always wanted to try voxel-based LOD streaming with Stanford's Lucy. Maybe next year.

        Carmack doesn't really care for special effects; he wants to enable designers and take work away from them. He said that in a Doom 3 engine interview. The way artists create detail in Rage (as shown in YouTube videos) is by making the detail on the fly.

        It would be great to have a good LOD culling/streaming technique for triangles, but that can't be used to create content in the world on-the-fly. Even in Crysis, the world editor is used for creating voxelized terrain on-the-fly. Crysis also has some way to smoothen all the rough voxels so they look nice while being very large.
        I remember Carmack stressing that megatexturing was more about allowing artists to not be constrained by hardware limitations than anything revolutionary from a graphics perspective. Which is pretty awesome actually.
        Yep, agree with voxels for editing there - they have a lot going for being able to sculpt out a world.

        So you mean voxel data being triangle-ized?
        Sure, why not? Either as boxes, or as vertex points over which you can generate a mesh hull. Triangles are just easy for graphics hardware to handle, but there's absolutely no reason to treat model storage and processing the same way. Things are already moving that way with tessellation, and volumetric based editing is proven to be effective.
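The "boxes" option is the simplest form of that: emit geometry only for voxel faces exposed to empty space. A naive sketch (the set-of-cells representation is mine, chosen for brevity):

```python
# Face directions: +x, -x, +y, -y, +z, -z
NEIGHBORS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def voxels_to_quads(filled):
    """Naive 'triangle-ization' of voxel data: emit one quad (two
    triangles on the GPU) for every voxel face that is not shared with
    another filled voxel. `filled` is a set of (x, y, z) integer cells."""
    quads = []
    for (x, y, z) in filled:
        for (dx, dy, dz) in NEIGHBORS:
            if (x + dx, y + dy, z + dz) not in filled:
                quads.append(((x, y, z), (dx, dy, dz)))  # cell + face normal
    return quads
```

Real meshers go further - merging coplanar faces, or fitting a smooth hull over the voxel centers as mentioned above - but the interior-face culling here is already most of the win.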

        It is using one very large 'atlas' texture, as far as my knowledge of atlas textures goes. It is divided into two files: a diffuse data file and a normal map file. The file structure is broken down into tiles of 128×128 for fast and easy loading, but it's one large, single texture.

        You might be correct in that ET:QW doesn't have the 'Google maps' zooming feature for more detail. That indeed is in Rage.
        I think I read somewhere that an atlas texture wasn't used - at least not in the generic sense. It's a single large image in system memory (or on the hard drive, wherever), but you stream the local tiles to the video card as required - the point of using it with heightmap based terrain, though, was that all the tiles were adjacent. You can then play with UV coords, and take advantage of texture wrapping, to move the "tile page" around, and any texture filtering is automatically handled. Use a few mip levels, combine it with UV "depth" in a fragment shader - it's really not difficult. Single pass, no alpha blending, so it's also quite fast. The downside (and from what I've observed in ET:QW, all this happens, which is why I think I'm close to the mark) is that it can really only be done for heightmap based terrain, and it doesn't handle zooming (since the high-res tiles loaded are only the ones local to the player's location).
        I'm pretty sure Rage uses the next step of full texture atlases - but the details are in sorting out which tiles are needed, and what to do when you start running out of texture atlas space. It's for this step that I can really see a voxel-based scene setup being very, very useful.
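The "tile page" movement described above can be sketched as a sliding window of tiles around the player; with texture wrapping, a tile entering on one edge lands in the slot vacated by the tile leaving on the opposite edge. A toy version (the window shape and radius are my own assumptions):

```python
def tiles_to_reload(old_center, new_center, radius=2):
    """When the player moves, the local 'tile page' shifts: only tiles
    entering the window need streaming; texture wrapping lets them land
    in the slots vacated by tiles leaving on the opposite edge."""
    def window(center):
        cx, cy = center
        return {(x, y)
                for x in range(cx - radius, cx + radius + 1)
                for y in range(cy - radius, cy + radius + 1)}
    # Tiles in the new window that were not already resident.
    return window(new_center) - window(old_center)
```

Moving one tile over in a 5×5 window touches only one edge column - five uploads instead of twenty-five - which is why the wrap trick keeps the streaming bandwidth so low.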

        There has indeed been a whole lot of papers published on speeding up the trees.

        I found an awful lot of graphics papers, freely available on a Belgian university website here: http://graphics.cs.kuleuven.be/index.php/publications
        Really interesting publications, even though the latest ones are from 2010. Really worth checking out!

        PS: And here's a paper on implementing perfect ray-tracing at breakneck speeds: http://www.cs.columbia.edu/~batty/misc/ERPT-report.pdf
        Cheers for the links.



        • Originally posted by V!NCENT View Post
          You're of course correct that triangle rendering is faster in terms of acceleration today. But if people can get voxel ray tracing working, it would of course be very sexeh
          Still going on about real time ray tracing? You do realize that for every hardware advance that makes ray tracing more feasible for real time applications, rasterization speeds increase as well with comparable results, so other than geek street cred why would anyone use it?



          • Perfect shadows and reflections in every case, without tricks?



            • Originally posted by yogi_berra View Post
              Still going on about real time ray tracing? You do realize that for every hardware advance that makes ray tracing more feasible for real time applications, rasterization speeds increase as well with comparable results, so other than geek street cred why would anyone use it?
              Well, no harm in research. Much of what's being discussed about Carmack's current work he had looked into a long time ago; only recently did it become fully viable. Voxels went out of favour and are coming back (typically for editing purposes) - they might be good for ray tracing, but they are good at other things too (which I've noted previously).
              So there are uses for it, even if it likely won't be used in games directly anytime soon.



              • Originally posted by curaga View Post
                Perfect shadows and reflections in every case, without tricks?
                Real shadows aren't perfect.

                Rasterized reflections are easy without tricks; did you mean refractions, which are slightly more difficult?



                • Originally posted by mirv View Post
                  Voxels went out of favour, and are coming back (typically for editing purposes) - they might be good for ray tracing, but are good at other things too (which I've noted previously).
                  So there are uses for it, even if it likely won't be used in games directly anytime soon.
                  Unlimited geometry and decent frame rates on average hardware is interesting. Raytracing in real time, not so much.



                  • Originally posted by yogi_berra View Post
                    Real shadows aren't perfect.

                    Rasterized reflections are easy without tricks; did you mean refractions, which are slightly more difficult?
                    No, I meant reflections. Consider many mirroring objects, the raster tricks can usually only handle one level of reflection.
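The reason a ray tracer gets arbitrarily deep mirror-in-mirror scenes essentially for free is that reflection is just recursion: each mirror hit spawns one more ray with its direction mirrored about the surface normal. A toy sketch of that per-bounce step (the scene is reduced to a precomputed list of hit normals, purely for illustration):

```python
def reflect(d, n):
    """Mirror direction d about unit normal n (3-vectors):
    r = d - 2*(d . n)*n, the formula a ray tracer applies per bounce."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2.0 * dot * b for a, b in zip(d, n))

def trace_mirrors(direction, hit_normals, max_depth=8):
    """Bounce a ray through a sequence of mirror hits, capped at
    max_depth. A real tracer recurses like this at every mirror hit,
    which is why multi-level reflections fall out naturally."""
    path = [direction]
    for n in hit_normals[:max_depth]:
        direction = reflect(direction, n)
        path.append(direction)
    return path
```

A rasterizer, by contrast, has to render the scene once per reflection level (render-to-texture), so in practice it stops after one level; the recursion above has no such structural limit beyond the depth cap.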



                    • Originally posted by yogi_berra View Post
                      You do realize that for every hardware advance that makes ray tracing more feasible for real time applications, rasterization speeds increase as well with comparable results, so other than geek street cred why would anyone use it?
                      Because:



                      • Originally posted by V!NCENT View Post
                        Because:
                        You're not helping your argument as that can be done without raytracing (even the color-bleeding).

                        Originally posted by curaga
                        No, I meant reflections.
                        You really should go with refractions, because it is one of the few areas rasterization falls flat.



                        • Originally posted by yogi_berra View Post
                          You're not helping your argument as that can be done without raytracing (even the color-bleeding).
                           Dynamic and very rich looking atmospheres, without the artist having to apply all kinds of hacks all over the place.

                           You see, yes you can have all the features that ray tracing has with traditional rendering techniques, but they are not perfect. They are not pretty. They are complex. They are ugly.

                           You see, at a certain point you're going to face a serious problem. Not only is the ray tracing algorithm going to be mathematically faster at a certain point, but once you shoot for higher image quality, where are the ray intersection effects going to be with traditional rendering? How about (let's get absurd) environmental light coming in through colored glass, casting color bleeding into a swimming pool?

                           You just can't get the ambience, the feeling of a truly correct image, with traditional rendering. You just can't.

                           Now if you voxelize the data, you can get all kinds of sick chemical reactions, weather effects, etc. If you render that with traditional rendering (triangle-ized or not), you're simply not getting the right feel.

                          ---

                          Let's turn the tables. What if (5 years from now or so) you actually can get fully correct ray tracing with voxel data (and all the fun features it can deliver), streaming as unlimited detail? Why would anyone still want to apply traditional rendering techniques? What advantage does it have when physical light is the limit in graphics?

                          We are going to hit that bar, sooner or later. There are algorithms out there that are way faster.

                           If we can reach the real-time bar. If it's only a matter of time. If it's technologically sexy and optically correct. If it enables artists to not give a shit about the graphics and fully focus on the gameplay and environment... Why the fsck not?

                          You really should go with refractions, because it is one of the few areas rasterization falls flat.
                           Seriously, not even close... Even reflections (indirect light) fall flat on their face.
                          Last edited by V!NCENT; 07-20-2011, 01:23 PM.



                          • Originally posted by V!NCENT View Post
                             Dynamic and very rich looking atmospheres, without the artist having to apply all kinds of hacks all over the place.
                            Yeah, that is a load of something. Adding additional render passes is not "applying all kinds of hacks." Nice hyperbole, but not at all accurate.

                             You see, yes you can have all the features that ray tracing has with traditional rendering techniques, but they are not perfect. They are not pretty. They are complex. They are ugly.
                            You mean an artist might actually have to learn their tools? For shame. Again it is a nice hyperbole, but not at all accurate. At best you are arguing that you personally cannot create those kinds of effects, which may be true.

                             You see, at a certain point you're going to face a serious problem. Not only is the ray tracing algorithm going to be mathematically faster at a certain point, but once you shoot for higher image quality, where are the ray intersection effects going to be with traditional rendering?
                            Mathematically faster? What does that even mean? Did you mean computationally faster? Remember, as the CPU power increases, rasterization time decreases as well. You haven't shown otherwise.

                             How about (let's get absurd) environmental light coming in through colored glass, casting color bleeding into a swimming pool?

                             You just can't get the ambience, the feeling of a truly correct image, with traditional rendering. You just can't.
                             Nonsense. During pre-production of "Finding Nemo", Pixar staff were tasked with recreating a video of a blue whale swimming through the ocean; they recreated it perfectly, without raytracing, in less than a week. So perfectly, in fact, that when shown side by side you can't tell which is the rendering and which is the actual video.

                            You can see it yourself by watching the additional features on the DVD.

                            Let's turn the tables. What if (5 years from now or so) you actually can get fully correct ray tracing with voxel data (and all the fun features it can deliver), streaming as unlimited detail? Why would anyone still want to apply traditional rendering techniques? What advantage does it have when physical light is the limit in graphics?
                            What if five years from now I suddenly learn to shit gold bricks? I'll be rich but I'll need a hefty laxative.

                             Seriously though, this is a 'what if' that people have been talking about forever; if it ever happens it'll be a very cold day in Hell.

                             If it enables artists to not give a shit about the graphics and fully focus on the gameplay and environment... Why the fsck not?
                            Yeah right, the more realistic projects become the more time artists have to spend on them to avoid putting people off. Rent 'Beowulf' and you'll see what a rush job will do when you are attempting to replicate anything realistically.

                             Even reflections (indirect light) fall flat on their face.
                             No they don't; maybe in your personal experience, but not for most people.



                            • Originally posted by yogi_berra View Post
                              Yeah, that is a load of something. Adding additional render passes is not "applying all kinds of hacks." Nice hyperbole, but not at all accurate.
                               Well, what you're doing is applying drawing tricks. If there is a light shining from behind your view, oopsy... you'll have to apply a pre-rendered ray-traced texture. But then your scene is not dynamic. And when you shine a lamp on a shadowed surface that is actually a texture, you still see the shadow. Oopsy again.

                              Mathematically faster? What does that even mean? Did you mean computationally faster? Remember, as the CPU power increases, rasterization time decreases as well. You haven't shown otherwise.
                               There is a point, in terms of triangle count, where rasterization matches the number of ray calculations needed to get shadows. If you imagine a giant forest with even more detail than Crysis (supposedly the edge of triangle pushing, which that engine is praised for even being able to run), you're closing in on the limit where ray-traced shadows actually become faster. It is the very reason a lot of new games actually use the blocky ray-traced shadows.
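That crossover claim can be made concrete with a back-of-envelope model: with an acceleration structure, ray tracing costs roughly pixels × log2(N), while brute-force rasterization cost grows with the triangle count N. The constants below are invented; only the asymptotic shape is the point:

```python
import math

def crossover_triangles(pixels, c_ray=20.0, c_raster=1.0):
    """Back-of-envelope crossover: solve c_raster * N = c_ray * pixels * log2(N)
    for N by bisection. c_ray and c_raster are made-up per-unit costs;
    the logarithm on the ray-tracing side is what matters."""
    lo, hi = 2.0, 1e15
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if c_raster * mid < c_ray * pixels * math.log2(mid):
            lo = mid  # rasterizing mid triangles is still cheaper
        else:
            hi = mid
    return lo
```

With these made-up constants and a 1080p frame, the break-even lands around a billion triangles. The exact number is meaningless, but the logarithm guarantees a crossover exists: ray cost barely grows with scene complexity, rasterization cost keeps climbing.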

                              Nonsense. During pre-production of "Finding Nemo" Pixar staff was tasked with recreating a video of a blue whale swimming through the ocean, they recreated it perfectly without raytracing in less than a week. So perfectly in fact that when shown side by side you can't tell which is the rendering and which is the actual video.
                               Did they show a rendered image of that with the camera underwater, looking up at the surface? Because perfect refraction isn't going to do you much good if it doesn't exactly mimic the fisheye lens effect. For example, if you look up from deep enough underwater in a swimming pool, you'll notice that not the entire surface of the water actually looks transparent.

                              You can see it yourself by watching the additional features on the DVD.
                              I don't have it.

                              What if five years from now I suddenly learn to shit gold bricks? I'll be rich but I'll need a hefty laxative.
                               I thought you weren't going to take that literally, since it's obvious that Moore's law will meet increasingly faster algorithms, up to the point that you no longer need 64 threads to calculate a perfect image.

                              Yeah right, the more realistic projects become the more time artists have to spend on them to avoid putting people off. Rent 'Beowulf' and you'll see what a rush job will do when you are attempting to replicate anything realistically.
                               For environments, artists can use generators for voxels. Replicating an entire desert will be a very easy task. Creating entire islands with random number generators for games like Ace Combat will give you the entire terrain. All you need to do is add some trees, put in a building or two or a nice bridge, and you're done. Airplanes, a runway and a hangar will be all that's left for the artists. Even smoke from rockets will be physically calculated. No more particles, even...

                              No they don't, maybe in your personal experience, but they don't for most people.
                              Show me a good caustic and we'll talk...



                              • Originally posted by V!NCENT View Post
                                 Well, what you're doing is applying drawing tricks. If there is a light shining from behind your view, oopsy... you'll have to apply a pre-rendered ray-traced texture.
                                Nonsense. It is possible to have lights that do not create shadows. Try again.

                                 There is a point, in terms of triangle count, where rasterization matches the number of ray calculations needed to get shadows. If you imagine a giant forest with even more detail than Crysis (supposedly the edge of triangle pushing, which that engine is praised for even being able to run), you're closing in on the limit where ray-traced shadows actually become faster. It is the very reason a lot of new games actually use the blocky ray-traced shadows.
                                Citation needed because all of the blocky shadows I've seen are from piss poor shadow mapping. (Crysis isn't the limit of triangle pushing)


                                 Did they show a rendered image of that with the camera underwater, looking up at the surface? Because perfect refraction isn't going to do you much good if it doesn't exactly mimic the fisheye lens effect. For example, if you look up from deep enough underwater in a swimming pool, you'll notice that not the entire surface of the water actually looks transparent.


                                I don't have it.
                                Rent it.


                                 For environments, artists can use generators for voxels. Replicating an entire desert will be a very easy task. Creating entire islands with random number generators for games like Ace Combat will give you the entire terrain. All you need to do is add some trees, put in a building or two or a nice bridge, and you're done. Airplanes, a runway and a hangar will be all that's left for the artists. Even smoke from rockets will be physically calculated. No more particles, even...

                                 Yeah, sure, and then game studios can drop their art staff to one man in a basement. I'll believe it when I see it.

                                Show me a good caustic and we'll talk...
                                Again, rent "Finding Nemo."

