raytracing vs rasterisation


  • #11
    Originally posted by L33F3R View Post
    why the hell are we using x86 CPUs anyway?!
    Feel free to go spend 20 thousand bucks on a new desktop

    Comment


    • #12
      Originally posted by mirv View Post
      That would allow more than simply triangles to be processed, and perhaps even more of a "description" of objects than object data (think svg for images).
      NURBS? I mean, if you don't support that format you'll just end up without a single modelling tool for your renderer/engine.
      PS: Can't wait for the mathematically calculated intersection collision detection pr0n

      Comment


      • #13
        Originally posted by V!NCENT View Post
        NURBS? I mean, if you don't support that format you'll just end up without a single modelling tool for your renderer/engine.
        PS: Can't wait for the mathematically calculated intersection collision detection pr0n
        Basically, yes, but not exclusively. Professional modelling tools already do this kind of thing anyway, so it's not much of a stretch to have geometry generation done in real time.

        Comment


        • #14
          Originally posted by mirv View Post
          Basically, yes, but not exclusively. Professional modelling tools already do this kind of thing anyway, so it's not much of a stretch to have geometry generation done in real time.
          Geometry generation! I thought about that too! Procedural, though; give each object some data and enhance its surface.

          For example: a wooden table.
          The wooden table has basic geometry and a texture, but when a ray is about to hit the table, some data linked to that model is read; in this case a 'carved surface'. So the surface of the table becomes dynamically detailed, depending on how far away the camera is. This is my raytracing counterpart to light shaders and height maps.
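          That idea can be sketched roughly like this; a minimal, hypothetical example where the detail level of a procedural 'carved surface' falls off with camera distance (all names and thresholds here are made up for illustration):

```python
import math

def detail_level(camera_pos, hit_pos, max_level=4, base_dist=2.0):
    """More displacement detail when the camera is close to the hit point."""
    d = math.dist(camera_pos, hit_pos)
    # Drop one detail level for every doubling of distance beyond base_dist.
    return max(max_level - int(math.log2(max(d / base_dist, 1.0))), 0)

def carved_surface_displacement(u, v, level):
    """Toy 'carving' pattern whose frequency grows with the detail level."""
    freq = 2 ** level
    return 0.01 * math.sin(freq * u * math.pi) * math.sin(freq * v * math.pi)

print(detail_level((0, 0, 0), (0, 0, 1.0)))   # close-up hit -> 4
print(detail_level((0, 0, 0), (0, 0, 64.0)))  # distant hit  -> 0
```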

          This also saves a lot of work for game developers; time saving and simplicity are key to open source games, because people tend to be less skilled and have less time.

          The model will have additional data such as 'reflective', 'glass', etc., so ray data can be modified (color correction, HDR, etc.) and directions corrected.
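          In code, such material tags could drive the ray like this (a hypothetical single-bounce sketch; the material names and return values are illustrative only):

```python
def reflect(d, n):
    """Mirror direction d about the surface normal n (n assumed unit length)."""
    k = 2 * sum(a * b for a, b in zip(d, n))
    return tuple(a - k * b for a, b in zip(d, n))

def shade(ray_dir, normal, material):
    """Branch on the material tag attached to the model."""
    if material == 'reflective':
        return ('bounce', reflect(ray_dir, normal))  # follow the mirrored ray
    if material == 'glass':
        return ('refract', ray_dir)                  # refraction left as a stub
    return ('absorb', None)                          # plain diffuse surface

print(shade((0, -1, 0), (0, 1, 0), 'reflective'))  # -> ('bounce', (0, 1, 0))
```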

          This also addresses next-gen content, because games seem to become more and more expensive over time as they require more, and more professional, artists.

          The idea list doesn't end here at all! All the 'work' I am putting into it is getting it all together in one picture. The code architecture, in combination with threading, needs to be perfect, extensible, and done in such a way that I can create a lot of dummy code, so I can at least get some working stuff out of the door without too much effort. This is to avoid vaporware nightmares and to avoid blocking future ideas like DMM (http://en.wikipedia.org/wiki/Digital_Molecular_Matter), for example, and to dynamically add features when CPU power increases for more demanding stuff. When everything is in place and abstract enough to be future proof, then I will start the coding (vision?). Until then I will only code some simple test cases while I learn to program for Haiku.

          Yes; I will do this NASA style: http://www.fastcompany.com/node/28121/print
          PS: found some cool video about realtime DMM: http://www.youtube.com/watch?v=YRMlt...eature=related
          Last edited by V!NCENT; 08 October 2009, 05:54 AM.

          Comment


          • #15
            Originally posted by RealNC View Post
            Feel free to go spend 20 thousand bucks on a new desktop
            I was actually looking at the IBM z10 for my daily computing needs.

            Last edited by L33F3R; 07 October 2009, 05:07 PM.

            Comment


            • #16
              The thing is, there are a lot more Linux users running those kinds of desktops than Windows users (including tech-adept high-end gamers). Yet even 32-core x86 is nothing compared to the more exotic and expensive setups.

              And on topic, good luck V!ncent on your goal. It sounds like something very nice so far.

              Comment


              • #17
                Originally posted by curaga View Post
                And on topic, good luck V!ncent on your goal. It sounds like something very nice so far.
                Thanks! Although it only exists on paper for now :P

                Is anybody interested in joining the thinktank? :P Maybe setting up a wiki or something?

                Comment


                • #18
                  interested? yes.
                  time? no.
                  Sadly I've got about 1, maybe 2 hours a day to spend on hobby project programming, and they're already being used.

                  Comment


                  • #19
                    Originally posted by V!NCENT View Post
                    Geometry generation! I thought about that too! Procedural, though; give each object some data and enhance its surface.

                    For example: a wooden table.
                    The wooden table has basic geometry and a texture, but when a ray is about to hit the table, some data linked to that model is read; in this case a 'carved surface'. So the surface of the table becomes dynamically detailed, depending on how far away the camera is. This is my raytracing counterpart to light shaders and height maps.

                    This also saves a lot of work for game developers; time saving and simplicity are key to open source games, because people tend to be less skilled and have less time.

                    The model will have additional data such as 'reflective', 'glass', etc., so ray data can be modified (color correction, HDR, etc.) and directions corrected.

                    This also addresses next-gen content, because games seem to become more and more expensive over time as they require more, and more professional, artists.

                    The idea list doesn't end here at all! All the 'work' I am putting into it is getting it all together in one picture. The code architecture, in combination with threading, needs to be perfect, extensible, and done in such a way that I can create a lot of dummy code, so I can at least get some working stuff out of the door without too much effort. This is to avoid vaporware nightmares and to avoid blocking future ideas like DMM (http://en.wikipedia.org/wiki/Digital_Molecular_Matter), for example, and to dynamically add features when CPU power increases for more demanding stuff. When everything is in place and abstract enough to be future proof, then I will start the coding (vision?). Until then I will only code some simple test cases while I learn to program for Haiku.

                    Yes; I will do this NASA style: http://www.fastcompany.com/node/28121/print
                    PS: found some cool video about realtime DMM: http://www.youtube.com/watch?v=YRMlt...eature=related
                    That sounds like it can be achieved with tessellation (found in DirectX 11, and in a slightly different form on DirectX 10.1 cards from AMD), which provides dynamically detailed surfaces (possibly depending on the distance to the object, or the power of the hardware).
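                    As a rough illustration of what such tessellation does (not the actual D3D11 pipeline; the split rule and thresholds here are invented for the sketch), a triangle can be recursively split into four, with the split depth chosen from the distance to the camera:

```python
import math

def midpoint(a, b):
    return tuple((x + y) / 2 for x, y in zip(a, b))

def subdivide(tri, depth):
    """Split one triangle into 4**depth smaller triangles."""
    if depth == 0:
        return [tri]
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    out = []
    for t in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
        out.extend(subdivide(t, depth - 1))
    return out

def tess_depth(distance, max_depth=5):
    """Closer surfaces get more subdivision, clamped to max_depth."""
    return max(max_depth - int(math.log2(max(distance, 1.0))), 0)

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
print(len(subdivide(tri, tess_depth(8.0))))  # distance 8 -> depth 2 -> 16 triangles
```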

                    I don't think pure ray tracing will replace rasterization; a hybrid will be much more feasible, I guess. Why? Rasterization is basically using cheap tricks to make something look nice and shiny, and cheap tricks will always be cheaper than physically correct ray tracing.

                    Even if today's rasterization effects can be achieved with hardware ray tracing in three years, think about which effects will be achievable with rasterization at that point. My point is: for the next couple of years rasterization will always be ahead of ray tracing, because it just doesn't need as much computation power.

                    On the other hand, you will see a trend that goes like this: the cheap tricks used in rasterization will grow ever more expensive in order to look good. There will probably be some point at which the advanced rasterization tricks are equally expensive to compute as the ray tracing. At that point it gets interesting to start using ray tracing.

                    For some effects this turn-over point will be reached earlier, for some later, and for others maybe never. So... personally I expect we will see more hybrids in the coming years.

                    Comment


                    • #20
                      Rasterization is basically using cheap tricks to make something look nice and shiny, and cheap tricks will always be cheaper than physically correct ray tracing.
                      The advantage of raytracing is that it scales extremely well with cores, to the point that it will very soon surpass the power of triangle rendering on graphics cards. Raytracing is not only easier and faster to code (less expensive in terms of money, time and required skill), but it also delivers better graphics that surpass triangle rendering by far, and graphics are insanely popular these days.
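                      The reason it scales so well is that every pixel's ray is independent of every other, so rows of the image can be handed to separate workers. A minimal, illustrative sketch (a thread pool stands in for per-core workers, and the one-sphere scene and camera are made up):

```python
from concurrent.futures import ThreadPoolExecutor

def hit_sphere(origin, direction, center=(0.0, 0.0, -3.0), radius=1.0):
    """Return True if the ray origin + t*direction hits the sphere."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    return b * b - 4 * a * c >= 0  # real roots -> intersection exists

def render_row(y, width=16, height=16):
    """Each row (in fact each pixel) can be rendered independently."""
    row = []
    for x in range(width):
        # Map the pixel to a direction through a simple pinhole camera.
        dx = (x + 0.5) / width * 2 - 1
        dy = 1 - (y + 0.5) / height * 2
        row.append('#' if hit_sphere((0, 0, 0), (dx, dy, -1.0)) else '.')
    return ''.join(row)

with ThreadPoolExecutor() as pool:  # rows are farmed out to the workers
    image = list(pool.map(render_row, range(16)))
print('\n'.join(image))
```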

                      PS: Tessellation is repetitive and therefore boring. It also doesn't address the underlying problem with textures: the lack of power to calculate surface detail.
                      PS2: Tessellation also consumes more design time that could be better spent elsewhere.
                      Last edited by V!NCENT; 14 October 2009, 09:19 AM.

                      Comment
