raytracing vs rasterisation
Originally posted by mirv: That would allow more than simply triangles to be processed, and perhaps even more of a "description" of objects than raw object data (think SVG for images).
PS: Can't wait for the mathematically-calculated intersection collision detection pr0n
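For anyone wondering what that analytic intersection looks like in practice, here is a minimal sketch (names and structure are illustrative, not from any particular engine): with a mathematical description of the object, a ray-sphere hit can be computed exactly by solving a quadratic, with no triangle mesh involved.

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the nearest positive hit distance t along the ray, or None.

    Solves |origin + t*direction - center|^2 = radius^2, a quadratic in t.
    Assumes `direction` is a unit vector (so the quadratic's a == 1).
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                      # ray misses the sphere entirely
    sqrt_d = math.sqrt(disc)
    for t in ((-b - sqrt_d) / 2.0, (-b + sqrt_d) / 2.0):
        if t > 1e-6:                     # nearest hit in front of the origin
            return t
    return None

# A ray from the origin straight down +z hits a unit sphere at z=5 at t=4.
print(ray_sphere_intersect((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0
```

The same closed-form approach works for planes, quadrics, and so on, which is exactly why a "description of objects" is attractive for a ray tracer.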
-
Originally posted by V!NCENT: NURBS? I mean, if you don't take that format then you'll just end up without a single modelling tool for your renderer/engine.
-
Originally posted by mirv: Basically, yes, but not exclusively. Professional modelling tools already do this kind of thing anyway, so it's not that much of a stretch to have geometry generation done in real time.
For example: a wooden table.
The wooden table has basic geometry and a texture, but when a ray is about to hit the table, some data linked to that model is read; in this case a 'carved surface'. So the surface of the table becomes dynamically detailed, depending on how far away the camera is. This is my ray-tracing counterpart to light shaders and height maps.
This also saves game developers a lot of work, and saving time and keeping things simple are key for open-source games, because contributors tend to be less skilled and have less time.
The model will have additional data such as 'reflective', 'glass', etc., so ray data can be modified (colour correction, HDR, etc.) and ray directions corrected.
This also addresses next-gen content: games seem to become more and more expensive over time because they require more, and more professional, artists.
The idea list doesn't end here at all! All the 'work' I am putting in is getting it all together in one picture. The code architecture, in combination with threading, needs to be perfect, extensible, and done in such a way that I can write a lot of dummy code, so I can at least get some working stuff out of the door without too much effort. This is to avoid vaporware nightmares and to avoid blocking future ideas like DMM (http://en.wikipedia.org/wiki/Digital_Molecular_Matter), and it lets me dynamically add features for more demanding stuff as CPU power increases. When everything is in place, and abstract enough to be future proof, then I will start the coding (vision?). Until then I will only code some simple test cases while I learn to program for Haiku.
Yes; I will do this NASA style: http://www.fastcompany.com/node/28121/print
PS: found some cool video about realtime DMM: http://www.youtube.com/watch?v=YRMlt...eature=related
Last edited by V!NCENT; 08 October 2009, 05:54 AM.
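The distance-dependent detail idea above can be sketched very simply: when a ray is about to hit the table, evaluate the model's linked detail function and fade it out with camera distance. A toy version, assuming a hypothetical 1-D 'carving' function for brevity (all names and numbers are made up for illustration):

```python
import math

def carve_detail(u, frequency=8.0):
    """Hypothetical 'carved surface' function: fine ridges across the table top."""
    return 0.01 * math.sin(frequency * u * 2.0 * math.pi)

def surface_height(u, camera_distance, full_detail_at=2.0, cutoff=20.0):
    """Base surface plus procedural detail, attenuated by camera distance.

    Far-away geometry stays cheap and flat; close-up geometry gains detail --
    a ray-tracing analogue of height maps.
    """
    base = 0.0                            # flat table top
    if camera_distance >= cutoff:
        return base                       # too far away: skip the detail entirely
    # Linear fade from full detail (close) down to none (at the cutoff).
    fade = min(1.0, max(0.0, (cutoff - camera_distance) / (cutoff - full_detail_at)))
    return base + fade * carve_detail(u)

print(surface_height(0.03125, camera_distance=1.0))   # close: full carving amplitude
print(surface_height(0.03125, camera_distance=50.0))  # far: flat table, 0.0
```

A real implementation would evaluate this per hit point in 2-D (or as a signed-distance perturbation), but the level-of-detail logic is the same.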
-
The thing is, there are a lot more Linux users running those kinds of desktops than Windows users (including tech-adept high-end gamers). Yet even 32-core x86 is nothing compared to the more exotic and expensive setups.
And on topic: good luck, V!NCENT, with your goal. It sounds very promising so far.
-
Originally posted by V!NCENT: Geometry generation! I thought about that too! Procedural, though; give each object some data and enhance its surface. […]
I don't think pure ray tracing will replace rasterization. A hybrid will be much more feasible, I guess. Why? Rasterization basically uses cheap tricks to make something look nice and shiny, and using cheap tricks will always be cheaper than physically correct ray tracing.
Even if today's rasterization effects can be achieved with hardware ray tracing in three years, think about which effects rasterization will be achieving at that point. My point is: for the next couple of years rasterization will stay ahead of ray tracing, because it just doesn't need that much computation power.
On the other hand, you will see a trend that goes like this: the cheap tricks used in rasterization will grow ever more expensive as they try to look good. There will probably be some point at which the advanced rasterization tricks become as expensive to compute as ray tracing. At that point it gets interesting to start using ray tracing.
For some effects this turnover point will be reached earlier, for some later, and for others maybe never. So... personally, I expect we will see more hybrids in the coming years.
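The turnover argument can be made concrete with a toy cost model (every number here is made up purely for illustration): assume the cost of stacked raster tricks grows super-linearly with the desired effect quality, while a ray-traced version has a high fixed cost that grows slowly.

```python
def raster_cost(quality):
    """Toy model: stacked raster tricks (shadow maps, SSR, ...) grow
    super-linearly as you chase more correct-looking results."""
    return 1.0 + 0.5 * quality ** 2

def raytrace_cost(quality):
    """Toy model: ray tracing is expensive up front but grows slowly --
    more quality mostly means shooting a few more rays."""
    return 20.0 + 0.5 * quality

def crossover_quality(max_q=100):
    """First quality level at which ray tracing becomes the cheaper option."""
    for q in range(max_q):
        if raytrace_cost(q) < raster_cost(q):
            return q
    return None

print(crossover_quality())  # -> 7 under these made-up cost curves
```

Different effects have different curves, which is exactly why the turnover point arrives at different times for different effects, and why hybrids make sense in the meantime.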
-
Rasterization basically uses cheap tricks to make something look nice and shiny. Using cheap tricks will always be cheaper than physically correct ray tracing.
PS: Tessellation is repetitive and therefore boring. It also doesn't address the problem with textures in the first place: the lack of power to calculate surface detail.
PS2: Tessellation also consumes more design time that could be better spent elsewhere.
Last edited by V!NCENT; 14 October 2009, 09:19 AM.