Oops, I meant to say isn't really. Stupid lack of post editing.
The VDrift Racing Game Continues Speeding Up
-
Originally posted by marek View PostI think it's pretty clear from the list of implemented techniques that this game needs float textures for most of its graphics awesomeness. This is a huge problem in the open source driver stack since it's patented.
Comment
-
Originally posted by kbios View PostI'm really worried this will become more and more common as games get more sophisticated and mesa progresses. Will it become so common to kill (make useless) the OS graphic stack? I hope not.
I mean really... who needs floating point? Remove that decimal point! Change the meaning of the color values and see how that leads to much more computation speed.
Really, a child could have figured that out...
Comment
-
30fps means 33ms latency, that sucks. Good for a movie, bad for a game.
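The arithmetic behind that claim, as a quick sketch of my own:

```python
# Frame time = latency contributed by one frame, in milliseconds.
def frame_latency_ms(fps):
    return 1000.0 / fps

print(frame_latency_ms(30))   # roughly 33.3 ms per frame
print(frame_latency_ms(60))   # roughly 16.7 ms per frame
```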
Originally posted by kbios View PostI'm really worried this will become more and more common as games get more sophisticated and mesa progresses. Will it become so common to kill (make useless) the OS graphic stack? I hope not.
VINCENT> Feel free to continue writing random stuff.
Comment
-
Originally posted by marek View Post30fps means 33ms latency, that sucks. Good for a movie, bad for a game.
VINCENT> Feel free to continue writing random stuff.
Comment
-
OK, so in order not to sound like I'm shouting random crap, I've dug into this.
Basically I already gave you a workaround for floating point textures, but floating point textures themselves are not patented. Funny, eh? What you mean is the algorithm for shadow mapping.
We are talking about US Patent 7450123.
This patent describes a few things:
1. the use of layers
2. the algorithm for calculating how much light shines on a pixel by use of the z-buffer
3. its placement in the rendering pipeline
Now the workaround:
1. Use of layers.
There are two layers:
- the texture layer (oh yeah, really?)
- the depth layer
Why not merge those layers into a single layer as a preprocessing step?
For example, after each pixel color value comes the depth value.
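To sketch what I mean by merging the layers (a toy illustration of mine, not actual engine or driver code), store each pixel's depth right after its color so one flat array carries both:

```python
# Hypothetical RGBAD layout: each pixel is (r, g, b, a) followed by its depth.
def interleave(colors, depths):
    """colors: list of (r, g, b, a) tuples; depths: list of floats."""
    combined = []
    for (r, g, b, a), d in zip(colors, depths):
        combined.extend([r, g, b, a, d])  # stride of 5 values per pixel
    return combined

pixels = [(255, 0, 0, 255), (0, 255, 0, 255)]
depths = [0.25, 0.75]
print(interleave(pixels, depths))
# -> [255, 0, 0, 255, 0.25, 0, 255, 0, 255, 0.75]
```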
2. The algorithm
The algorithm puts the depth layer between the texture layer and the light source. The z-buffer and z'-buffer values for each pixel in the texture layer are calculated to determine how much light shines on each pixel color value in the texture layer. After that the color values are 'corrected'.
So why not calculate the angle at which the light source hits a given 'deep' pixel, and make the pixel darker the steeper that angle gets?
This eliminates the problem of doing anything with the z-buffer, because the further away the light source is, the shallower the angle will be. Of course, a further pass can then calculate the overall brightness of the rendered depth texture as if it were a normal texture, so the further away the light is, the less bright the rendered depth texture will be. That avoids the patented algorithm entirely while maintaining correctness.
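A minimal sketch of the angle-based idea (my interpretation only, with a made-up falloff constant): a Lambert-style rule where a more grazing light direction darkens the pixel, and distance attenuates the result, with no z-buffer comparison anywhere:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def shade(color, normal, light_dir, light_distance, falloff=0.1):
    """Darken 'color' by the angle to the light, then attenuate by distance."""
    # Steeper angle between surface normal and light direction -> darker pixel.
    angle_term = max(0.0, dot(normalize(normal), normalize(light_dir)))
    # Farther light -> dimmer pixel; falloff=0.1 is an arbitrary constant.
    distance_term = 1.0 / (1.0 + falloff * light_distance ** 2)
    k = angle_term * distance_term
    return tuple(c * k for c in color)

# Light directly above a flat surface, at zero distance: full brightness.
print(shade((1.0, 0.0, 0.0), (0, 0, 1), (0, 0, 1), 0.0))  # -> (1.0, 0.0, 0.0)
```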
3. Placement in the rendering pipeline
The third thing described in this patent is the placement of this algorithm in the pipeline. Now that we have chopped the algorithm up into multiple passes, you can literally place it almost anywhere you like, even avoiding the sub-pipeline of the algorithm itself as described by the patent.
C'mon how hard was that?
Comment
-
I don't mean any algorithm for shadow mapping. The patent for float colorbuffers is actually US Patent #6,650,327; it is owned by SGI, and their statement in the ARB_texture_float specification is pretty clear (they have sued ATI over it in the past).
As a graphics developer (that's what I was paid for), I use float textures all the time for various effects, and I would not like to design a rendering engine without them. I guess VDrift developers would agree with me here. When I request floats, I don't want scaled ints, because my algorithms would not work with them, clear?
As a driver developer, I would not mind having float textures and colorbuffers in Mesa and the driver I maintain, but most Linux distributions will not enable it by default since it's potentially infringing.
Comment
-
6,650,327 covers ramming floating-point operations through graphics cards.
327, however, does not cover non-drivers, like game engines, unless they perform float geometric calculations and access the framebuffer themselves while doing it.
Why not have the driver convert floats to scaled ints? It requires some work, but this could be seen as optimisation work, as floating point operations are much slower than integer ones.
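To illustrate what such a conversion might look like (my own assumption about what "scaled ints" means here: normalized floats in [0, 1] mapped onto 16-bit unsigned integers):

```python
SCALE = 65535  # largest 16-bit unsigned value

def float_to_u16(f):
    """Clamp a float to [0, 1] and scale it to an integer in [0, 65535]."""
    return int(round(max(0.0, min(1.0, f)) * SCALE))

def u16_to_float(i):
    return i / SCALE

x = 0.7
i = float_to_u16(x)
print(i, u16_to_float(i))  # round-trip is close to 0.7, but not exact
```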
Comment
-
Because you would lose precision. A float has 24 bits of precision, but its range is huge. A 16-bit int has a sucky range: that's 5 measly digits, so where would you put the fixed decimal point? There would be terrible losses on both sides of the point, and that's the best int my r500 can do.
Graphics algorithms are often tuned for the underlying data type to get the most out of the least. If you change it, you will break them, and the next day you get tons of bug reports.
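To make the precision point concrete, here is a sketch with one arbitrary choice of where to put the point: 8.8 fixed point, i.e. 8 integer bits and 8 fractional bits in a 16-bit value:

```python
FRAC_BITS = 8
ONE = 1 << FRAC_BITS          # 256 fixed-point steps per unit
MAX_FIXED = (1 << 15) - 1     # largest signed 16-bit value

def to_fixed(f):
    return int(round(f * ONE))

def from_fixed(i):
    return i / ONE

# Tiny values round away entirely...
print(from_fixed(to_fixed(0.001)))   # -> 0.0, the value is simply gone
# ...while large values overflow the 16-bit range:
print(to_fixed(200.0) > MAX_FIXED)   # -> True
```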
Comment