NVIDIA Publicly Releases Its OpenCL Linux Drivers
-
Ah! All very good points, but to avoid thread hijacking, I might start another where we can discuss it some more...
-
Originally posted by mirv:
You might want to quote the parts that talk about problems with ray tracing too. Otherwise you're just trolling.
PC Perspective: Ray tracing obviously has some advantages when it comes to high levels of geometry in a scene, but what are you doing to offset that advantage in traditional raster renderers?
David Kirk, NVIDIA: I'm not sure which specific advantages you are referring to, but I can cover some common misconceptions that are promulgated by the CPU ray tracing community. Some folks make the argument that rasterization is inherently slower because you must process and attempt to draw every triangle (even invisible ones), and thus, at best, the execution time scales linearly with the number of triangles. Ray tracing advocates boast that a ray tracer with some sort of hierarchical acceleration data structure can run faster, because not every triangle must be drawn, and that ray tracing will always be faster for complex scenes with lots of triangles. But this is provably false.
There are several fallacies in this line of thinking, but I will cover only two. First, the argument that the hierarchy allows the ray tracer to not visit all of the triangles ignores the fact that all triangles must be visited to build the hierarchy in the first place. Second, most rendering engines in games and professional applications that use rasterization also use hierarchy and culling to avoid visiting and drawing invisible triangles. Backface culling has long been used to avoid drawing triangles that are facing away from the viewer (the backsides of objects, hidden behind the front sides), and hierarchical culling can be used to avoid drawing entire chunks of the scene. Thus there is no inherent advantage in ray tracing vs. rasterization with respect to hierarchy and culling.
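The backface culling Kirk mentions is simple to sketch. A minimal pass in Python (the names and conventions here are illustrative only: counter-clockwise winding is assumed, and a triangle counts as front-facing when its normal points back toward the camera):

```python
def cull_backfaces(triangles, view_dir):
    """Keep only triangles whose face normal points toward the viewer.

    triangles: sequences of three 3D vertices each, counter-clockwise winding.
    view_dir: the direction the camera is looking (into the scene).
    """
    visible = []
    for v0, v1, v2 in triangles:
        # Face normal via the cross product of two edge vectors.
        e1 = [v1[i] - v0[i] for i in range(3)]
        e2 = [v2[i] - v0[i] for i in range(3)]
        n = [e1[1] * e2[2] - e1[2] * e2[1],
             e1[2] * e2[0] - e1[0] * e2[2],
             e1[0] * e2[1] - e1[1] * e2[0]]
        # Front-facing: the normal opposes the view direction.
        if sum(n[i] * view_dir[i] for i in range(3)) < 0:
            visible.append((v0, v1, v2))
    return visible
```

A back-facing triangle is rejected with one dot product, long before any per-pixel work, which is why this test has been standard in rasterizers for decades.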
PC Perspective: Antialiasing is somewhat problematic for ray tracing, since the "rays" being cast either hit something, or they don't. Hence post-processing effects might be problematic. Are there other limitations that ray tracing has that you are aware of?
PC Perspective: While the benefits of ray tracing do look compelling, why is it that NVIDIA and AMD/ATI have concentrated on the traditional rasterization architectures rather than going ray tracing?
David Kirk, NVIDIA: Reality intrudes into the most fantastic ideas and plans. Virtually all games and professional applications make use of the modern APIs for graphics: OpenGL(tm) and DirectX(tm). These APIs use rasterization, not ray tracing. So, the present environment is almost entirely rasterization-based. We would be foolish not to build hardware that runs current applications well.
PC Perspective: Is there an advantage in typical pixel shader effects with ray tracing or rasterization? Or do many of these effects work identically regardless?
David Kirk, NVIDIA: Whether rendering with rasterization or ray tracing, every visible surface needs to be shaded and lit or shadowed. Pixel shaders run very effectively on rasterization hardware and the coherence, or similarity, of nearby pixels is exploited by the processor architecture and special graphics hardware, such as texture caches. Ray tracers don't exploit that coherence in the same way. This is partly because a "shader" in a ray tracer often shoots more rays, for shadows, reflections, or other effects. There are other opportunities to exploit coherence in ray tracing, such as shooting bundles or packets of rays. These techniques introduce complexity into the ray tracing software, though.
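The ray-packet idea Kirk refers to can be sketched as a bounding-volume test shared by a bundle of rays: the node's bounds are inspected once, and the whole packet descends only if at least one ray hits. A toy sketch in Python (the slab test is the standard one; the any-ray-hits descent policy shown is just one simple choice, not a statement of how any particular ray tracer works):

```python
def ray_hits_aabb(origin, direction, box_min, box_max):
    """Standard slab test: does a single ray hit an axis-aligned box?"""
    t_near, t_far = 0.0, float("inf")
    for i in range(3):
        if direction[i] == 0.0:
            # Ray is parallel to these slabs: it must start between them.
            if not (box_min[i] <= origin[i] <= box_max[i]):
                return False
            continue
        t1 = (box_min[i] - origin[i]) / direction[i]
        t2 = (box_max[i] - origin[i]) / direction[i]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

def packet_visits_node(origins, directions, box_min, box_max):
    """A packet descends into a BVH node only if at least one of its rays
    hits the node's bounds, so the node is fetched once for the bundle."""
    return any(ray_hits_aabb(o, d, box_min, box_max)
               for o, d in zip(origins, directions))
```

The coherence win is in memory traffic: nearby primary rays usually agree about which nodes they hit, so one node fetch serves the whole bundle, much as a texture cache serves a block of nearby pixels in a rasterizer.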
PC Perspective: Do you see a convergence between ray tracing and rasterization? Or do the disadvantages of both render types make it unpalatable?
David Kirk, NVIDIA: I don't exactly see a convergence, but I do believe that hybrid rendering is the future.
PC Perspective: In terms of die size, which approach is the more efficient use of silicon?
David Kirk, NVIDIA: I don't think that ray tracing vs. rasterization has anything to do with die size. Rasterization hardware is very small and very high-performance, so it is an efficient use of silicon die size. Rasterization and ray tracing both require a lot of other processing, for geometry processing, shading, hierarchy traversal, and intersection calculations. GPU processor cores, whether accessed through graphics APIs or a C/C++ programming interface such as CUDA, are a very efficient use of silicon for processing.
PC Perspective: Because GPUs are becoming more general processing devices, do you think that next generation (or gen +2) would be able to handle some ray tracing routines? Would there be a need for them to handle those routines?
David Kirk, NVIDIA: There are GPU ray tracing programs now. Several have been published in research conferences such as Siggraph. Currently, those programs are roughly as fast as any CPU-based ray tracing program. I suspect that as people learn more about programming in CUDA and become more proficient at GPU computing, these programs will become significantly faster.
-
You might want to quote the parts that talk about problems with ray tracing too. Otherwise you're just trolling.
-
Originally posted by mirv:
This, and articles it links to, are a good read:
http://www.pcper.com/article.php?aid=530
Ray tracing is nice, but rasterisation isn't going anywhere.
Of course nVidia doesn't want to go out of business and throw a shitload of R&D money and acquired IP and patents out of the window.
The point is that raytracing looks better and is a more sophisticated rendering technique that allows for more detail and truly round objects. The only downside is that it is more calculation intensive, so the only reason it is not mainstream is that desktop PCs do not have the calculation power yet.
People also say that you need an API, and that's nonsense, since raytracing is extremely easy to code and takes less effort than coding a rasterising renderer. It is also cheaper to develop, because with raytracing one is not bound to just one type of game engine. You can make one engine for an endless amount of different games.
I was about to make one in OpenCL, but since the support isn't good enough yet, I decided to start making one in software for Haiku, which is extremely fast and easy to code for. It can be optimised with OpenCL kernels later on, once Gallium3D lands with HW acceleration for my ATI card.
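For what it's worth, the core of such a toy ray tracer really is small: ray-sphere intersection plus Lambert shading. A minimal sketch in Python (the scene and all names are made up for illustration, and ray directions are assumed normalized):

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance t along the ray, or None."""
    oc = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is normalized, so a == 1
    if disc < 0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def shade(origin, direction, center, radius, light_dir):
    """Grey Lambertian shading for one ray; 0.0 (black) on a miss."""
    t = intersect_sphere(origin, direction, center, radius)
    if t is None:
        return 0.0
    hit = [origin[i] + t * direction[i] for i in range(3)]
    normal = [(hit[i] - center[i]) / radius for i in range(3)]
    return max(0.0, sum(normal[i] * light_dir[i] for i in range(3)))
```

Looping `shade` over one ray per pixel gives a complete (if very slow) renderer; shadows and reflections are just more calls to the same intersection routine, which is the simplicity the post is talking about.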
-
This, and articles it links to, are a good read:
http://www.pcper.com/article.php?aid=530
Ray tracing is nice, but rasterisation isn't going anywhere.
-
Originally posted by mirv:
Nobody bite, it's a trap!

[Video: a pool billiard table rendered with a ray tracer in NVIDIA's OptiX package, using the GPU's available FLOPS.]

[Video: GPU raytracing. A progression of ray-traced shaders executing on a cluster of IBM QS20 Cell blades; the model comprises over 300,000 triangles.]

Three years, max...

Last edited by V!NCENT; 05 October 2009, 08:59 PM.
-
OpenCL could be used for the next-generation graphics -> Ray Tracing. It has the power to succeed OpenGL entirely.
-
Originally posted by Ranguvar:
Oy vey.
Basically, CPUs are general-purpose processors, and GPUs are special-purpose processors (again, _basically_!). [...] start making the GPU more general-purpose, which defeats the purpose of the GPU in the first place.
GPUs are becoming more general-purpose, generation by generation. They are becoming suitable for increasingly complex tasks: five years ago, you could hardly squeeze a Perlin noise generator out of one. Nowadays you can generate complete voxel landscapes, accelerate F@H or even Photoshop.
I'm not saying that GPUs will ever reach CPUs in versatility (their massively parallel design prohibits that) - however, there's no indication that they'll stop becoming more generic in the near future.
Yes, you could write an awesome encoder and have it offload some portions of the encoding task to the GPU for an appreciable speed boost. [...] The x264 devs have discussed this. The problem (from what they've said) is that GPU programming is _hard_. Or, at least, hard to get anywhere near useful for x264, which has the vast majority of its processor-intensive code written in assembly.
So basically, from what I understand, they've said "Patches welcome, it's too hard for too little gain for us at least for now."
@Blackstar: Most general-purpose GPU "research", etc. is just that -- research. If I may quote the lead x264 devs (akupenguin and DS, source: http://mirror05.x264.nl/Dark/loren.html):
<Dark_Shikari> ok, its a motion estimation algorithm in a paper
<Dark_Shikari> 99% chance its totally useless
<wally4u> because?
<pengvado> because there have been about 4 useful motion estimation papers ever, and a lot more than 400 attempts
You can bet that as soon as a researcher produces a viable solution, everyone and their mother will rush to implement it (first to market and all that). In a little while, even Nero will have it.
Remember the rush to implement relief mapping a few years ago? The situation is pretty similar: at first, the technology was out of our reach - we didn't have the hardware, experience (or even drivers!) to implement it. Then someone came up with a viable approach, the hardware got better, and people rushed to implement this cool new effect. Nowadays, if a game or engine demo doesn't display some form of relief mapping, its graphics are bashed as obsolete!
The majority of the rest is marketing chest-thumping by those who would benefit: AMD, NVIDIA, etc. Show me a _real-world_ test where a believably configured PC (GTX 280s are not paired with Intel Atoms) with a GPU encoder beats a current build of x264's high-speed presets at the same quality (or better quality at the same speed), and I'll eat my words.
OpenCL is cool, and I think it's important in quite a few areas (Folding@home style computations and video decoding come to mind), but they've really gotta stop trying to make it seem like a silver bullet. A GPU is special-purpose. You can make it general-purpose -- but then you have a CPU anyways.
With good reason, too. CPUs are getting more and more parallel but parallel programming is still something of a dark art. Anything that makes our lives easier is more than welcome: OpenCL, DirectCompute, Parallel.Net...
Also, don't forget that OpenCL can be used on more than just GPUs.

Last edited by BlackStar; 05 October 2009, 02:36 PM.
-
Oy vey.
Basically, CPUs are general-purpose processors, and GPUs are special-purpose processors (again, _basically_!). The GPU can do some things at a speed that destroys any CPU at that price point, including a couple handy things for decoding video. However, the world's most awesome GPU API and encoder still won't make a GPU faster than a CPU for a task it's not designed for, unless your GPU is way more powerful than the CPU -- or they start making the GPU more general-purpose, which defeats the purpose of the GPU in the first place. And no, OpenCL will not help budget buyers -- their GPU is their CPU anyways!
Yes, you could write an awesome encoder and have it offload some portions of the encoding task to the GPU for an appreciable speed boost. The GPU _can_ handle general computations, just not as well as the CPU, so it would still be some extra muscle (and a very select few parts of the encoding process could work very well on a GPU). The x264 devs have discussed this. The problem (from what they've said) is that GPU programming is _hard_. Or, at least, hard to get anywhere near useful for x264, which has the vast majority of its processor-intensive code written in assembly.
So basically, from what I understand, they've said "Patches welcome, it's too hard for too little gain for us at least for now."
@Blackstar: Most general-purpose GPU "research", etc. is just that -- research. If I may quote the lead x264 devs (akupenguin and DS, source: http://mirror05.x264.nl/Dark/loren.html):
<Dark_Shikari> ok, its a motion estimation algorithm in a paper
<Dark_Shikari> 99% chance its totally useless
<wally4u> because?
<pengvado> because there have been about 4 useful motion estimation papers ever, and a lot more than 400 attempts
The majority of the rest is marketing chest-thumping by those who would benefit: AMD, NVIDIA, etc. Show me a _real-world_ test where a believably configured PC (GTX 280s are not paired with Intel Atoms) with a GPU encoder beats a current build of x264's high-speed presets at the same quality (or better quality at the same speed), and I'll eat my words.
OpenCL is cool, and I think it's important in quite a few areas (Folding@home style computations and video decoding come to mind), but they've really gotta stop trying to make it seem like a silver bullet. A GPU is special-purpose. You can make it general-purpose -- but then you have a CPU anyways.

Last edited by Ranguvar; 03 October 2009, 01:24 AM.