NVIDIA Publicly Releases Its OpenCL Linux Drivers


  • mirv
    replied
    Ah! All very good points, but to avoid thread hijacking, I might start another where we can discuss it some more...



  • V!NCENT
    replied
    Originally posted by mirv View Post
    You might want to quote the parts that talk about problems with ray tracing too. Otherwise you're just trolling.
    That's fine, since it's not my intention to troll, but it will be a lot of text.

    PC Perspective: Ray tracing obviously has some advantages when it comes to high levels of geometry in a scene, but what are you doing to offset that advantage in traditional raster renderers?

    David Kirk, NVIDIA: I'm not sure which specific advantages you are referring to, but I can cover some common misconceptions that are promulgated by the CPU ray tracing community. Some folks make the argument that rasterization is inherently slower because you must process and attempt to draw every triangle (even invisible ones); thus, at best the execution time scales linearly with the number of triangles. Ray tracing advocates boast that a ray tracer with some sort of hierarchical acceleration data structure can run faster, because not every triangle must be drawn and that ray tracing will always be faster for complex scenes with lots of triangles, but this is provably false.

    There are several fallacies in this line of thinking, but I will cover only two. First, the argument that the hierarchy allows the ray tracer to not visit all of the triangles ignores the fact that all triangles must be visited to build the hierarchy in the first place. Second, most rendering engines in games and professional applications that use rasterization also use hierarchy and culling to avoid visiting and drawing invisible triangles. Backface culling has long been used to avoid drawing triangles that are facing away from the viewer (the backsides of objects, hidden behind the front sides), and hierarchical culling can be used to avoid drawing entire chunks of the scene. Thus there is no inherent advantage in ray tracing vs. rasterization with respect to hierarchy and culling.
    First of all, nVidia is trolling here, because high levels of geometry are not possible with triangles, simply because triangles will be triangles: no truly rounded shapes. It is true that triangle rendering is faster than ray tracing, up to the point where it reaches critical mass. For example: CryEngine, Crytek's engine that powers Crysis. The reason this engine is so beloved is not its graphical effects, but that it pushes the limits of triangle rendering: you can have an entire forest rendered with shadows. It is almost reaching the point where ray tracing actually becomes faster at doing this. Why is that? Because once you have the compute power to trace rays from every pixel on your screen, geometry detail doesn't slow down the rendering process; it doesn't matter if you are in a square room or in a forest: the computations remain the same.
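    For readers who haven't seen it, the backface culling Kirk mentions above boils down to a single dot-product test per triangle. A minimal C sketch of that test (hypothetical names, assuming counter-clockwise vertex winding; not taken from any actual driver code):

    Code:
    #include <stdio.h>

    typedef struct { float x, y, z; } Vec3;

    static Vec3  vsub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
    static float vdot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  vcross(Vec3 a, Vec3 b) {
        Vec3 r = { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
        return r;
    }

    /* Returns 1 if a triangle with counter-clockwise winding faces away from
     * the viewer at 'eye' and can therefore be skipped before rasterization. */
    static int is_backfacing(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 eye)
    {
        Vec3 normal = vcross(vsub(v1, v0), vsub(v2, v0));
        Vec3 view   = vsub(v0, eye);            /* from the eye toward the triangle */
        return vdot(normal, view) >= 0.0f;      /* facing away (or edge-on) */
    }

    int main(void)
    {
        Vec3 eye = { 0.0f, 0.0f, 5.0f };
        Vec3 a = { -1.0f, 0.0f, 0.0f }, b = { 1.0f, 0.0f, 0.0f }, c = { 0.0f, 1.0f, 0.0f };
        printf("triangle is %s\n", is_backfacing(a, b, c, eye) ? "backfacing" : "front-facing");
        return 0;
    }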

    PC Perspective: Antialiasing is somewhat problematic for ray tracing, since the "rays" being cast either hit something, or they don't. Hence post-processing effects might be problematic. Are there other limitations that ray tracing has that you are aware of?
    This is absolute nonsense! Each ray can carry information about, for example, its level of brightness, so you can push it through post-processing, HDR and all that stuff. The second claim is that you can't do anti-aliasing. That is also totally not the case, because you could cast rays from the centre of a square of four pixels and do some kind of texture-filtering post-processing. This way you can even enhance the rendered image, like what happens in reality-to-TV conversion; by applying different, 'non-square' matrices you can almost achieve an analog PAL/NTSC type of picture with about 600x800 rays. It doesn't cost the overhead of AA in rasterized rendering, where it can be a serious bottleneck.
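    To make the anti-aliasing point concrete, here is a minimal sketch of supersampling with rays: four sub-pixel samples per pixel, averaged. The trace() function here is just a stand-in checkerboard so the example runs on its own; in a real ray tracer it would return the shaded result of whatever the ray through (u, v) hits.

    Code:
    #include <stdio.h>

    #define W 16
    #define H 8

    /* Stand-in for a real ray tracer: "radiance" seen through the image-plane
     * point (u, v) in [0,1)^2.  A procedural checkerboard keeps it self-contained. */
    static double trace(double u, double v)
    {
        int cx = (int)(u * 4.0), cy = (int)(v * 4.0);
        return ((cx + cy) & 1) ? 1.0 : 0.0;
    }

    int main(void)
    {
        /* Instead of one ray through the pixel centre, shoot four rays at
         * sub-pixel offsets and average the results (supersampling AA). */
        static const double off[4][2] = {
            { 0.25, 0.25 }, { 0.75, 0.25 }, { 0.25, 0.75 }, { 0.75, 0.75 }
        };

        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                double sum = 0.0;
                for (int s = 0; s < 4; ++s)
                    sum += trace((x + off[s][0]) / W, (y + off[s][1]) / H);
                double pixel = sum / 4.0;        /* anti-aliased value */
                putchar(pixel > 0.66 ? '#' : pixel > 0.33 ? '+' : '.');
            }
            putchar('\n');
        }
        return 0;
    }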

    PC Perspective: While the benefits of ray tracing do look compelling, why is it that NVIDIA and AMD/ATI have concentrated on the traditional rasterization architectures rather than going ray tracing?

    David Kirk, NVIDIA: Reality intrudes into the most fantastic ideas and plans. Virtually all games and professional applications make use of the modern APIs for graphics: OpenGL(tm) and DirectX(tm). These APIs use rasterization, not ray tracing. So, the present environment is almost entirely rasterization-based. We would be foolish not to build hardware that runs current applications well.
    Which is why I am stepping up to the plate to make an implementation.

    PC Perspective: Is there an advantage in typical pixel shader effects with ray tracing or rasterization? Or do many of these effects work identically regardless?

    David Kirk, NVIDIA: Whether rendering with rasterization or ray tracing, every visible surface needs to be shaded and lit or shadowed. Pixel shaders run very effectively on rasterization hardware and the coherence, or similarity, of nearby pixels is exploited by the processor architecture and special graphics hardware, such as texture caches. Ray tracers don't exploit that coherence in the same way. This is partly because a "shader" in a ray tracer often shoots more rays, for shadows, reflections, or other effects. There are other opportunities to exploit coherence in ray tracing, such as shooting bundles or packets of rays. These techniques introduce complexity into the ray tracing software, though.
    Again, complete BS, since it depends on the way you code it. You can gain efficiency by casting rays against matrixed (tiled) textures, and then only casting further rays in the parts of the texture that have already been raycast. nVidia is either trying to cover its ass, or I am just a plain genius.
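    For what it's worth, the "bundles or packets of rays" Kirk mentions are largely a data-layout idea: group the primary rays of adjacent pixels so they can be marched through the acceleration structure together and share memory traffic. A rough C sketch of just that layout (hypothetical names; the actual packet traversal is omitted):

    Code:
    #include <stdio.h>
    #include <math.h>

    /* Four primary rays for a 2x2 block of neighbouring pixels.  Tracing them
     * as one packet lets a ray tracer amortize BVH node tests and memory
     * fetches across coherent rays, analogous to how a rasterizer exploits
     * the coherence of neighbouring pixels. */
    typedef struct {
        float ox, oy, oz;           /* shared origin (pinhole camera)   */
        float dx[4], dy[4], dz[4];  /* one normalized direction per ray */
    } RayPacket;

    static void make_packet(RayPacket *p, int px, int py, int width, int height)
    {
        p->ox = p->oy = p->oz = 0.0f;   /* camera sits at the origin */
        for (int i = 0; i < 4; ++i) {
            int x = px + (i & 1), y = py + (i >> 1);
            float u = (x + 0.5f) / width  * 2.0f - 1.0f;
            float v = (y + 0.5f) / height * 2.0f - 1.0f;
            float len = sqrtf(u * u + v * v + 1.0f);
            p->dx[i] = u / len;  p->dy[i] = v / len;  p->dz[i] = -1.0f / len;
        }
    }

    int main(void)
    {
        RayPacket p;
        make_packet(&p, 4, 2, 16, 16);   /* packet covering pixels (4,2)..(5,3) */
        for (int i = 0; i < 4; ++i)
            printf("ray %d: dir = (%.3f, %.3f, %.3f)\n", i, p.dx[i], p.dy[i], p.dz[i]);
        return 0;
    }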

    PC Perspective: Do you see a convergence between ray tracing and rasterization? Or do the disadvantages of both render types make it unpalatable?

    David Kirk, NVIDIA: I don't exactly see a convergence, but I do believe that hybrid rendering is the future.
    I see this as an intermediate step on the path to ray tracing. The reason is that very soon CPUs will have so much multithreading that it doesn't matter, and under deadline pressure professional programmers just take the easy, dirty and lazy route, while hybrid rendering is simply too complex and takes up more time. It also depends on what implementations are available in the future, and in the end implementations decide how stuff gets done, not nVidia. Sorry guys...

    PC Perspective: In terms of die size, which is more efficient in how they work?

    David Kirk, NVIDIA: I don't think that ray tracing vs. rasterization has anything to do with die size. Rasterization hardware is very small and very high-performance, so it is an efficient use of silicon die size. Rasterization and ray tracing both require a lot of other processing, for geometry processing, shading, hierarchy traversal, and intersection calculations. GPU processor cores, whether accessed through graphics APIs or a C/C++ programming interface such as CUDA, are a very efficient use of silicon for processing.
    Rasterization hardware is only relevant until CPUs get powerful enough. Of course rasterization is king today, but only because it's the only way; the future will surely change that, simply because GPUs are an additional cost and no-one will buy a GPU once there are powerful CPUs that can handle everything and have on-die GPUs for desktop compositing and video playback. Green computing also comes to mind.

    PC Perspective: Because GPUs are becoming more general processing devices, do you think that next generation (or gen +2) would be able to handle some ray tracing routines? Would there be a need for them to handle those routines?

    David Kirk, NVIDIA: There are GPU ray tracing programs now. Several have been published in research conferences such as Siggraph. Currently, those programs are roughly as fast as any CPU-based ray tracing program. I suspect that as people learn more about programming in CUDA and become more proficient at GPU computing, these programs will become significantly faster.
    Couldn't have said it better...



  • mirv
    replied
    You might want to quote the parts that talk about problems with ray tracing too. Otherwise you're just trolling.



  • V!NCENT
    replied
    Originally posted by mirv View Post
    This, and articles it links to, are a good read:

    http://www.pcper.com/article.php?aid=530


    Ray tracing is nice, but rasterisation isn't going anywhere.
    "There are many people that don't see ray tracing as the holy grail of gaming graphics, however. A corporation like NVIDIA, that has a vested interested in graphics beyond the scale of any other organization today, has to take a more pragmatic look at rendering technologies including both rasterization and ray tracing; unlike Intel they have decades of high-end rasterization research behind them and see the future of 3D graphics remaining with that technology rather than switching to something new like ray tracing. "

    Of course nVidia doesn't want to go out of business and throw a shitload of R&D money and acquired IP and patents out of the window.

    The point is that ray tracing looks better and is a more sophisticated rendering technique that allows for more detail and truly round objects. The only downside is that it is more calculation-intensive, so the only reason it is not mainstream is that desktop PCs do not have the computational power yet.

    People also say that you need an API, and that's nonsense, since ray tracing is extremely easy to code and takes less effort than coding a rasterizing renderer (see the sketch at the end of this post). It is also cheaper to develop, because with ray tracing you are not bound to just one type of game engine. You can make one engine for an endless number of different games.

    I was about to make one in OpenCL, but because there is no good support for it yet, I decided to start making one in software for Haiku, because it is extremely fast and easy to code for, and it can later be optimised with OpenCL kernels once Gallium3D lands with HW acceleration for my ATI card.
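    To back up the "easy to code" claim, this is roughly what the core of a toy CPU ray tracer looks like: one sphere, one directional light, Lambert shading, ASCII output. It is only a sketch under those assumptions; a usable engine adds acceleration structures, materials, shadows, and so on.

    Code:
    #include <stdio.h>
    #include <math.h>

    typedef struct { double x, y, z; } Vec;

    static Vec    vsub(Vec a, Vec b) { Vec r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
    static double vdot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec    vnorm(Vec a) { double l = sqrt(vdot(a, a)); Vec r = { a.x / l, a.y / l, a.z / l }; return r; }

    /* Ray/sphere intersection: distance along the (normalized) ray, or -1 for a miss. */
    static double hit_sphere(Vec orig, Vec dir, Vec center, double radius)
    {
        Vec oc = vsub(orig, center);
        double b = vdot(oc, dir);
        double c = vdot(oc, oc) - radius * radius;
        double disc = b * b - c;
        if (disc < 0.0) return -1.0;
        double t = -b - sqrt(disc);
        return (t > 0.0) ? t : -1.0;
    }

    int main(void)
    {
        const int W = 60, H = 30;
        Vec eye    = { 0.0, 0.0, 3.0 };
        Vec center = { 0.0, 0.0, 0.0 };
        Vec light  = { -1.0, 1.0, 1.0 };
        const char *shades = ".:-=+*#%@";

        light = vnorm(light);                     /* directional light */
        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                Vec dir = { (x - W / 2.0) / (W / 2.0),
                            -(y - H / 2.0) / (H / 2.0), -1.5 };
                dir = vnorm(dir);
                double t = hit_sphere(eye, dir, center, 1.0);
                if (t < 0.0) { putchar(' '); continue; }
                Vec p = { eye.x + t * dir.x, eye.y + t * dir.y, eye.z + t * dir.z };
                Vec n = vnorm(vsub(p, center));
                double diff = vdot(n, light);     /* Lambert shading */
                putchar(shades[(int)((diff < 0.0 ? 0.0 : diff) * 8.999)]);
            }
            putchar('\n');
        }
        return 0;
    }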



  • mirv
    replied
    This, and articles it links to, are a good read:

    http://www.pcper.com/article.php?aid=530


    Ray tracing is nice, but rasterisation isn't going anywhere.



  • V!NCENT
    replied
    Originally posted by mirv View Post
    Nobody bite, it's a trap!
    A pool/billiard table rendered using a ray tracer from NVIDIA's OptiX package, utilizing the available FLOPS of the GPU. For more info: see also the article on...

    GPU raytracing

    This video shows a progression of ray-traced shaders executing on a cluster of IBM QS20 Cell blades. The model comprises over 300,000 triangles and renders a...

    Three years, max...
    Last edited by V!NCENT; 05 October 2009, 08:59 PM.



  • mirv
    replied
    Originally posted by V!NCENT View Post
    OpenCL could be used for next-generation graphics -> ray tracing. It has the power to succeed OpenGL entirely.
    Nobody bite, it's a trap!



  • V!NCENT
    replied
    OpenCL could be used for next-generation graphics -> ray tracing. It has the power to succeed OpenGL entirely.



  • BlackStar
    replied
    Originally posted by Ranguvar View Post
    Oy vey.

    Basically, CPUs are general-purpose processors, and GPUs are special-purpose processors (again, _basically_!). [...] start making the GPU more general-purpose, which defeats the purpose of the GPU in the first place.
    Cough, Larrabee, cough. Not to mention the last 9 years of GPU design.

    GPUs are becoming more general-purpose, generation by generation. They are becoming suitable for increasingly complex tasks: five years ago you could hardly squeeze out a Perlin noise generator; nowadays you can generate complete voxel landscapes, accelerate F@H or even Photoshop.

    I'm not saying that GPUs will ever reach CPUs in versatility (their massively parallel design prohibits that) - however, there's no indication that they'll stop becoming more generic in the near future.

    Yes, you could write an awesome encoder and have it offload some portions of the encoding task to the GPU for an appreciable speed boost. [...] The x264 devs have discussed this. The problem is (from what they've said) that GPU programming is _hard_. Or, at least hard to get it anywhere near useful for x264, which has the vast majority of its processor-intensive code written in assembly.

    So basically, from what I understand, they've said "Patches welcome, it's too hard for too little gain for us at least for now."
    Obviously, GPGPU is a very new field. It's about two years old, in fact (introduction of G80), whereas we have more than 30 years of accumulated CPU programming experience and tools.

    @Blackstar: Most general-purpose GPU "research", etc. is just that -- research. If I may quote the lead x264 devs (akupenguin and DS, source: http://mirror05.x264.nl/Dark/loren.html):

    <Dark_Shikari> ok, its a motion estimation algorithm in a paper
    <Dark_Shikari> 99% chance its totally useless
    <wally4u> because?
    <pengvado> because there have been about 4 useful motion estimation papers ever, and a lot more than 400 attempts
    Research is research, duh - what else would it be?

    You can bet that as soon as a researcher produces a viable solution, everyone and their mother will rush to implement it (first to market and all that). In a little while, even Nero will have it.

    Remember the rush to implement relief mapping a few years ago? The situation is pretty similar: at first, the technology was out of our reach - we didn't have the hardware, experience (or even drivers!) to implement this. Then someone came up with a viable approach, the hardware got better and people rushed to implement this cool new effect. Nowadays, if a game or engine demo doesn't display a form of relief mapping, its graphics are bashed as obsolete!

    The majority of the rest is marketing chest-thumping by those who would benefit. AMD, NVIDIA, etc. Show me a _real-world_ test where a believably configured PC (GTX 280s are not paired with Intel Atoms) with a GPU encoder beats a current build of x264's high-speed presets at the same quality (or better quality at same speed), and I'll eat my words.

    OpenCL is cool, and I think it's important in quite a few areas (Folding@home style computations and video decoding come to mind), but they've really gotta stop trying to make it seem like a silver bullet. A GPU is special-purpose. You can make it general-purpose -- but then you have a CPU anyways.
    I don't recall anyone claiming that OpenCL is a silver bullet - but maybe I just don't pay attention to marketing attempts. Actually, it is developers who have undeniably shown a high level of interest in the technology.

    With good reason, too. CPUs are getting more and more parallel but parallel programming is still something of a dark art. Anything that makes our lives easier is more than welcome: OpenCL, DirectCompute, Parallel.Net...

    Also, don't forget that OpenCL can be used on more than just GPUs.
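    A small illustration of that last point: the standard OpenCL platform/device queries will happily report CPU devices alongside GPUs, assuming a vendor runtime that exposes one (e.g. AMD's Stream SDK). Minimal C host code, link with -lOpenCL; error handling kept to a minimum for brevity:

    Code:
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platforms[8];
        cl_uint num_platforms = 0;
        clGetPlatformIDs(8, platforms, &num_platforms);

        for (cl_uint p = 0; p < num_platforms; ++p) {
            cl_device_id devices[16];
            cl_uint num_devices = 0;
            /* CL_DEVICE_TYPE_ALL picks up CPU, GPU and accelerator devices alike. */
            if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16,
                               devices, &num_devices) != CL_SUCCESS)
                continue;

            for (cl_uint d = 0; d < num_devices; ++d) {
                char name[256] = "";
                cl_device_type type = 0;
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof name, name, NULL);
                clGetDeviceInfo(devices[d], CL_DEVICE_TYPE, sizeof type, &type, NULL);
                printf("%s (%s)\n", name,
                       (type & CL_DEVICE_TYPE_GPU) ? "GPU" :
                       (type & CL_DEVICE_TYPE_CPU) ? "CPU" : "other");
            }
        }
        return 0;
    }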
    Last edited by BlackStar; 05 October 2009, 02:36 PM.



  • Ranguvar
    replied
    Oy vey.

    Basically, CPUs are general-purpose processors, and GPUs are special-purpose processors (again, _basically_!). The GPU can do some things at a speed that destroys any CPU at that price point, including a couple handy things for decoding video. However, the world's most awesome GPU API and encoder still won't make a GPU faster than a CPU for a task it's not designed for, unless your GPU is way more powerful than the CPU -- or they start making the GPU more general-purpose, which defeats the purpose of the GPU in the first place. And no, OpenCL will not help budget buyers -- their GPU is their CPU anyways!

    Yes, you could write an awesome encoder and have it offload some portions of the encoding task to the GPU for an appreciable speed boost. The GPU _can_ handle general computations, just not as well as the CPU, so it would still be some extra muscle (and a very select few parts of the encoding process could work very well on a GPU). The x264 devs have discussed this. The problem is (from what they've said) that GPU programming is _hard_. Or, at least hard to get it anywhere near useful for x264, which has the vast majority of its processor-intensive code written in assembly.

    So basically, from what I understand, they've said "Patches welcome, it's too hard for too little gain for us at least for now."


    @Blackstar: Most general-purpose GPU "research", etc. is just that -- research. If I may quote the lead x264 devs (akupenguin and DS, source: http://mirror05.x264.nl/Dark/loren.html):

    <Dark_Shikari> ok, its a motion estimation algorithm in a paper
    <Dark_Shikari> 99% chance its totally useless
    <wally4u> because?
    <pengvado> because there have been about 4 useful motion estimation papers ever, and a lot more than 400 attempts


    The majority of the rest is marketing chest-thumping by those who would benefit. AMD, NVIDIA, etc. Show me a _real-world_ test where a believably configured PC (GTX 280s are not paired with Intel Atoms) with a GPU encoder beats a current build of x264's high-speed presets at the same quality (or better quality at same speed), and I'll eat my words.

    OpenCL is cool, and I think it's important in quite a few areas (Folding@home style computations and video decoding come to mind), but they've really gotta stop trying to make it seem like a silver bullet. A GPU is special-purpose. You can make it general-purpose -- but then you have a CPU anyways.
    Last edited by Ranguvar; 03 October 2009, 01:24 AM.

