NVIDIA Publicly Releases Its OpenCL Linux Drivers

  • #41
    Originally posted by BlackStar View Post
    Unfortunately, you'd need an OpenCL-capable GPU for this - and OpenCL-capable GPUs tend to have dedicated (and faster!) video decoding blocks already. Given the shared buffers between OpenCL/OpenGL, this would probably be a viable approach for video decoding on Gallium drivers (unless there are plans to add a dedicated video decoding API/tracker - no idea).

    Now, OpenCL-based encoding and we are talking.
    I'm going to have to disagree with you here -- the folks behind x264 (the lead programmers, plus one of the major company supporters) aren't going to be doing anything with it, because it actually doesn't work as well for video encoding as the hype would make you believe.

    • #42
      Strange thing is, I'm seriously considering buying an ATi graphics card just because it's the first DX11 card out there... even though using such a card in Linux isn't viable at this stage, and maybe not for the next year or so. =/

      Personally, I'm thinking OpenCL is revolutionary. I bet in the next 4 years there will be some really cool stuff out there that takes advantage of this powerful tool. =)

      • #43
        Oy vey.

        Basically, CPUs are general-purpose processors, and GPUs are special-purpose processors (again, _basically_!). The GPU can do some things at a speed that destroys any CPU at that price point, including a couple handy things for decoding video. However, the world's most awesome GPU API and encoder still won't make a GPU faster than a CPU for a task it's not designed for, unless your GPU is way more powerful than the CPU -- or they start making the GPU more general-purpose, which defeats the purpose of the GPU in the first place. And no, OpenCL will not help budget buyers -- their GPU is their CPU anyways!

        Yes, you could write an awesome encoder and have it offload some portions of the encoding task to the GPU for an appreciable speed boost. The GPU _can_ handle general computations, just not as well as the CPU, so it would still be some extra muscle (and a very select few parts of the encoding process could work very well on a GPU). The x264 devs have discussed this. The problem, from what they've said, is that GPU programming is _hard_. Or, at least, it's hard to get it anywhere near useful for x264, which has the vast majority of its processor-intensive code written in assembly.

        So basically, from what I understand, they've said "Patches welcome, it's too hard for too little gain for us at least for now."
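
        To make that a bit more concrete: the pieces people usually propose offloading are the brute-force cost calculations, such as the SAD (sum of absolute differences) used in motion estimation. Below is a toy OpenCL C kernel for that, purely as an illustration -- the buffer layout, parameter names and candidate-list scheme are made up by me and have nothing to do with x264's actual code.

        Code:
/* Toy OpenCL C kernel: 16x16 SAD cost, one motion-vector candidate per
 * work-item. Illustrative sketch only -- not taken from x264. */
__kernel void sad_16x16(__global const uchar *cur,       /* current frame      */
                        __global const uchar *ref,       /* reference frame    */
                        int stride,                      /* bytes per row      */
                        int2 block,                      /* top-left of block  */
                        __global const int2 *candidates, /* candidate MVs      */
                        __global uint *costs)            /* one cost per MV    */
{
    size_t i = get_global_id(0);
    int2 mv = candidates[i];
    uint sad = 0;

    for (int y = 0; y < 16; ++y)
        for (int x = 0; x < 16; ++x) {
            int c = cur[(block.y + y) * stride + (block.x + x)];
            int r = ref[(block.y + mv.y + y) * stride + (block.x + mv.x + x)];
            sad += abs(c - r);
        }

    costs[i] = sad;  /* the host (or another kernel) picks the minimum */
}

        The kernel itself is the easy part; the hard part is feeding it efficiently and still beating x264's hand-tuned assembly, which is roughly the "too hard for too little gain" the devs describe.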


        @Blackstar: Most general-purpose GPU "research", etc. is just that -- research. If I may quote the lead x264 devs (akupenguin and DS, source: http://mirror05.x264.nl/Dark/loren.html):

        <Dark_Shikari> ok, its a motion estimation algorithm in a paper
        <Dark_Shikari> 99% chance its totally useless
        <wally4u> because?
        <pengvado> because there have been about 4 useful motion estimation papers ever, and a lot more than 400 attempts


        The majority of the rest is marketing chest-thumping by those who would benefit: AMD, NVIDIA, etc. Show me a _real-world_ test where a believably configured PC (GTX 280s are not paired with Intel Atoms) with a GPU encoder beats a current build of x264's high-speed presets at the same quality (or better quality at the same speed), and I'll eat my words.

        OpenCL is cool, and I think it's important in quite a few areas (Folding@home style computations and video decoding come to mind), but they've really gotta stop trying to make it seem like a silver bullet. A GPU is special-purpose. You can make it general-purpose -- but then you have a CPU anyways.
        Last edited by Ranguvar; 03 October 2009, 01:24 AM.

        • #44
          Originally posted by Ranguvar View Post
          Oy vey.

          Basically, CPUs are general-purpose processors, and GPUs are special-purpose processors (again, _basically_!). [...] start making the GPU more general-purpose, which defeats the purpose of the GPU in the first place.
          Cough, Larrabee, cough. Not to mention the last 9 years in GPU design.

          GPUs are becoming more general-purpose, generation by generation. They are becoming suitable for increasingly complex tasks: five years ago, you could hardly squeeze in a Perlin noise generator. Nowadays you can generate complete voxel landscapes, accelerate F@H or even Photoshop.

          I'm not saying that GPUs will ever reach CPUs in versatility (their massively parallel design prohibits that) - however, there's no indication that they'll stop becoming more generic in the near future.

          Yes, you could write an awesome encoder and have it offload some portions of the encoding task to the GPU for an appreciable speed boost. [...] The x264 devs have discussed this. The problem, from what they've said, is that GPU programming is _hard_. Or, at least, it's hard to get it anywhere near useful for x264, which has the vast majority of its processor-intensive code written in assembly.

          So basically, from what I understand, they've said "Patches welcome, it's too hard for too little gain for us at least for now."
          Obviously, GPGPU is a very new field. It's about two years old, in fact (introduction of G80), whereas we have more than 30 years of accumulated CPU programming experience and tools.

          @Blackstar: Most general-purpose GPU "research", etc. is just that -- research. If I may quote the lead x264 devs (akupenguin and DS, source: http://mirror05.x264.nl/Dark/loren.html):

          <Dark_Shikari> ok, its a motion estimation algorithm in a paper
          <Dark_Shikari> 99% chance its totally useless
          <wally4u> because?
          <pengvado> because there have been about 4 useful motion estimation papers ever, and a lot more than 400 attempts
          Research is research, duh - what else would it be?

          You can bet that as soon as a researcher produces a viable solution, everyone and their mother will rush to implement it (first to market and all that). In a little while, even Nero will have it.

          Remember the rush to implement relief mapping a few years ago? The situation is pretty similar: at first, the technology was out of our reach - we didn't have the hardware, experience (or even drivers!) to implement it. Then someone came up with a viable approach, the hardware got better and people rushed to implement this cool new effect. Nowadays, if a game or engine demo doesn't display a form of relief mapping, its graphics are bashed as obsolete!

          The majority of the rest is marketing chest-thumping by those who would benefit: AMD, NVIDIA, etc. Show me a _real-world_ test where a believably configured PC (GTX 280s are not paired with Intel Atoms) with a GPU encoder beats a current build of x264's high-speed presets at the same quality (or better quality at the same speed), and I'll eat my words.

          OpenCL is cool, and I think it's important in quite a few areas (Folding@home style computations and video decoding come to mind), but they've really gotta stop trying to make it seem like a silver bullet. A GPU is special-purpose. You can make it general-purpose -- but then you have a CPU anyways.
          I don't recall anyone claiming that OpenCL is a silver bullet - but maybe I just don't pay attention to marketing attempts. Actually, it is developers who have undeniably shown a high level of interest in the technology.

          With good reason, too. CPUs are getting more and more parallel but parallel programming is still something of a dark art. Anything that makes our lives easier is more than welcome: OpenCL, DirectCompute, Parallel.Net...

          Also, don't forget that OpenCL can be used on more than just GPUs.
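
          As a small, hypothetical illustration of both points (it's simpler than it sounds, and it isn't GPU-only), here is a minimal OpenCL host program in C that lists whatever devices the first platform exposes -- GPU, CPU or otherwise -- and runs a trivial kernel on the first one. The names and the kernel are mine; treat it as a sketch with minimal error checking, not production code.

          Code:
/* Minimal illustrative OpenCL host program (C99).
 * Build on Linux with: gcc cldemo.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void vec_add(__global const float *a, __global const float *b,\n"
    "                      __global float *c) {\n"
    "    size_t i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void)
{
    cl_platform_id platform;
    cl_device_id dev[8];
    cl_uint ndev = 0;

    clGetPlatformIDs(1, &platform, NULL);
    /* CL_DEVICE_TYPE_ALL: GPUs, CPUs and accelerators alike. */
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 8, dev, &ndev);
    if (ndev == 0) { fprintf(stderr, "no OpenCL devices found\n"); return 1; }

    for (cl_uint i = 0; i < ndev; ++i) {
        char name[256];
        cl_device_type type;
        clGetDeviceInfo(dev[i], CL_DEVICE_NAME, sizeof name, name, NULL);
        clGetDeviceInfo(dev[i], CL_DEVICE_TYPE, sizeof type, &type, NULL);
        printf("device %u: %s (%s)\n", i, name,
               type == CL_DEVICE_TYPE_CPU ? "CPU" :
               type == CL_DEVICE_TYPE_GPU ? "GPU" : "other");
    }

    /* Run the same kernel source on whichever device came first. */
    enum { N = 256 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

    cl_context ctx = clCreateContext(NULL, 1, dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev[0], 0, NULL);
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof a, a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof b, b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, dev, NULL, NULL, NULL);   /* compiled at runtime */
    cl_kernel k = clCreateKernel(prog, "vec_add", NULL);
    clSetKernelArg(k, 0, sizeof da, &da);
    clSetKernelArg(k, 1, sizeof db, &db);
    clSetKernelArg(k, 2, sizeof dc, &dc);

    size_t global = N;                                /* one work-item per element */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);
    printf("c[10] = %.1f (expected 30.0)\n", c[10]);

    clReleaseMemObject(da); clReleaseMemObject(db); clReleaseMemObject(dc);
    clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}

          The same binary will happily pick up an implementation that only exposes the CPU as a device, which is exactly the "more than just GPUs" point.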
          Last edited by BlackStar; 05 October 2009, 02:36 PM.

          • #45
            OpenCL could be used for next-generation graphics: ray tracing. It has the power to succeed OpenGL entirely.

            • #46
              Originally posted by V!NCENT View Post
              OpenCL could be used for next-generation graphics: ray tracing. It has the power to succeed OpenGL entirely.
              Nobody bite, it's a trap!

              • #47
                Originally posted by mirv View Post
                Nobody bite, it's a trap!
                A pool billiard table rendered using a ray tracer from NVIDIA's OptiX package, utilizing the available FLOPS of the GPU. For more info, see also the article on...

                GPU raytracing

                This video shows a progression of ray-traced shaders executing on a cluster of IBM QS20 Cell blades. The model comprises over 300,000 triangles and renders a...

                Three years, max...
                Last edited by V!NCENT; 05 October 2009, 08:59 PM.

                • #48
                  This, and articles it links to, are a good read:

                  http://www.pcper.com/article.php?aid=530


                  Ray tracing is nice, but rasterisation isn't going anywhere.

                  • #49
                    Originally posted by mirv View Post
                    This, and articles it links to, are a good read:

                    http://www.pcper.com/article.php?aid=530


                    Ray tracing is nice, but rasterisation isn't going anywhere.
                    "There are many people that don't see ray tracing as the holy grail of gaming graphics, however. A corporation like NVIDIA, that has a vested interested in graphics beyond the scale of any other organization today, has to take a more pragmatic look at rendering technologies including both rasterization and ray tracing; unlike Intel they have decades of high-end rasterization research behind them and see the future of 3D graphics remaining with that technology rather than switching to something new like ray tracing. "

                    Of course NVIDIA doesn't want to go out of business and throw a shitload of R&D money and acquired IP and patents out of the window.

                    The point is that ray tracing looks better and is a more sophisticated rendering technique that allows for more detail and truly round objects. The only downside is that it is more computationally intensive, so the only reason it is not mainstream is that desktop PCs do not have the computational power yet.

                    People also say that you need an API, and that's nonsense, since ray tracing is extremely easy to code and takes less effort than writing a rasterising renderer. It is also cheaper to develop, because with ray tracing you are not bound to just one type of game engine: you can make one engine for an endless number of different games.
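
                    For what it's worth, the core of a ray tracer really is compact. A hypothetical ray-sphere intersection in plain C -- my own sketch, not from any engine or package mentioned in this thread -- fits in a couple of dozen lines:

                    Code:
/* Illustrative sketch of the innermost piece of a ray tracer:
 * ray-sphere intersection. Names and layout are made up for this example. */
#include <math.h>

typedef struct { double x, y, z; } vec3;

static vec3   sub(vec3 a, vec3 b) { return (vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
static double dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Returns the distance t along the ray (origin o, direction d) to the nearest
 * hit on the sphere, or -1.0 on a miss. Solves |o + t*d - center|^2 = r^2,
 * which is a quadratic in t. */
double ray_sphere(vec3 o, vec3 d, vec3 center, double r)
{
    vec3 oc = sub(o, center);
    double a = dot(d, d);
    double b = 2.0 * dot(oc, d);
    double c = dot(oc, oc) - r * r;
    double disc = b * b - 4.0 * a * c;

    if (disc < 0.0)
        return -1.0;                         /* ray misses the sphere */

    double t = (-b - sqrt(disc)) / (2.0 * a);
    return t >= 0.0 ? t : -1.0;              /* nearest hit in front of the origin */
}

                    The renderer then just fires one such ray per pixel (plus secondary rays for shadows and reflections) and shades the nearest hit, which is why the per-object code stays so small; the cost is doing this millions of times per frame, which is exactly the computationally intensive part mentioned above.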

                    I was about to write one in OpenCL, but since there's no good support for it yet, I decided to start making one in software for Haiku, because it's extremely fast and easy to code for, and it can be optimised with OpenCL kernels later on once Gallium3D lands with hardware acceleration for my ATI card.

                    • #50
                      You might want to quote the parts that talk about problems with ray tracing too. Otherwise you're just trolling.
