NVIDIA Publicly Releases Its OpenCL Linux Drivers

  • Ranguvar
    replied
    Oy vey.

    Basically, CPUs are general-purpose processors, and GPUs are special-purpose processors (again, _basically_!). The GPU can do some things at a speed that destroys any CPU at that price point, including a couple handy things for decoding video. However, the world's most awesome GPU API and encoder still won't make a GPU faster than a CPU for a task it's not designed for, unless your GPU is way more powerful than the CPU -- or they start making the GPU more general-purpose, which defeats the purpose of the GPU in the first place. And no, OpenCL will not help budget buyers -- their GPU is their CPU anyways!

    Yes, you could write an awesome encoder and have it offload some portions of the encoding task to the GPU for an appreciable speed boost. The GPU _can_ handle general computations, just not as well as the CPU, so it would still be some extra muscle (and a very select few parts of the encoding process could work very well on a GPU -- there's a rough sketch of that kind of kernel at the end of this post). The x264 devs have discussed this. The problem is (from what they've said) that GPU programming is _hard_. Or at least, hard to get anywhere near useful for x264, which has the vast majority of its processor-intensive code written in assembly.

    So basically, from what I understand, they've said "Patches welcome, it's too hard for too little gain for us at least for now."


    @Blackstar: Most general-purpose GPU "research", etc. is just that -- research. If I may quote the lead x264 devs (akupenguin and DS, source: http://mirror05.x264.nl/Dark/loren.html):

    <Dark_Shikari> ok, its a motion estimation algorithm in a paper
    <Dark_Shikari> 99% chance its totally useless
    <wally4u> because?
    <pengvado> because there have been about 4 useful motion estimation papers ever, and a lot more than 400 attempts


    The majority of the rest is marketing chest-thumping by those who would benefit: AMD, NVIDIA, etc. Show me a _real-world_ test where a believably configured PC (GTX 280s are not paired with Intel Atoms) with a GPU encoder beats a current build of x264's high-speed presets at the same quality (or better quality at the same speed), and I'll eat my words.

    OpenCL is cool, and I think it's important in quite a few areas (Folding@home-style computations and video decoding come to mind), but they've really gotta stop trying to make it seem like a silver bullet. A GPU is special-purpose. You can make it general-purpose -- but then you have a CPU anyways.
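
    To make the "offload select parts" idea above concrete, here is a toy OpenCL C kernel for a sum-of-absolute-differences (SAD) search, the basic building block of motion estimation -- one of the few encoding sub-tasks that maps naturally onto a GPU. This is only an illustrative sketch under assumed parameters (16x16 blocks, a 32x32 search window, 8-bit luma planes); it is not x264 code and it skips bounds checks.

    Code:
    /* One work-item per candidate motion vector: work-item (gx, gy)
     * tests the offset (gx - 16, gy - 16) and writes its SAD.
     * cur/ref are 8-bit luma planes with `stride` bytes per row.
     * Edge clamping is omitted for brevity. */
    __kernel void block_sad_16x16(__global const uchar *cur,
                                  __global const uchar *ref,
                                  const int stride,
                                  const int block_x,
                                  const int block_y,
                                  __global uint *sad_out)
    {
        int mvx = (int)get_global_id(0) - 16;   /* candidate MV, x */
        int mvy = (int)get_global_id(1) - 16;   /* candidate MV, y */

        uint sad = 0;
        for (int y = 0; y < 16; y++) {
            for (int x = 0; x < 16; x++) {
                int c = cur[(block_y + y) * stride + (block_x + x)];
                int r = ref[(block_y + mvy + y) * stride + (block_x + mvx + x)];
                sad += abs(c - r);
            }
        }
        sad_out[get_global_id(1) * get_global_size(0) + get_global_id(0)] = sad;
    }

    The host would launch this over a 32x32 global range per block and pick the minimum on the CPU; whether the transfer and launch overhead ever pays off is exactly the open question above.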
    Last edited by Ranguvar; 10-03-2009, 01:24 AM.



  • b15hop
    replied
    Strange thing is, I'm seriously considering buying an ATI graphics card just because it's the first DX11 card out there... even though using such a card in Linux isn't viable at this stage, and maybe won't be for the next year or so. =/

    Personally, I think OpenCL is revolutionary. I bet that within the next four years there will be some really cool stuff out there that takes advantage of this powerful tool. =)



  • dashcloud
    replied
    Originally posted by BlackStar View Post
    Unfortunately, you'd need an OpenCL-capable GPU for this - and OpenCL-capable GPUs tend to have dedicated (and faster!) video decoding blocks already. Given the shared buffers between OpenCL/OpenGL, this would probably be a viable approach for video decoding on Gallium drivers (unless there are plans to add a dedicated video decoding API/tracker - no idea).

    Now, OpenCL-based encoding and we are talking.
    I'm going to have to disagree with you here -- the folks behind x264 (the lead programmers, plus one of the major company supporters) aren't going to be doing anything with it, because it actually doesn't work as well for video encoding as the hype would have you believe.
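
    For reference, the "shared buffers between OpenCL/OpenGL" mentioned in the quote is the cl_khr_gl_sharing extension: a GL buffer object gets wrapped as a cl_mem, acquired, processed, and released back to GL without a round trip through system memory. A minimal sketch, assuming a context already created with GL-sharing properties, an existing command queue and kernel, and a hypothetical helper name:

    Code:
    #include <CL/cl.h>
    #include <CL/cl_gl.h>
    #include <stddef.h>

    /* Hypothetical helper: run `kernel` over `n` elements of a GL buffer.
     * ctx/queue/kernel/gl_buf are assumed to exist already; the context
     * must have been created with the CL_GL_CONTEXT_KHR etc. properties,
     * and GL must be done with the buffer before it is acquired. */
    static int process_gl_buffer(cl_context ctx, cl_command_queue queue,
                                 cl_kernel kernel, unsigned int gl_buf,
                                 size_t n)
    {
        cl_int err;
        /* Wrap the existing GL buffer object as an OpenCL memory object. */
        cl_mem mem = clCreateFromGLBuffer(ctx, CL_MEM_READ_WRITE, gl_buf, &err);
        if (err != CL_SUCCESS)
            return err;

        /* Take ownership of the buffer for OpenCL. */
        clEnqueueAcquireGLObjects(queue, 1, &mem, 0, NULL, NULL);

        clSetKernelArg(kernel, 0, sizeof(cl_mem), &mem);
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);

        /* Hand the buffer back to GL (e.g. for display as a PBO/texture). */
        clEnqueueReleaseGLObjects(queue, 1, &mem, 0, NULL, NULL);
        clFinish(queue);

        clReleaseMemObject(mem);
        return CL_SUCCESS;
    }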



  • V!NCENT
    replied
    Originally posted by V!NCENT View Post
    It's worth watching this video tutorial on nVidia hardware:
    http://www.macresearch.org/opencl_episode4
    PS: It's a .m4v file and 124 MB, so you might want to use VLC for playback -- it can also stream the link, so you can start watching instantly.



  • V!NCENT
    replied
    Originally posted by justapost View Post
    I ran all tests with standard parameters, and I think the overhead caused by memory transfers and initialization explains some of the low GPU results.
    It's worth watching this video tutorial on nVidia hardware:
    http://www.macresearch.org/opencl_episode4



  • nanonyme
    replied
    Originally posted by V!NCENT View Post
    Wow that sounds awesome... And just to think that OpenCL already allows extensions. That's what the LLVM is for when JIT compiling right? It looks for extensions that are available and sends kernel code to whatever extension it can find, else it executes it without extension, right?
    As long as the use of extensions isn't required and the program doesn't see that they're used, it doesn't matter that much. On the other paw, if an OpenCL program gets the ability to use OpenCL extensions directly -- extensions which might have multiple different implementations and possibly bad (read: leaving a lot to vendor interpretation) specifications -- we end up in trouble.
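
    For what it's worth, extension discovery is already exposed through the standard API: a host program can read CL_DEVICE_EXTENSIONS and decide at runtime whether to build kernels that #pragma-enable a given extension. A small sketch (error handling trimmed, first platform/device only):

    Code:
    #include <CL/cl.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id device;
        char extensions[4096] = "";

        /* Grab the first platform and its first device. */
        if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS)
            return 1;
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, NULL) != CL_SUCCESS)
            return 1;

        /* Space-separated list of extension names supported by the device. */
        clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS,
                        sizeof(extensions), extensions, NULL);
        printf("Device extensions: %s\n", extensions);

        /* Feature-test before relying on an extension, e.g. doubles: */
        if (strstr(extensions, "cl_khr_fp64"))
            printf("cl_khr_fp64 available; kernels may use "
                   "#pragma OPENCL EXTENSION cl_khr_fp64 : enable\n");
        else
            printf("No double precision; fall back to a float kernel.\n");

        return 0;
    }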



  • justapost
    replied
    Today I wrote a test profile for the ati-stream-sdk. Afterwards I installed the NVIDIA drivers and, voila, many of the ATI examples run fine with NVIDIA's libOpenCL.so.

    Here are a few results:
    http://global.phoronix-test-suite.co...15-22137-12990

    I ran all tests with standard parameters, and I think the overhead caused by memory transfers and initialization explains some of the low GPU results.

    Here are results for one test with different matrix sizes.

    http://global.phoronix-test-suite.co...5145-2893-4183
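
    The memory-transfer overhead is easy to measure directly: create the command queue with profiling enabled and read the start/end timestamps from the transfer's event. A minimal sketch along those lines (one platform/device, a 64 MB host-to-device copy, no error handling):

    Code:
    #include <CL/cl.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const size_t bytes = 64 * 1024 * 1024;   /* 64 MB test transfer */
        cl_platform_id platform;
        cl_device_id device;
        cl_int err;

        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
        /* Profiling must be requested when the queue is created. */
        cl_command_queue q =
            clCreateCommandQueue(ctx, device, CL_QUEUE_PROFILING_ENABLE, &err);

        void *host = malloc(bytes);
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, bytes, NULL, &err);

        /* Blocking write, so the event is complete when the call returns. */
        cl_event ev;
        clEnqueueWriteBuffer(q, buf, CL_TRUE, 0, bytes, host, 0, NULL, &ev);

        cl_ulong start = 0, end = 0;   /* device timestamps in nanoseconds */
        clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_START,
                                sizeof(start), &start, NULL);
        clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_END,
                                sizeof(end), &end, NULL);
        printf("Host-to-device copy: %.3f ms\n", (end - start) / 1e6);

        clReleaseEvent(ev);
        clReleaseMemObject(buf);
        free(host);
        clReleaseCommandQueue(q);
        clReleaseContext(ctx);
        return 0;
    }

    Timing the kernel the same way and comparing the two numbers shows how much of a small-matrix run is spent just moving data, which would explain the pattern in the results above.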



  • mirv
    replied
    Ah, OK. I wasn't sure whether the drivers hooked into some system-wide "pass it off to whatever's free" scheduling or were simply standalone. I do find OpenCL interesting, but I haven't had an opportunity to work with it.



  • V!NCENT
    replied
    Originally posted by nanonyme View Post
    We might quickly bump into the good old territory there too, as happened with C library implementations: the specification left tons of gaps, and when vendors implemented these as they saw fit, we were left with libc implementations that aren't fully compatible with each other. (And that's even before some smart-asses thought it was a good idea to start extending them way beyond the specification.)
    Wow that sounds awesome... And just to think that OpenCL already allows extensions. That's what the LLVM is for when JIT compiling right? It looks for extensions that are available and sends kernel code to whatever extension it can find, else it executes it without extension, right?



  • nanonyme
    replied
    Originally posted by V!NCENT View Post
    Ehm... OpenCL is a specification, just like OpenGL, and you can have multiple implementations. That is it. Implementations can differ hugely from one another. The only requirement is that OpenCL code can be compiled and executed according to the spec.
    We might quickly bump into the good old territory there too, as happened with C library implementations: the specification left tons of gaps, and when vendors implemented these as they saw fit, we were left with libc implementations that aren't fully compatible with each other. (And that's even before some smart-asses thought it was a good idea to start extending them way beyond the specification.)

