NVIDIA Releases Standalone VDPAU Library

  • #11
    Originally posted by myxal View Post
    I'd say video acceleration for R600/R700 is simply not coming to opensource drivers (not from official sources); we might see it in fglrx if we're lucky.
    It's a damn shame.

    Originally posted by myxal View Post
    Such is the sorry state of Linux graphics - my choices are intel (drivers in a quantum state; getting all the features to work with adequate performance is next to impossible for a casual Linux user), AMD (no opensource 3D for cards less than 4 years old; the proprietary driver has problems with some basic functionality (Xv) and with switching between power modes), and nvidia (no opensource driver worth using yet; the proprietary driver mostly works but lacks some common features (XRandR 1.2), and there's always the creeping shadow of nvidia's 'Bumpgate'). Other graphics vendors have no drivers/hardware worth using.
    That sounds about right. I wish someone would write an article (HEY PHORONIX, I'M TALKING TO YOU!) along the lines of "The Sorry State of Sound on Linux". At least no one has figured out a way to integrate PulseAudio into the X stack to mess things up.



    • #12
      Originally posted by myxal View Post
      I'd say video acceleration for R600/R700 is simply not coming to opensource drivers (not from official sources); we might see it in fglrx if we're lucky.
      I don't believe that's an entirely accurate statement. There are different levels of video acceleration... the difference is in how much of the decode process is accelerated. Right now we DO have acceleration -- though only very basic Xv. Playing a full-HD video right now *does* peg any CPU that isn't at least a fairly recent 2-core or better. Offloading a -- let's call it a -- "significant chunk" over to the GPU (even without using the dedicated video decode hardware in the GPU) will take a significant chunk of the processing off the CPU and hopefully make HD playback stable even on older 2-core processors (maybe even 1-core ones).

      Now the question you need to ask yourself is this: how much acceleration do you really need? My "TV computer" is an older X2-3800 that I recently picked up for free, plus an RHD3650 ($40). HD video playback goes like this:
      720p single-threaded: fairly OK with the occasional chop. Very watchable.
      720p multi-threaded: perfect.
      1080p single-threaded: unwatchable, drops about 50% of frames.
      1080p multi-threaded: fairly OK with the occasional chop. About the same as 720p single-threaded.

      So how much acceleration do *I* need on this "$40" computer to make 1080p perfect? The answer is *not much*. And that's on old junk.

      Here's what bridgman has to say about video decode acceleration:
      http://www.phoronix.com/forums/showp...69&postcount=3



      • #13
        I think you can take it for granted that video acceleration is coming to open source drivers. While we're not sure yet about UVD, there is already work being done on a shader-based acceleration stack. Cooper will be including ymanton's XvMC-over-Gallium3D code as one of the test cases for the 300g Gallium3D driver, and zrusin is planning to integrate that XvMC code into the xorg state tracker as well. Once XvMC is working, all the key bits of plumbing will be there, and adding support for additional video standards (or APIs) will not require much in the way of hardware-specific knowledge.

        Even moving MC (the largest consumer of CPU cycles) from CPU to GPU is likely to save enough CPU cycles that one CPU core will be enough for most users.
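        To make the MC point concrete, here is a toy sketch of block-based motion compensation -- illustrative only, not code from any real decoder or driver, and the function names are made up. Each output block depends only on its own motion vector, a patch of the reference frame, and a residual, which is exactly the kind of independent per-block work that maps well onto shaders (sub-pixel interpolation, edge clipping, and chroma are omitted):

        #include <stdint.h>

        #define BLOCK 16

        /* Toy motion compensation for one 16x16 luma block: copy pixels from the
         * reference frame at the offset given by the block's motion vector, add
         * the decoded residual, and clamp to 8 bits. Assumes the vector never
         * points outside the frame. */
        static void mc_block(const uint8_t *ref, uint8_t *dst,
                             const int16_t *residual, int stride,
                             int bx, int by, int mv_x, int mv_y)
        {
            for (int y = 0; y < BLOCK; y++) {
                for (int x = 0; x < BLOCK; x++) {
                    int v = ref[(by + y + mv_y) * stride + (bx + x + mv_x)]
                          + residual[y * BLOCK + x];
                    dst[(by + y) * stride + (bx + x)] =
                        (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v);
                }
            }
        }

        /* CPU version: walk every block of the frame in turn. A shader-based
         * implementation would launch one thread per block (or per pixel) and
         * let the GPU run them all at once, since no block depends on another. */
        void mc_frame(const uint8_t *ref, uint8_t *dst, const int16_t *residuals,
                      const int *mv_x, const int *mv_y,
                      int width, int height, int stride)
        {
            int blocks_per_row = width / BLOCK;

            for (int by = 0; by < height; by += BLOCK) {
                for (int bx = 0; bx < width; bx += BLOCK) {
                    int b = (by / BLOCK) * blocks_per_row + (bx / BLOCK);
                    mc_block(ref, dst, residuals + b * BLOCK * BLOCK,
                             stride, bx, by, mv_x[b], mv_y[b]);
                }
            }
        }

        Entropy decoding (CABAC/CAVLC), by contrast, is inherently serial, which is part of why full bitstream decode tends to need dedicated hardware like UVD rather than shaders.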

        Hey, wasn't this thread supposed to be about the new VDPAU wrapper library?
        Last edited by bridgman; 18 September 2009, 12:18 PM.
        Test signature



        • #14
          Originally posted by bridgman View Post
          I think you can take it for granted that video acceleration is coming to open source drivers. While we're not sure yet about UVD, there is already work being done on a shader-based acceleration stack.
          Are you saying we might see UVD, i.e. bitstream acceleration, in opensource drivers?
          Originally posted by bridgman View Post
          Cooper has been working on the 300g Gallium3D driver and will be including ymanton's XvMC-over-Gallium3D code as one of the test cases, and zrusin is planning to integrate that XvMC code into the xorg state tracker as well. Once XvMC is working all the key bits of plumbing will be there and adding support for additional video standards (or APIs) will not require much in the way of hardware-specific knowledge.
          Sounds great. I recall there being some limitations on XvMC, so let me go straight to what I care about and need the stack to provide (which, according to reports on the web, VDPAU with nvidia does): postprocessing of the decoded video frames, needed to support mplayer's current implementation of subtitles, OSD, etc. Does XvMC even allow this?
          Originally posted by bridgman View Post
          Even moving MC (the largest consumer of CPU cycles) from CPU to GPU is likely to make the difference between one core not being sufficient to one core being enough for most users.
          The fad now is mobility - how does the power draw compare when using UVD and when using shaders?
          Originally posted by bridgman View Post
          Wasn't this thread supposed to be about NVidia's new wrapper library?
          Well, the library is a wrapper for various implementations, and we already know nvidia's implementation (mostly) works. We're just THAT eager to see other implementations, working with hardware unaffected by Bumpgate.
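          For anyone curious what that wrapper layer amounts to: roughly speaking, libvdpau exports a single real entry point, vdp_device_create_x11(), loads whichever backend driver is installed, and hands the application a get_proc_address callback for every other function. A minimal sketch, assuming the libvdpau and X11 development headers are installed (not taken from any particular project):

          /* Minimal VDPAU "hello world": create a device through the wrapper and
           * print the backend's information string.
           * Build (paths may vary): gcc vdpau_info.c -o vdpau_info -lvdpau -lX11 */
          #include <stdio.h>
          #include <X11/Xlib.h>
          #include <vdpau/vdpau.h>
          #include <vdpau/vdpau_x11.h>

          int main(void)
          {
              Display *dpy = XOpenDisplay(NULL);
              if (!dpy) {
                  fprintf(stderr, "cannot open X display\n");
                  return 1;
              }

              VdpDevice device;
              VdpGetProcAddress *get_proc_address = NULL;

              /* The wrapper's one directly exported call; it loads the backend
               * driver and fills in get_proc_address for everything else. */
              if (vdp_device_create_x11(dpy, DefaultScreen(dpy),
                                        &device, &get_proc_address) != VDP_STATUS_OK) {
                  fprintf(stderr, "no usable VDPAU backend found\n");
                  return 1;
              }

              /* Every other VDPAU function is fetched through the dispatch table. */
              VdpGetInformationString *get_info = NULL;
              if (get_proc_address(device, VDP_FUNC_ID_GET_INFORMATION_STRING,
                                   (void **)&get_info) == VDP_STATUS_OK && get_info) {
                  char const *info = NULL;
                  if (get_info(&info) == VDP_STATUS_OK)
                      printf("VDPAU implementation: %s\n", info);
              }

              XCloseDisplay(dpy);
              return 0;
          }

          With NVIDIA's backend installed this should print its driver information string; on the player side, MPlayer of this era drives the same stack with something like "mplayer -vo vdpau -vc ffh264vdpau file.mkv" (exact codec-family names vary by build).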



          • #15
            Originally posted by lbcoder View Post
            I don't believe that's an entirely accurate statement. There are different levels of video acceleration... the difference is in how much of the decode process is accelerated. Right now we DO have acceleration -- though only very basic Xv. Playing a full-HD video right now *does* peg any CPU that isn't at least a fairly recent 2-core or better. Offloading a -- let's call it a -- "significant chunk" over to the GPU (even without using the dedicated video decode hardware in the GPU) will take a significant chunk of the processing off the CPU and hopefully make HD playback stable even on older 2-core processors (maybe even 1-core ones).

            Now the question you need to ask yourself is this: how much acceleration do you really need? My "TV computer" is an older X2-3800 that I recently picked up for free, plus an RHD3650 ($40). HD video playback goes like this:
            720p single-threaded: fairly OK with the occasional chop. Very watchable.
            720p multi-threaded: perfect.
            1080p single-threaded: unwatchable, drops about 50% of frames.
            1080p multi-threaded: fairly OK with the occasional chop. About the same as 720p single-threaded.

            So how much acceleration do *I* need on this "$40" computer to make 1080p perfect? The answer is *not much*. And that's on old junk.

            Here's what bridgman has to say about video decode acceleration:
            http://www.phoronix.com/forums/showp...69&postcount=3
            You make a good point here. There's no need to spend more than 50 bucks if all you want is to watch HD content.

            I think the problem is with people who spent $150 or more and want to get the most out of their hardware.



            • #16
              I replied here to minimize the thread-jacking.
              Test signature



              • #17
                Originally posted by myxal View Post
                Last time I checked, the documentation released by AMD lacked any info for the video decoder.
                On the other hand, developers seem to think we don't need anything more (at least for some level of video decode acceleration) - they'd be doing it with shaders. Someone just has to write it.
                Edit: Never mind, I didn't read to the end. Apparently bridgman did say this in the other thread.



                • #18
                  Well, one question remains: how much of the usual (H.264/VC-1) video decoding pipeline can be sensibly accelerated with shaders?
                  Last edited by greg; 19 September 2009, 04:08 PM.



                  • #19
                    Originally posted by greg View Post
                    Well, one question remains: how much of the usual (H.264/VC-1) video decoding pipeline can be sensibly accelerated with shaders?
                    I suspect it's safe to assume enough that it shows. Even MC (motion compensation, which AFAIK is the biggest consumer of CPU cycles in the pipeline) already drops CPU usage quite a bit; doing more of the decoding with shaders should help even more. Numbers will be available once someone writes it.



                    • #20
                      Why use shaders when you've got a whole block of the GPU dedicated to H.264 decoding? AMD needs to stop treating Linux users as second-class citizens and open up their XvBA API.

