A few questions about video decode acceleration


  • bridgman
    replied
    Not much really; synchronization between the IDCT and MC functions changed in R300, and the rest of the changes were pretty minor. I expect the info we release will enable acceleration right back to RV100, aka Radeon 7000.

    The biggest demand for IDCT/MC is still coming from 7000 owners and embedded HW designers who used 7000, which makes sense I guess.

    It's really just the IDCT block that still needs docs; MC just uses special modes in the 3d engine and that info is already out for 5xx. AFAIK the XvMC API supports MC-only acceleration so someone could start on that now if they had time. MC is still the most computationally expensive stage in the pipe, or at least it is for MPEG2.
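    Since IDCT is the stage still waiting on docs, here's a minimal sketch (mine, purely illustrative; the hardware block implements a fixed-point variant) of the naive 8x8 inverse DCT an MPEG-2 decoder runs for every block, which is why offloading it matters:

    ```python
    import math

    def idct_1d(coeffs):
        """Naive 8-point inverse DCT (type III), applied per row/column of an
        MPEG-2 8x8 block. O(N^2) multiplies per vector in this naive form."""
        n = len(coeffs)
        out = []
        for x in range(n):
            s = 0.0
            for u in range(n):
                cu = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
                s += cu * coeffs[u] * math.cos((2 * x + 1) * u * math.pi / (2 * n))
            out.append(s)
        return out

    def idct_2d(block):
        """Separable 2D IDCT: transform the rows, then the columns."""
        rows = [idct_1d(list(r)) for r in block]
        cols = [idct_1d(list(c)) for c in zip(*rows)]
        return [list(r) for r in zip(*cols)]
    ```

    A DC-only input block decodes to a flat output block, which is an easy sanity check for any IDCT implementation.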
    Last edited by bridgman; 07-16-2008, 12:46 PM.



  • curaga
    replied
    Bridgman, how much has your acceleration block changed over the generations? Would it be safe, given your current DRM constraints, to document how to use the R200/R100 video decoding blocks?



  • bridgman
    replied
    Alex, there are no "secret agreements not to expose certain HW functionality". There are "non-secret" agreements that if we offer API support for certified media players we will ensure a certain level of robustness for the associated protection mechanisms. There is also the "non-secret" reality that if we don't offer API support for certified players then we can't sell our chips to major OEMs, which would be spectacularly bad for business.

    If we can find ways to expose HW acceleration for open source driver development without putting the implementations on other OSes at risk, then that is fine. Right now I am reasonably sure we will be able to do this for the IDCT/MC hardware but not so sure about UVD yet, so I am saying "no unless you hear otherwise".

    Until we have 6xx/7xx 3d engine support up and running this is all academic since the first requirement is getting the back end (render) acceleration in place and working well.



  • Alex W. Jackson
    replied
    Originally posted by glisse View Post
    This is why there has been discussion (I think I heard things) about adding special infrastructure to the pipe to take advantage of such hardware instead of trying to do it in shaders. I am convinced that for decode the GPU is not the best solution; dedicated hardware is. Shaders will be the default safe path, I think...
    But what good does that do when the HW makers' contract terms with the "content protection" cartels and with Microsoft apparently explicitly forbid them from enabling their video-dedicated hardware to be used with free software?

    (And has anyone discussed the antitrust implications of Microsoft and the HW makers signing secret agreements that say certain HW functionality shall be off-limits to Microsoft's chief competitors?)



  • glisse
    replied
    Originally posted by mtippett View Post
    The proprietary driver has used shader-based video (only colorspace conversion/attributes) since R5xx, and shader-based 2D since R6xx, and some people have been playing with RENDER acceleration (TexturedVideo, Textured2D and TexturedXRender). It does work, but it does present a number of technical issues that need to be resolved, and realistically we focus a lot more on 2D acceleration than video.

    This is exactly why UVD hardware was created. It is dedicated hardware that is consistent across all market segments and is built to do only video decode.
    This is why there has been discussion (I think I heard things) about adding special infrastructure to the pipe to take advantage of such hardware instead of trying to do it in shaders. I am convinced that for decode the GPU is not the best solution; dedicated hardware is. Shaders will be the default safe path, I think...



  • mtippett
    replied
    Originally posted by chaos386 View Post
    This may be coming a little out of left field here, but what about using CAL/GPGPU? Would it be a reasonable way to avoid the legal issues surrounding the UVD, or would it be so much slower that it wouldn't be worth the effort?

    In any case, using the GPU to accelerate video encoding would be a dream come true, and if that gets done, how much more work would decoding be?
    Video encoding is probably best suited to OpenCL (I will use that term instead of CAL/OpenCL/CUDA/Shader/whatever), since in general that is done away from the consumer's desktop.

    The OpenCL based solutions scale their performance with the number of shader cores that are available. This means the effectiveness will scale with hardware.

    The proprietary driver has used shader-based video (only colorspace conversion/attributes) since R5xx, and shader-based 2D since R6xx, and some people have been playing with RENDER acceleration (TexturedVideo, Textured2D and TexturedXRender). It does work, but it does present a number of technical issues that need to be resolved, and realistically we focus a lot more on 2D acceleration than video.

    I am absolutely in support of the GP-GPU approaches to computing problems (including xvmc/h264 decode), but just be aware that the performance of those solutions will scale with hardware.
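    The "colorspace conversion" mentioned above is exactly the kind of per-pixel arithmetic a shader handles well. A rough sketch (my own, with standard BT.601 coefficients; real drivers run this in a fragment shader on the GPU, not on the CPU) of the YCbCr-to-RGB math done for each texel:

    ```python
    def yuv_to_rgb(y, u, v):
        """Per-pixel BT.601 YCbCr -> RGB conversion, the core of a
        colorspace-conversion shader. Inputs and outputs are 8-bit values;
        chroma (u, v) is centered at 128. Clamping keeps the 0-255 range."""
        r = y + 1.402 * (v - 128)
        g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
        b = y + 1.772 * (u - 128)
        clamp = lambda x: max(0, min(255, int(round(x))))
        return clamp(r), clamp(g), clamp(b)
    ```

    Neutral chroma (u = v = 128) leaves luma unchanged, which is why grayscale video survives a broken chroma path.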

    Regards,

    Matthew
    Last edited by mtippett; 07-16-2008, 03:38 PM.



  • mtippett
    replied
    Originally posted by TechMage89 View Post
    Windows drivers are kind of messy because a large amount of the code goes in-kernel. Hopefully, Windows 7 should fix that, but you have to expect that the Windows graphical system will be less stable than the Linux graphical system.
    Do you think that the move to Kernel Modesetting is going to lighten the load on the Linux kernel? We are actively pushing more into the kernel while Windows is trying to pull things out. There are religious wars over drivers in userland/kernel so let's avoid it here....

    When kernel modesetting kicks in, the X developers, FB developers and other random display developers are all going to have to find a way to support the same OSS driver. If we end up with 3 kernel based graphics drivers, then I don't believe we have improved the situation.

    Regards,

    Matthew



  • bridgman
    replied
    Two parts to the answer:

    1. There are some prerequisites which need to be at least partially in place for Gallium. A modern memory manager in the drm is the first priority (Dave Airlie is working on that), and I believe DRI2 will be needed as well -- and DRI2 also needs the new memory manager.

    2. In terms of development priorities, we are aiming to get at least basic 3d functionality running on "classic mesa" before beginning implementation on Gallium, so that developers are only faced with the challenge of learning the chips and not learning a new environment & API at the same time. This definitely made sense for 5xx but 6xx/7xx will probably be the last generation where we get things running on "classic Mesa" before Gallium.

    Documentation and community knowledge for 5xx and earlier is good enough to support Gallium work today; glisse did some preliminary work a few months back, and MostAwesomeDude is eyeing Gallium now. I expect that work on both DRI2 and Gallium will ramp up over the next month or so as (a) we make more progress on 6xx/7xx 3d and (b) Dave's work on a new memory manager gets a bit further along.

    Memory manager => DRI2 => Gallium

    Memory Manager => Kernel Modesetting

    Work on the above initiatives can start in parallel to a certain extent, but they really need to "come up" in the order above and certainly need to finish in the order above.
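    The ordering above is a small dependency graph. As a sketch (component names are just labels for the pieces bridgman lists), the standard-library topological sorter recovers the required bring-up order:

    ```python
    from graphlib import TopologicalSorter  # Python 3.9+

    # Each component maps to the set of things it depends on, per the
    # chains above: memory manager => DRI2 => Gallium,
    # and memory manager => kernel modesetting.
    deps = {
        "DRI2": {"memory manager"},
        "Gallium": {"DRI2"},
        "kernel modesetting": {"memory manager"},
    }

    # static_order() yields dependencies before dependents: the memory
    # manager must come up first, and Gallium can only finish last.
    order = list(TopologicalSorter(deps).static_order())
    ```

    DRI2 and kernel modesetting have no edge between them, which matches the observation that work can proceed in parallel even though the chains must finish in order.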

    So... real soon now



  • Dieter
    replied
    > The gallium frontend to XvMC, I guess

    Assuming the SoC project goes well.

    But... then we need gallium for ATI. Is anyone working on that?



  • RoboJ1M
    replied
    Originally posted by Dieter View Post
    > Good *GOD* what a rush.
    >
    > Open source driver implementing MPEG2 and h264 acceleration?

    Huh? I can't find an announcement anywhere.

    Great news if it is true.
    Sorry, I meant the *idea* of it.
    And even if it's not been announced, it's been talked about by the right people in the positive.
    I've always been planning to get a 780g/4850e machine for the living room (mythtv) but I'm going to BOUNCE UP AND DOWN ON MY SOFA HOPING FOR UVD2 SUPPORT!!! :D

    We're no longer standing still or moving in the wrong direction.
    For so long it's all been held back by companies keeping the specs for their hardware closed.
    Now, hopefully, with a few shining examples of how it's good PR and makes financial sense, everybody else will follow suit.

    Lest they fall behind with a Mineshaft Gap.

    And even if it never comes to pass, ATI are aiming for feature parity with Windows with fglrx, yes? Doesn't that mean UVD? Right? DRM? (As in digital rights, not the other DRM)

    I'm just sitting here hoping and being excited about all this.
    Probably too excited but I just don't care.

    J1M.

