ATI, please release an Open UVD API


  • #31
    Originally posted by Qaridarium View Post
    right now bridgman and the OSS team are working on a shader-based solution for the open source driver.. that's a big deal!
    Any source of this?

    I would be really happy but honestly I don't think this is the case. Bridgman only said that they will try their best to be able to give out documentation for uvd for _future_ generation cards. And there is a big difference.

    Probably our best chance is Veerappen who wants to write a vp8 decoder in opencl. However, we don't even have a working opencl state tracker at the moment...



    • #32
      Originally posted by HokTar View Post
      Any source of this?

      I would be really happy but honestly I don't think this is the case. Bridgman only said that they will try their best to be able to give out documentation for uvd for _future_ generation cards. And there is a big difference.

      Probably our best chance is Veerappen who wants to write a vp8 decoder in opencl. However, we don't even have a working opencl state tracker at the moment...
      Bridgman did make some remarks to that effect as I recall. I'm not going to bother looking it up and posting a link since it was on this forum -- you can search for it yourself.

      As I understand it, it is an integrated unit, using both shaders as well as some proprietary logic. The fact, though, is that many of the older cards lacked the "UVD" unit and instead did some of the work on the CPU, still with the heavy lifting on the shaders.

      BOTH ARGUMENTS ARE RIGHT AND BOTH ARGUMENTS ARE WRONG. It is somewhere in between!



      • #33
        Originally posted by HokTar View Post
        Any source of this? (bridgman & team working on shader based decode)
        Didn't come from me

        A few things :

        - UVD *does* do the decoding work (without using shaders), but it *also* participates in DRM

        - everything *after* decode is done on shaders... colour space conversion, scaling, filtering, other post processing etc...

        - haven't had time / remembered to find out how r600 handles video but believe it runs decode on CPU then uses same post-processing on shaders

        - we are still working towards an XvBA API release for fglrx, schedule TBD but hopefully soon

        - I'm still saying "don't assume we will be able to release UVD programming info" but we are still going to investigate whether it can be done... we've worked through enough of the higher priority tasks that I think investigation will happen in the next ~6 months or so

        - key point is that spending time investigating UVD programming info release means that time is not being spent on releasing other info/code and that other info has much higher probability of success... we might spend a few man-years of effort investigating UVD and conclude that we can't release anything, but the time would still be gone and other potential good things would not get done in the meantime

        - UVD only accelerates specific formats so for things like Theora and VP8 you're still going to want shader-based decode IMO...

        - I still think the best plan is to implement an XvMC-ish state tracker over Gallium3D (but designed for modern video formats, not MPEG2) then modify ffmpeg/libavcodec and others to use that state tracker for MC and deblock filtering... you could do more (IDCT etc.. ) on shaders in principle but IMO you're better off avoiding having to push data back from GPU to CPU so best approach is to pick a point in the decode pipe where everything *after* that point can be done efficiently on shaders... (a rough sketch of what such an interface could look like follows at the end of this list)

        - IIRC the only sticky part with modern video formats and that approach was inter-prediction... probably do-able but haven't had time to work out the details
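
        To make that split point concrete, here is a rough and purely hypothetical sketch (in C) of what such a state tracker interface could look like. None of these vdec_* names exist in Gallium3D, Mesa, or any AMD code; they are invented for illustration only. The idea is the one in the list above: the CPU decoder (ffmpeg/libavcodec) keeps bitstream parsing and entropy decoding, and hands motion compensation and deblock filtering to shaders, with everything after that point staying on the GPU.

            /* Hypothetical illustration only -- the vdec_* names are invented for
             * this post and are not an existing Gallium3D/Mesa interface. */
            #include <stdint.h>
            #include <stddef.h>

            /* One motion-compensation job: sample a reference picture at a motion
             * vector offset and add the CPU-computed residual into the target. */
            struct vdec_mc_job {
                unsigned       ref_surface;   /* reference picture handle           */
                unsigned       dst_surface;   /* picture being reconstructed        */
                int16_t        mv_x, mv_y;    /* motion vector, quarter-pel units   */
                uint16_t       mb_x, mb_y;    /* macroblock coordinates             */
                const int16_t *residual;      /* residual already IDCT'd on the CPU */
            };

            /* The split point: everything below this interface runs on shaders,
             * everything above it stays in the CPU decoder (ffmpeg/libavcodec). */
            struct vdec_context {
                unsigned (*create_surface)(struct vdec_context *ctx,
                                           unsigned width, unsigned height);
                void     (*run_mc)(struct vdec_context *ctx,
                                   const struct vdec_mc_job *jobs, size_t n_jobs);
                void     (*run_deblock)(struct vdec_context *ctx, unsigned surface,
                                        const uint8_t *strength_map);
                /* Present straight from the GPU surface -- no readback to the CPU. */
                void     (*display)(struct vdec_context *ctx, unsigned surface);
            };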

        EDIT - I haven't complained about the 1 minute edit limit for a while, so consider it complained-about



        • #34
          From Wikipedia, seems pretty correct:

          UVD/UVD+

          Decoding of the H.264/AVC and VC-1 video codecs entirely in hardware. However, video post-processing is passed to the shaders. MPEG-2 decoding is not performed within UVD, but in the shader processors.

          UVD 2 (ATI HD4000)

          Full bitstream decoding of H.264/MPEG-4 AVC, VC-1, as well as MPEG2 video streams; in addition it also supports dual video stream decoding and Picture-in-Picture mode. This makes UVD2 fully BD-Live compliant.

          UVD 2.2 (ATI HD4000 and HD5000)

          UVD 2.2 features a re-designed local memory interface and improves compatibility with MPEG2/H.264/VC-1 videos.


          In brief, on cards with a UVD unit:

          - Decoding of H.264 and VC-1 doesn't use shaders.
          - Decoding of MPEG2 uses shaders on UVD/UVD+; on UVD2 it doesn't use shaders.
          - Post-processing is done on shaders (a small reference sketch of the colour-space math follows below).
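
          As a concrete (and purely illustrative) reference for that last point, the colour space conversion step the shaders perform is just per-pixel arithmetic like the C function below. The coefficients are approximate BT.601 values, not taken from any driver; a real implementation would run the same math in a fragment shader instead of on the CPU.

              /* Illustrative CPU reference for the YCbCr -> RGB colour space
               * conversion step; approximate BT.601 studio-swing coefficients. */
              #include <stdint.h>
              #include <stdio.h>

              static uint8_t clamp8(double v)
              {
                  return (uint8_t)(v < 0.0 ? 0.0 : v > 255.0 ? 255.0 : v);
              }

              static void ycbcr_to_rgb(uint8_t y, uint8_t cb, uint8_t cr,
                                       uint8_t *r, uint8_t *g, uint8_t *b)
              {
                  double c = y - 16, d = cb - 128, e = cr - 128;

                  *r = clamp8(1.164 * c + 1.596 * e + 0.5);
                  *g = clamp8(1.164 * c - 0.392 * d - 0.813 * e + 0.5);
                  *b = clamp8(1.164 * c + 2.017 * d + 0.5);
              }

              int main(void)
              {
                  uint8_t r, g, b;

                  ycbcr_to_rgb(81, 90, 240, &r, &g, &b); /* BT.601 encoding of pure red */
                  printf("R=%u G=%u B=%u\n", r, g, b);   /* prints roughly 255 0 0      */
                  return 0;
              }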



          • #35
            Originally posted by bridgman View Post
            Didn't come from me

            A few things :

            - UVD *does* do the decoding work (without using shaders), but it *also* participates in DRM

            - everything *after* decode is done on shaders... colour space conversion, scaling, filtering, other post processing etc...

            - haven't had time / remembered to find out how r600 handles video but believe it runs decode on CPU then uses same post-processing on shaders

            - we are still working towards an XvBA API release for fglrx, schedule TBD but hopefully soon

            - I'm still saying "don't assume we will be able to release UVD programming info" but we are still going to investigate whether it can be done... we've worked through enough of the higher priority tasks that I think investigation will happen in the next ~6 months or so

            - key point is that spending time investigating UVD programming info release means that time is not being spent on releasing other info/code and that other info has much higher probability of success... we might spend a few man-years of effort investigating UVD and conclude that we can't release anything, but the time would still be gone and other potential good things would not get done in the meantime
            You say that, but I don't believe it. I'm sure the people working on fglrx are working their asses off, but let's get realistic. We always have to wait months to get up-to-date support for the latest stable XServer (the 1.9 version is officially not supported yet, but a leaked 10.10 driver is usable enough). The OpenGL performance is close to horrible and has been for months if not years. The point I'm making here is that the fglrx driver has been in development for quite some time now, but the really big and needed improvements are still not found in the release notes (to name one: fully working OpenGL 4.0 support.. not even asking for 4.1 yet).

            So for the users it seems like not much changes from release to release; only small bugs seem to get fixed. If you look at it that way (and how else can I look at it as a user?), then it would be way more beneficial for the users to let you investigate the XvBA API possibilities rather than fixing minor things again, since the current fglrx driver state is not optimal but usable for probably all users. What we, the users, need most right now is working accelerated video decoding, and I think that if you put 2 people on it for one week you can have it all sorted out very fast. Then another week to make the API and release it. Things don't need to go as dog-slow as they are going now.

            Originally posted by bridgman
            - UVD only accelerates specific formats so for things like Theora and VP8 you're still going to want shader-based decode IMO...

            - I still think the best plan is to implement an XvMC-ish state tracker over Gallium3D (but designed for modern video formats, not MPEG2) then modify ffmpeg/libavcodec and others to use that state tracker for MC and deblock filtering... you could do more (IDCT etc.. ) on shaders in principle but IMO you're better off avoiding having to push data back from GPU to CPU so best approach is to pick a point in the decode pipe where everything *after* that point can be done efficiently on shaders...

            - IIRC the only sticky part with modern video formats and that approach was inter-prediction... probably do-able but haven't had time to work out the details

            EDIT - I haven't complained about the 1 minute edit limit for a while, so consider it complained-about
            Gallium is far-fetched, since you guys still release fglrx drivers and there is no fully working Gallium driver for current cards that is on par with fglrx.



            • #36
              Originally posted by markg85 View Post
              You say that, but I don't believe it. I'm sure the people working on fglrx are working their asses off, but let's get realistic. We always have to wait months to get up-to-date support for the latest stable XServer (the 1.9 version is officially not supported yet, but a leaked 10.10 driver is usable enough). The OpenGL performance is close to horrible and has been for months if not years. The point I'm making here is that the fglrx driver has been in development for quite some time now, but the really big and needed improvements are still not found in the release notes (to name one: fully working OpenGL 4.0 support.. not even asking for 4.1 yet).
              OK, I lost you here. I talked about three things -- how the *current* drivers work, status of XvBA API release, and options for the open source drivers. How can you say "you don't believe me" then rant about fglrx ? What do you not believe ?

              Originally posted by markg85 View Post
              So for the users it seems like not much changes from release to release; only small bugs seem to get fixed. If you look at it that way (and how else can I look at it as a user?), then it would be way more beneficial for the users to let you investigate the XvBA API possibilities rather than fixing minor things again, since the current fglrx driver state is not optimal but usable for probably all users. What we, the users, need most right now is working accelerated video decoding, and I think that if you put 2 people on it for one week you can have it all sorted out very fast. Then another week to make the API and release it. Things don't need to go as dog-slow as they are going now.
              I don't really know how to respond to this. If you believe it should only take a week to design an API, implement the associated driver code, run through all the threat/risk analyses to have confidence that releasing the API will not put our DRM implementation on other OSes at risk then there is probably some disconnect in terms of the work required.

              Originally posted by markg85 View Post
              Gallium is far-fetched, since you guys still release fglrx drivers and there is no fully working Gallium driver for current cards that is on par with fglrx.
              Again, I don't understand this at all. There is no external API equivalent to Gallium3D in fglrx, although it does roughly correspond to the internal "hwl" layer in the fglrx OpenGL driver. I think you may be comparing *Mesa* to the fglrx OpenGL driver, not Gallium3D.

              The Gallium3D driver needs to be sufficiently complete to be able to create and load surfaces, run shaders, and do something with the results.



              • #37
                Originally posted by bridgman View Post
                OK, I lost you here. I talked about three things -- how the *current* drivers work, status of XvBA API release, and options for the open source drivers. How can you say "you don't believe me" then rant about fglrx ? What do you not believe ?



                I don't really know how to respond to this. If you believe it should only take a week to design an API, implement the associated driver code, run through all the threat/risk analyses to have confidence that releasing the API will not put our DRM implementation on other OSes at risk then there is probably some disconnect in terms of the work required.



                Again, I don't understand this at all. There is no external API equivalent to Gallium3D in fglrx, although it does roughly correspond to the internal "hwl" layer in the fglrx OpenGL driver. I think you may be comparing *Mesa* to the fglrx OpenGL driver, not Gallium3D.

                The Gallium3D driver needs to be sufficiently complete to be able to create and load surfaces, run shaders, and do something with the results.
                Let me put it in numbers then.
                1 person works 40 hours a week and you put 2 people on it. That's 80 working hours in total for one week.

                Yes, I say you can figure out whether it's legal to release XvBA documentation in that time. If it's not possible, then it's because of bureaucracy. That much time is enough to find out.

                Yes, I also say that if you add another week you can make an API for it as well and release it. 80 hours is a long time to work non-stop on one thing, and I know (from my limited experience) that most of the work happens in the first few hours and the vast majority of the time is spent refining your creation.

                If you want it, it can happen! I agree that it's optimistic, and perhaps even tight, but not impossible if you just PUT those 2 employees on it full time for 2 weeks in total.

                Note: I do expect those employees to have in-depth knowledge of the fglrx driver, so they know very well where to look to make a public API.


                And I wasn't ranting about fglrx (yet). Just stating my observations and facts.

                I wish I had leadership over the fglrx driver; then I would certainly give this a try! But then again, if I did, I would tell all the people working on fglrx to drop it and work on Gallium instead.



                • #38
                  OK, let me try one more time. The release path for the XvBA API has nothing to do with fglrx; it is entirely related to protecting the DRM implementation in the drivers for *other* OSes. Knowledge of fglrx is almost totally irrelevant.

                  You didn't answer my question about why "you don't believe" my earlier comments :
                  - UVD *does* do the decoding work (without using shaders), but it *also* participates in DRM

                  - everything *after* decode is done on shaders... colour space conversion, scaling, filtering, other post processing etc...

                  - haven't had time / remembered to find out how r600 handles video but believe it runs decode on CPU then uses same post-processing on shaders

                  - we are still working towards an XvBA API release for fglrx, schedule TBD but hopefully soon

                  - I'm still saying "don't assume we will be able to release UVD programming info" but we are still going to investigate whether it can be done... we've worked through enough of the higher priority tasks that I think investigation will happen in the next ~6 months or so

                  - key point is that spending time investigating UVD programming info release means that time is not being spent on releasing other info/code and that other info has much higher probability of success... we might spend a few man-years of effort investigating UVD and conclude that we can't release anything, but the time would still be gone and other potential good things would not get done in the meantime



                  • #39
                    Originally posted by bridgman View Post
                    OK, let me try one more time. The release path for XvBA API has nothing to do with fglrx, it is entirely related to protecting the DRM implementation on the drivers for *other* OSes. Knowledge of fglrx is almost totally irrelevent.

                    You didn't answer my question about why "you don't believe" my earlier comments :
                    Right, first one little mistake. The part I didn't believe is:

                    Originally posted by bridgman View Post
                    - I'm still saying "don't assume we will be able to release UVD programming info" but we are still going to investigate whether it can be done... we've worked through enough of the higher priority tasks that I think investigation will happen in the next ~6 months or so

                    - key point is that spending time investigating UVD programming info release means that time is not being spent on releasing other info/code and that other info has much higher probability of success... we might spend a few man-years of effort investigating UVD and conclude that we can't release anything, but the time would still be gone and other potential good things would not get done in the meantime
                    and I replied to that part with:
                    You say that, but I don't believe it. I'm sure the people working on fglrx are working their asses off, but let's get realistic. We always have to wait months to get up-to-date support for the latest stable XServer (the 1.9 version is officially not supported yet, but a leaked 10.10 driver is usable enough). The OpenGL performance is close to horrible and has been for months if not years. The point I'm making here is that the fglrx driver has been in development for quite some time now, but the really big and needed improvements are still not found in the release notes (to name one: fully working OpenGL 4.0 support.. not even asking for 4.1 yet).

                    So for the users it seems like not much changes from release to release; only small bugs seem to get fixed. If you look at it that way (and how else can I look at it as a user?), then it would be way more beneficial for the users to let you investigate the XvBA API possibilities rather than fixing minor things again, since the current fglrx driver state is not optimal but usable for probably all users. What we, the users, need most right now is working accelerated video decoding, and I think that if you put 2 people on it for one week you can have it all sorted out very fast. Then another week to make the API and release it. Things don't need to go as dog-slow as they are going now.
                    In the list of things you said, I quoted a little too much, but I can't edit it due to the 1 minute edit linux..



                    • #40
                      Big typo.
                      I meant:

                      In the list of things you said, I quoted a little too much, but I can't edit it due to the 1 minute edit limit..



                      • #41
                        Originally posted by markg85 View Post
                        What we, the users, need most right now [...]
                        Erm, speak for yourself, buddy; I'm doing fine with OpenGL output in MPlayer.
                        Gotta love broad claims...
                        AMD/ATI might as well ask the MPlayer team to rename the OpenGL output "ATI GPU acceleration (Work in Progress)" and be done with it... Like anyone would notice anyway.



                        • #42
                          @bridgman

                          Would it be possible to access XvBA with the OSS driver?



                          • #43
                            /me waits patiently (and thankfully) for the TGSI-based work and suggests everyone chill out and do the same (or use gbeauche's xvba-video if using Catalyst).



                            • #44
                              Originally posted by markg85 View Post
                              Right, first one little mistake. The part I didn't believe is:

                              <comments about UVD programming info>

                              and I replied to that part with:

                              <comments about fglrx progress and how long things should take>
                              OK, I think I see the disconnect. Tell me if this makes sense.

                              There are (at least) two different topics under discussion here :

                              - opening the XvBA API implemented by the fglrx driver

                              - providing programming information for an open source UVD driver

                              They are totally different activities - one (XvBA API) started a while ago and is (hopefully) pretty close to being done, the other (UVD programming information) has not started yet and I don't expect it to happen for a while.

                              I was talking about UVD programming information, but I'm wondering if you thought I was talking about fglrx/XvBA when I said "6 months" and that triggered your comments about how long things should take ?

                              Does that make any sense ?

                              Originally posted by Kano View Post
                              @bridgman

                              Would it be possible to access xvba with oss driver?
                              That (running a modified XvBA proprietary driver over the open source stack) is one of the options we are looking at. The problem is that if you run a proprietary user space driver over an open kernel driver you are making reverse engineering so much easier that the risk is not much different from directly releasing UVD programming info.

                              If we finished investigating the release of UVD programming info and concluded that doing so was not *quite* safe but close, then a "UVD driver blob over open source stack" solution might reduce the risk just enough to be viable (depending on what the showstopper risk(s) turned out to be). It's definitely one of the options on the table anyways.



                              • #45
                                Originally posted by bridgman View Post
                                Didn't come from me
                                (...)

                                - UVD only accelerates specific formats so for things like Theora and VP8 you're still going to want shader-based decode IMO...

                                - I still think the best plan is to implement an XvMC-ish state tracker over Gallium3D (but designed for modern video formats, not MPEG2) then modify ffmpeg/libavcodec and others to use that state tracker for MC and deblock filtering... you could do more (IDCT etc.. ) on shaders in principle but IMO you're better off avoiding having to push data back from GPU to CPU so best approach is to pick a point in the decode pipe where everything *after* that point can be done efficiently on shaders...

                                - IIRC the only sticky part with modern video formats and that approach was inter-prediction... probably do-able but haven't had time to work out the details
                                It comes from me! And I can prove it from your own writing right here.

                                AMD pays you for thinking, and you "still think (...) --> XvMC (...) --> Gallium3D (...) --> ffmpeg (...) --> on shaders".

                                But in the past you wrote to me that there was no point in doing this, because no GPU company does it and without a UVD unit you should do it on the CPU. Then I reminded you about the R600 HD2900, which has no UVD unit and still does H.264 decode on the GPU; then a light went on in your mind and the idea of an open source version of that stuff was born.

                                And now the big-brained bridgman is thinking about it, and that's what I'm talking about: AMD pays you for thinking, so I'm right that the AMD open source driver team is working on the shader-based solution, because you are thinking about that stuff while being paid by AMD.

                                It's psychoanalysis; that's my skill. Other people are great programmers, and I'm great at thinking about people. People do what they can do.

