AMD Is Working On A New VA-API State Tracker For Gallium3D


  • #11
    Originally posted by zanny
    OpenMAX does decode as well; they could implement that fully, but they probably know that nobody is targeting OpenMAX for decode. At the least, there is no gstreamer openmax the way there is gstreamer vaapi (well, there is, but it is less developed and stable; on Arch, for example, the vaapi plugins are in the main repos while omx is stuck in the AUR).

    In the end, vdpau is Nvidia, vaapi is Intel, and if they all stopped being babies they would all use OpenMAX, or at least standardize on one of the others. At least we have vaapi-on-vdpau and vice-versa wrapper libraries. But it is perfectly understandable for the situation to be frustrating to AMD, who have no obvious answer.
    There is gst-omx.



    • #12
      Originally posted by phoronix
      It's not clear why AMD is working on this VA-API state tracker when there's already the mature VDPAU state tracker working for the R600 and RadeonSI Gallium3D drivers.
      If I had to guess, the reasons are business rather than technical. OpenMAX for decode and encode is already enough, and OpenMAX is by far the most widely used API thanks to Android.
      VA-API is used on ChromeOS, so maybe it is part of AMD's plan to get into Chromebooks.



      • #13
        Here's an idea, maybe AMD wants to support all APIs? And OpenMAX is coming too?



        • #14
          Originally posted by xeekei
          Here's an idea, maybe AMD wants to support all APIs? And OpenMAX is coming too?
          OpenMAX is supposed to work. Haven't tried it myself, though.



          • #15
            Originally posted by zanny
            OpenMAX does decode as well; they could implement that fully, but they probably know that nobody is targeting OpenMAX for decode. At the least, there is no gstreamer openmax the way there is gstreamer vaapi (well, there is, but it is less developed and stable; on Arch, for example, the vaapi plugins are in the main repos while omx is stuck in the AUR).

            In the end, vdpau is Nvidia, vaapi is Intel, and if they all stopped being babies they would all use OpenMAX, or at least standardize on one of the others. At least we have vaapi-on-vdpau and vice-versa wrapper libraries. But it is perfectly understandable for the situation to be frustrating to AMD, who have no obvious answer.
            Note that: (1) OpenMAX is kind of horrible, and (2) the Mesa OpenMAX implementation doesn't seem to support input/output as eglImage (and probably couldn't very easily without introducing a YUV->RGB step, which is avoided with vdpau and, I assume, vaapi), so it is not very awesome for integrating with GL rendering. Not sure, but in the discrete GPU case it might also end up forcing VRAM <-> RAM transfers.

            My guess is simply that AMD wanted a better/more-efficient API supporting encode than openmax.



            • #16
              Originally posted by robclark
              My guess is simply that AMD wanted a better/more-efficient API supporting encode than openmax.
              I think they said that VA-API was worse.

              Originally posted by Deathsimple
              Sorry, I forgot to explain that. The encoding part of VA-API works with slice-level data, but the output of VCE is an elementary stream.

              So to support VA-API the software stack would look something like this:
              1. VCE encodes the frame to an elementary stream
              2. The driver decodes the elementary stream back to slice-level data
              3. VA-API passes the slice-level data to the application
              4. The application encodes the slice-level data back into an elementary stream

              That makes no sense in terms of both CPU and implementation overhead, and it is the main reason why we dropped VA-API support.
              Originally posted by agd5f
              The vaapi encode interface was designed before we started the open-source VCE project. Why didn't Intel use omx or vdpau or some other existing API to begin with? omx is a lot more flexible in being able to support different types of hardware. vaapi is very much tied to the way Intel's hardware works (on both the encode and decode sides), which makes it a poor fit for other hardware.
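
The four-step round trip described in the quote above can be made concrete with a toy model. Everything here is a hypothetical stand-in (made-up function names, plain strings in place of real bitstream and slice data, not actual driver code); it only illustrates why the unpack/repack steps are wasted work:

```python
# Toy model of the four-step round trip quoted above. All names are
# hypothetical; strings stand in for bitstream and slice data.

def vce_encode(frame):
    """Step 1: VCE hardware emits a finished elementary stream."""
    return "ES[" + frame + "]"

def driver_unpack(elementary_stream):
    """Step 2: the driver parses the stream back into slices, only because
    VA-API's encode interface hands out slice-level data."""
    return ["slice0:" + elementary_stream, "slice1:" + elementary_stream]

def app_repack(slices):
    """Steps 3-4: the application receives the slices and re-encodes them
    into an elementary stream, duplicating work VCE already did."""
    return "|".join(slices)

stream = vce_encode("frame0")   # hardware already produced a usable stream
slices = driver_unpack(stream)  # pure overhead on this hardware
out = app_repack(slices)        # pure overhead on this hardware
```

The unpack/repack pair exists only to satisfy the API's slice-level contract; on hardware that already emits an elementary stream, both steps burn CPU for nothing, which is the objection in the quote.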



              • #17
                Originally posted by chithanh
                I think they said that VA-API was worse.
                Yes, but that is all about copying around / munging the encoded bitstream, rather than making copies of the significantly larger unencoded YUV frames.



                • #18
                  Originally posted by robclark
                  Yes, but that is all about copying around / munging the encoded bitstream, rather than making copies of the significantly larger unencoded YUV frames.
                  When you look at the code in the Intel driver's staging branch and compare it with the 1.4pre2 branch, I think some big rework is coming to VA-API. There are quite a few limitations in the current API, especially in getting access to the real data if you want to render it, let alone copying surfaces, which is done through Xlib, needs proper locking, and therefore kills every performance approach.



                  • #19
                    Originally posted by robclark
                    Note that: (1) OpenMAX is kind of horrible,
                    Bu... But it's from Khronos and it's open! *scnr*



                    • #20
                      Originally posted by fritsch
                      When you look at the code in the Intel driver's staging branch and compare it with the 1.4pre2 branch, I think some big rework is coming to VA-API. There are quite a few limitations in the current API, especially in getting access to the real data if you want to render it, let alone copying surfaces, which is done through Xlib, needs proper locking, and therefore kills every performance approach.
                      Hmm, well, I won't claim to have looked at the vaapi encoding APIs... although ideally you get the YUV data into VRAM as soon as possible and leave it there (passing around handles after that point). OpenMAX (without eglImage) requires CPU-accessible buffer pointers, which is really the thing you want to avoid as much as possible.

                      Fortunately it is at least easier to fix vaapi API than it is to fix openmax API.
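
The handle-vs-pointer distinction above can also be sketched as a toy model (hypothetical names; a bytearray stands in for a YUV surface that would really live in VRAM): passing a handle moves only a small token between pipeline stages, while the CPU-pointer style copies the whole frame at every hand-off:

```python
# Toy model (hypothetical names) of the two buffer-handling styles discussed
# above: passing an opaque handle to a frame that stays put versus handing
# each stage a CPU-accessible copy (the OpenMAX-without-eglImage case).

frame = bytearray(1920 * 1080 * 3 // 2)  # NV12-sized 1080p frame, ~3 MB

def pass_by_handle(buf):
    """Zero-copy style (eglImage-like): the next stage gets an opaque
    handle; the pixel data itself never moves."""
    return id(buf)  # a small token, not the data

def pass_by_cpu_pointer(buf):
    """CPU-pointer style: the next stage needs a CPU-accessible buffer,
    so the full frame is copied out."""
    return bytearray(buf)  # full copy of the YUV data

handle = pass_by_handle(frame)       # cost: a few bytes, regardless of size
shadow = pass_by_cpu_pointer(frame)  # cost: ~3 MB copied, per stage
```

In a real pipeline the handle would be something like a dma-buf file descriptor, and on a discrete GPU each CPU-side copy also risks exactly the VRAM <-> RAM transfers mentioned earlier in the thread.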

