AMD Is Working On A New VA-API State Tracker For Gallium3D
Originally posted by phoronix:
"It's not clear why AMD is working on this VA-API state tracker when there's already the mature VDPAU state tracker working for the R600 and RadeonSI Gallium3D drivers."

VA-API is used on ChromeOS, so maybe it is part of AMD's plan to get into Chromebooks.
Originally posted by zanny:
"OpenMAX does decode as well; they could implement that fully, but they probably know that nobody is targeting OpenMAX for decode. At the least, there is no gstreamer openmax the way there is gstreamer-vaapi (well, there is one, but it is less developed and stable; for example, on Arch the vaapi plugins are in the main repos while omx is stuck in the AUR).

In the end, vdpau is Nvidia, vaapi is Intel, and if they all stopped being babies they would all use OpenMAX, or at least standardize on one of the others. At least we have vaapi-to-vdpau and vice-versa wrapper libraries. But it is perfectly understandable for the situation to be frustrating to AMD, who have no obvious answer."
My guess is simply that AMD wanted a better/more-efficient API supporting encode than openmax.
Originally posted by robclark:
"My guess is simply that AMD wanted a better/more-efficient API supporting encode than openmax."
Originally posted by Deathsimple:
"Sorry, I forgot to explain that. The encoding part of VA-API works with slice-level data, but the output of VCE is an elementary stream.

So to support VA-API, the software stack would look something like this:
1. VCE encodes the frame to an elementary stream.
2. The driver decodes the elementary stream back to slice-level data.
3. VA-API passes the slice-level data to the application.
4. The application encodes the slice-level data back into an elementary stream.

That makes no sense in terms of both CPU and implementation overhead, and it is the main reason why we dropped VA-API support."

Originally posted by agd5f:
"The vaapi encode interface was designed before we started the open source VCE project. Why didn't Intel use omx or vdpau or some other existing API to begin with? omx is a lot more flexible in being able to support different types of hw. vaapi is very much tied to the way Intel's hw works (on both the encode and decode sides), which makes it a poor fit for other hw."
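Deathsimple's four steps can be modeled with a short sketch. All of the function names below are hypothetical stand-ins, not real driver or VA-API entry points; the point is only to make the data flow visible: VCE already produces a finished elementary stream, so steps 2-4 re-derive and re-serialize data without adding any information.

```python
# Toy model of the redundant round trip described above.
# vce_encode / driver_split_to_slices / app_reassemble are illustrative
# names, not real APIs.

def vce_encode(frame: bytes) -> bytes:
    """Step 1: hardware encodes the frame straight to an elementary stream."""
    return b"\x00\x00\x00\x01" + frame  # stand-in for a NAL-unit stream

def driver_split_to_slices(stream: bytes) -> list[bytes]:
    """Step 2: the driver would have to parse the stream back into slices."""
    return [stream[i:i + 4] for i in range(0, len(stream), 4)]

def app_reassemble(slices: list[bytes]) -> bytes:
    """Steps 3-4: VA-API hands the slices to the application, which
    re-serializes them into the same stream VCE produced in step 1."""
    return b"".join(slices)

frame = b"raw YUV data"
stream = vce_encode(frame)
round_tripped = app_reassemble(driver_split_to_slices(stream))
assert round_tripped == stream  # steps 2-4 added work but no information
```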
Originally posted by robclark:
"Yes, but that is all about copying around / munging the encoded bitstream, rather than making copies of the significantly larger unencoded YUV frames."
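Some back-of-the-envelope arithmetic shows why the distinction matters. The figures below (1080p NV12, an 8 Mbit/s H.264 stream at 30 fps) are illustrative assumptions, not numbers from the thread:

```python
# Rough size comparison: unencoded 1080p NV12 frame vs. the average
# encoded frame of an assumed 8 Mbit/s, 30 fps H.264 stream.

width, height = 1920, 1080
nv12_frame = width * height * 3 // 2      # NV12 is 12 bits/pixel
encoded_frame = (8_000_000 // 8) // 30    # average bytes per encoded frame

print(f"raw NV12 frame: {nv12_frame:,} bytes")     # 3,110,400 bytes
print(f"encoded frame:  {encoded_frame:,} bytes")  # 33,333 bytes
print(f"ratio:          ~{nv12_frame // encoded_frame}x")
```

Copying the encoded bitstream an extra time moves tens of kilobytes per frame; copying the raw YUV frame moves roughly two orders of magnitude more.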
Originally posted by fritsch:
"When you look at the code in the Intel driver's staging branch and compare it with the 1.4pre2 branch, I think some big rework is coming to VAAPI. There are quite a few limitations in the current API, especially getting access to the real data if you want to render it, let alone copying surfaces, which is done with Xlib, needs proper locking, and therefore kills every performance approach.

Fortunately, it is at least easier to fix the vaapi API than the openmax API."
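fritsch's locking point can be illustrated with a toy model: if every surface copy must hold one display-wide lock (roughly what routing copies through Xlib imposes), copies issued from multiple threads serialize completely. The lock and "surfaces" below are purely illustrative, not real Xlib or VA-API objects:

```python
# Toy model: a single display-wide lock forces surface copies to run
# one at a time, no matter how many threads issue them.
import threading

display_lock = threading.Lock()   # stands in for the Xlib display lock
counter_lock = threading.Lock()
active = 0                        # copies currently in flight
max_active = 0                    # peak concurrency observed

def copy_surface(src: bytearray, dst: bytearray) -> None:
    global active, max_active
    with display_lock:            # every copy serializes here
        with counter_lock:
            active += 1
            max_active = max(max_active, active)
        dst[:] = src              # the actual pixel copy
        with counter_lock:
            active -= 1

src = bytearray(b"\x10" * 4096)
dsts = [bytearray(4096) for _ in range(8)]
threads = [threading.Thread(target=copy_surface, args=(src, d)) for d in dsts]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert max_active == 1            # no two copies ever overlapped
assert all(d == src for d in dsts)
```

Eight threads request copies, but the peak concurrency never exceeds one, which is exactly the "killing every performance approach" effect described above.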