Originally posted by bridgman
AMD Releases Open-Source R600/700 3D Code
-
Yes, I think the code would be largely common across any chip using shaders rather than dedicated hardware. I was thinking of "duplicate" in the sense that this code has already been implemented in a number of GPL-licensed software decoders.
Comment
-
Yep. The tricky part is that if the approach with the most code sharing also requires the most work before showing any useful results, the choice becomes harder.
The slice-level approach also means that the driver devs need to maintain the front-end (e.g. entropy decode) code in each driver, whereas with a slightly lower-level API that code would be maintained once in the player, or in whatever decoder sat between the player and the driver API.
In other words, there's a practical difference between "sharing a single copy" and "having one set of code which can more or less be used by a bunch of different drivers, each with their own copy, each being maintained independently by different developers and probably drifting in slightly different directions over time". Going with a slice-level API is the second case, unfortunately.
Last edited by bridgman; 04 January 2009, 07:00 AM.
Comment
-
Originally posted by bridgman
So far the second option seems conceptually simpler but more work unless we can borrow the entropy decoding stage from an existing software decoder; that gets tricky because all of the software decoders seem to be GPL or LGPL licensed. You can move code from an MIT-licensed X driver to a GPL-licensed software decoder, but you can't move from a GPL-licensed SW decoder to an MIT-licensed driver without contacting all the copyright holders and getting their agreement to relicense (which, in practice, rarely seems to happen).
Comment
-
I'm not sure what the current thinking is re: LGPL in xorg drivers, but I'll ask. I guess the best approach would be to make a subset library from the current decoder which only handled the work we did not offload to the GPU, then link that binary in; that would also allow multiple drivers to share the same lib.
Interesting idea - thanks!
Comment
-
bridgman> The only discussion in the thread was about whether it was
bridgman> worth implementing XvMC, which is currently MPEG2-only
Yes
bridgman> whether MPEG2 decoding placed enough load on the system to
bridgman> justify implementing XvMC
Yes
bridgman> I think we all agree that support is needed for the more demanding
bridgman> formats, particularly H.264. The question *there* is whether that
bridgman> is a higher priority than 3D support, which is what we are working on now.
For Rage, Radeon, FireMV-2D: video decoding 1st, then power management, then 3D
For FirePro-3D, FireGL-3D: 3D 1st, then power management, then video decoding
Have I left out any video chip families?
--------------
smitty3268> I think the FFMPEG devs probably have a better idea about how to
smitty3268> write a codec than AMD does
Wow! Given that ffmpeg core dumps constantly, you must have a *really* low
opinion of AMD.
--------------
bridgman> I just spent another half hour going through [ ... ]
smitty3268> I wouldn't worry about trying to decode that rambling
Obviously bridgman needs rambling decode acceleration. That was
a half hour that could have been spent on XvMC.
Comment
-
Originally posted by Dieter
Yes
Yes
Originally posted by Dieter
For Rage, Radeon, FireMV-2D: video decoding 1st, then power management, then 3D
For FirePro-3D, FireGL-3D: 3D 1st, then power management, then video decoding
Originally posted by Dieter
Obviously bridgman needs rambling decode acceleration. That was a half hour that could have been spent on XvMC.
Last edited by bridgman; 07 January 2009, 02:05 PM.
Comment
-
Originally posted by bridgman
Just to be clear, we're dealing with finite resources here, so the question is not "would it be nice to have MPEG2 accel?" (even I can answer that one) but "should the community work on MPEG2 accel instead of H.264/VC-1 accel?", i.e. which should be worked on first?
Why don't you create a forum poll, so everyone could give their own opinion about what should be created first?
Comment
-
People don't want a poll this late in the game; they want and NEED a real subset AVC decode library and related code ASAP, perhaps as a temporary stop-gap measure until it all settles down later if need be, PLUS development headers and DOCUMENTATION, and sample fully working code showing anyone how to use it ASAP/TODAY.
Some basic benchmark code/charts for each proposed code example might be nice too, so you can decide at a glance which routine or usage suits your requirements for code review and insertion into the likes of FFMPEG etc.
It's been said that "the API is the least of the problems", and that's true to some degree, but bridgman has stated he believes there's enough documentation out there right now.
Presumably that means there's enough information right now for someone here to take parts of the ATI/AMD API(s) and make an equivalent of VDPAU?
Call it an alpha AVIVO.lib for AVC, VC-1, Dirac, and even MPEG2 if it's only another entry point in the lib API.
Remember, the actual ATI hardware can officially decode all but one of these already; some video dev reading here must be capable of running up a quick HW-assist AVC decode library based on the ATI API, in the same vein as VDPAU being used right now in FFMPEG code, in a few days, and posting it here?
There MUST be some test API code sat on the PCs of the ATI/AMD devs bridgman is in contact with; he could get their permission to use it and to contribute an hour or so for outside use, to learn from, and to use in a basic alpha-state open AVIVO library.
as Prototyped outlined here
"....
In September, ATI released their Catalyst 8.9 driver with X Video Bitstream Acceleration (XvBA) libraries that could be enabled using tools that shipped with the driver, and then last month, the 8.10 driver enabled the UVD2 video acceleration by default.
The unfortunate thing is that they didn't also ship any development headers with the driver, with the result that the binary libraries were available, but there was no SDK or information available to media player developers to actually utilize the libraries. So XvBA currently remains a white elephant.
..."
For PR purposes, and to try to get outsiders to equate the ATI/AMD library with the VDPAU subset library, I think it should be called AVIVO.lib rather than XvBA (X-Video Bitstream Acceleration), where people are confusing it with the old X-Video Motion Compensation (XvMC).
Remember also that it's 4 months since the library(s) have been available, so alpha/beta test code at the very least must exist on the ATI devs' machines to show off this new library's use, but still NO docs are available that I know of to explain how you might use this library or its official API for hardware-assist video decoding etc. WHY IS THAT?
A poll is wasting people's time; where are these XvBA docs, so FFMPEG people and the like MIGHT stand a chance to get some parity with the current HW-assist VDPAU FFMPEG code diffs....
"...
From Wikipedia, the free encyclopedia
X-Video Bitstream Acceleration (XvBA), designed by AMD for its ATI Radeon GPU, is an extension of the X video extension (Xv) for the X Window System on Linux operating systems.[1]
The XvBA API allows video programs to offload portions of the video decoding process to the GPU video hardware. Currently, the portions designed to be offloaded by XvBA onto the GPU are motion compensation (mocomp), inverse discrete cosine transform (iDCT), and variable-length decoding (VLD) for MPEG-2, MPEG-4 AVC (H.264) and VC-1 encoded video.
XvBA is the Linux equivalent of Microsoft's DirectX Video Acceleration (DxVA) API for Windows.[2]
...
"
Seeing as it seems to be the fashion, and given that ATI/AMD sold these cards to us as giving access to some form of hardware-assisted video decode/playback etc. with a driver update, I have several X1550 and HD3650 cards and am looking to get some HD4xxx soon, if some HW-assisted code comes home sometime soon, or something else to start advocating wherever we go....
As it happens, the lad's HD3650 has a large 1 GB memory on it; I wonder if pre-loading/piping some video through a FIFO to the card's internal memory might improve any future HW-assisted processing!
Last edited by popper; 07 January 2009, 07:00 PM.
Comment