Very well, thank you for sharing.
Bridgman Is No Longer "The AMD Open-Source Guy"
-
John Bridgman is no longer managing these efforts.
Last edited by Will00ard10; 15 October 2012, 11:45 AM.
-
Originally posted by 89c51:
That is not 100% true, because I've only looked into H.264; I'm not deep enough into WebM to judge whether it can be accelerated efficiently with shaders. The big issue with H.264 is its CAVLC/CABAC coding of the input stream, which is a completely serial process.
All the other stages are more or less doable with shaders, but being doable with shaders also means they are doable with (for example) SSE. So you need to compare the optimal CPU-based implementation against the overhead of moving all the data the bitstream decoding produces into GPU RAM, setting everything up, etc.
Christian.
Veerapan, who worked on WebM, might be able to give us some info.
WebM is similar to H.264 in many ways, and the detokenizing/decompressing of the input stream is a very serial process. Currently, the WebM decoder detokenizes/decompresses each macroblock immediately before running the subpixel filtering/IDCT/dequant stages. There has been some work recently on frame-level multithreading, but nothing has landed yet.
The best bet for OpenCL+WebM might be to rewrite the input decompression to run for an entire frame at a time, and then split the rest of the processing stages into a separate piece. Someone has been working on this and has been discussing it on the WebM developer mailing list. If you can split the serial detokenizing/decompressing off into a separate thread that is only responsible for decompressing the input stream a frame at a time, that could open up possibilities for task-level parallelism (one thread to decompress, one or more to decode), as well as making the CL portion of frame reconstruction simpler.
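The frame-at-a-time split described above (one strictly serial thread detokenizing, feeding one or more reconstruction workers) is a classic producer/consumer pipeline. Here is a minimal sketch in Python with toy stand-in functions; `detokenize_frame` and `reconstruct_frame` are hypothetical placeholders, not actual WebM decoder APIs:

```python
import queue
import threading

def detokenize_frame(frame_id):
    # Stand-in for the serial entropy-decoding stage (the CAVLC/CABAC-like
    # part). In a real decoder this would parse the compressed bitstream.
    return {"id": frame_id, "coeffs": list(range(4))}

def reconstruct_frame(tokens):
    # Stand-in for the parallelizable stages: dequant, IDCT, subpixel
    # filtering. These are the parts a shader/CL kernel could take over.
    return (tokens["id"], sum(tokens["coeffs"]))

def decode(num_frames, num_workers=2):
    q = queue.Queue(maxsize=4)  # bounded: producer stays a few frames ahead
    results = []
    lock = threading.Lock()

    def producer():
        # Strictly serial: one frame's tokens at a time, in order.
        for i in range(num_frames):
            q.put(detokenize_frame(i))
        for _ in range(num_workers):
            q.put(None)  # one sentinel per worker to signal shutdown

    def worker():
        while True:
            tokens = q.get()
            if tokens is None:
                break
            out = reconstruct_frame(tokens)
            with lock:
                results.append(out)

    threads = [threading.Thread(target=producer)]
    threads += [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(results)
```

In a real decoder the workers would hand whole-frame token buffers to the GPU, which is exactly why the decompression has to be reorganized to run a frame at a time first.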
-
Originally posted by blackiwid:
Could you estimate how much time you would need to invest to get GPU-based (i.e. shader-based) acceleration of x264 at 720p and 1080p,
for someone who can program in C but has no experience in kernel, X, or driver development?
I would maybe try it if it had any chance of completion without working on it for 6 months full-time. And Bridgman did say it's all there to get it running easily.
He never explicitly said "easy", but if that task would cost $100,000 in manpower it would be a pointless statement, because then it would be clear to anybody that it will never happen unless AMD does it themselves.
Some of that time was spent learning OpenCL and video decode algorithms, but I'd still budget a bit of time.
-
@Bridgman
Originally posted by olesalscheider:
The technical review states that HSA-compatible GPUs will support context switching and preemption. I suppose there will be some sort of software scheduler. Or will it be handled completely by hardware?
Given the former case, are there already plans for how to implement this in the Linux kernel?
Will the HSA-capable hardware come with full documentation (apart from UVD)?
-
Originally posted by entropy:
For whoever is interested, there is a new document available covering HSA:
Heterogeneous System Architecture: A Technical Review (PDF)
-
The technical review states that HSA-compatible GPUs will support context switching and preemption. I suppose there will be some sort of software scheduler. Or will it be handled completely by hardware?
Given the former case, are there already plans for how to implement this in the Linux kernel?
-
Originally posted by blackiwid:
Yes, that's exactly why it is not included by default: today nobody uses MPEG2 anymore... yes, there are always some special cases... but it's pointless, and even if somebody does use it, the compression of that format is so bad that even the weakest netbook can decode it with 10-20% CPU load.
As for MPEG2 not being used any more, or its decode being a cakewalk: I explained in that original message, "While MPEG2 decode support might seem rather pedestrian to some, it is beneficial to many others (example: the ATSC digital TV system uses MPEG2 for channel streams). In addition, users of hardware supported by the r300g and nouveau drivers would also be able to make use of the state trackers." ... plus the obvious benefit to low-power devices with an anaemic CPU that happen to have a decent GPU (and which may be supported by a respective OSS driver).
-
For whoever is interested, there is a new document available covering HSA:
Heterogeneous System Architecture: A Technical Review (PDF)
-
Originally posted by agd5f:
You need to build VDPAU support in Mesa (the --enable-vdpau configure option). At the moment it only supports MPEG1/2.
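For reference, a from-source Mesa build enabling the VDPAU state tracker might look roughly like this. Only --enable-vdpau comes from agd5f's post; the Gallium driver list and the rest of the invocation are illustrative assumptions and vary by Mesa version and hardware:

```shell
# Illustrative Mesa autoconf invocation; exact flags vary by Mesa version.
# --enable-vdpau builds the VDPAU state tracker (per agd5f's post);
# the Gallium driver list here is just an example for r600g/nouveau hardware.
./autogen.sh --enable-vdpau --with-gallium-drivers=r600,nouveau
make && sudo make install
```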