Regarding that, I'm still wondering how well that will work with smaller IGPs and the like. I can see it working on bigger chips, and even on small ones provided it's a dedicated card with a capable 3D engine. With UVD/UVD2, even slow IGPs can play most content out there with very little difficulty, and still manage not to overtax the CPU.
UVD/hw acceleration: if, when?
Originally posted by droidhacker:
It may *seem* important, but it isn't an all-or-nothing situation.
Go back and look at the radeon feature matrix: http://www.x.org/wiki/RadeonFeature
You will note "Video Decode Using the 3D Engine". It may not be able to take over everything, but it should reduce the CPU strain to well within reasonable limits (rather than maxing out any dual-core and dropping frames).
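Motion compensation (MC) is the sort of decode stage that maps well onto a 3D engine, since each predicted block is essentially a displaced texture fetch plus a residual add. A minimal Python sketch of what MC amounts to; all names here are illustrative, not taken from the radeon driver:

```python
# Hypothetical sketch of motion compensation, the decode stage that suits
# a GPU's 3D engine: predict a block by copying pixels from a reference
# frame at a motion-vector offset, then add the decoded residual.

def motion_compensate(reference, mv_x, mv_y, bx, by, size, residual):
    """Predict one size x size block at (bx, by) from a reference frame."""
    block = []
    for y in range(size):
        row = []
        for x in range(size):
            # Fetch the displaced pixel -- on a GPU this is one texture read.
            pred = reference[by + y + mv_y][bx + x + mv_x]
            # Add the residual and clamp to the 8-bit sample range.
            row.append(max(0, min(255, pred + residual[y][x])))
        block.append(row)
    return block

# Tiny demo: a 4x4 block predicted from a flat grey reference, shifted by (1, 1).
ref = [[128] * 8 for _ in range(8)]
res = [[10] * 4 for _ in range(4)]
out = motion_compensate(ref, 1, 1, 0, 0, 4, res)
```

Every output pixel is independent of the others, which is exactly why this stage parallelizes onto shaders.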
Originally posted by agd5f:
3D is also the only extensible method. You can't add support for new formats to UVD; it only decodes a fixed set of formats. You can add support for new formats using the 3D engine.
Originally posted by brent:
Originally posted by garytr24: isn't it possible to get hardware assistance for decoding on g33 with gallium and the shader pipeline?
Also, the kind of acceleration you get this way (that is, MC-level) is not very useful, especially with H.264 High Profile, where acceleration is most needed.
H.264 data in High Profile is normally encoded with CABAC. CABAC decoding needs a lot of processing power and cannot be parallelized well, so it isn't viable to offload it to shaders. At high bitrates (like on Blu-ray), if I remember correctly, CABAC usually becomes the most demanding decoding step, and you can't speed it up with MC acceleration at all.
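The serial nature of CABAC can be sketched with a toy binary arithmetic decoder (this is not the real H.264 CABAC engine, just an illustration of the dependency): every binary symbol ("bin") is decoded against coder state that the previous bin just updated, so bin N+1 cannot start before bin N finishes.

```python
# Toy binary arithmetic decoder showing why CABAC resists parallelism:
# the (low, range) state is rewritten by every decision, so decoding is
# an inherently serial chain. Probabilities are fixed-point, /65536.

def decode_bins(code_value, probs, n_bins):
    """Serially decode n_bins from one arithmetic code word."""
    low, rng = 0, 1 << 16          # coder state shared by every bin
    bins = []
    for i in range(n_bins):
        # Split the current interval according to this bin's probability.
        split = low + (rng * probs[i] >> 16)
        if code_value < split:
            bins.append(0)
            rng = split - low       # state update depends on the decision...
        else:
            bins.append(1)
            low, rng = split, low + rng - split
        # ...so the next iteration cannot begin until this one ends.
    return bins
```

A real CABAC engine additionally renormalizes the range and adapts the context probabilities after every bin, which only deepens the serial dependency.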
Plain and simple, in my opinion, only full (i.e. bitstream/VLD level) acceleration is worth implementing when it comes to H.264.
VC-1 is different, but it's not nearly as hard to decode as H.264 anyway.
## VGA ##
AMD: X1950XTX, HD3870, HD5870
Intel: GMA45, HD3000 (Core i5 2500K)
Looks like it. The lack of parallelisability of CABAC is the principal reason it doesn't do well on shaders (which rely heavily on parallelism for performance), so yeah - I doubt the Radeon shader core would be much help in decoding CABAC. It could still be of use in the other decoding stages, though.