X.Org SoC: Gallium3D H.264, OpenGL 3.2, GNU/Hurd
-
Thanks, Bridgman!
Originally posted by bridgman:
Yeah, I think that would be accelerating the parts which are a good fit for shader processing. The nice things about going with shaders are (a) to a large extent the same code can run on hardware from multiple vendors and generations, and (b) the same framework can be used for codecs such as Theora which do not have support in the dedicated decoder hardware anyway. I think VP6 (Flash) falls into this category as well, but I'm not 100% sure.
Regardless, this will be a nice thing to have: a fairly general-purpose, block-based graphics accelerator. I really hope the work they do is general enough that it can be applied to other DCT-based codecs.
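For anyone curious what the block-transform part actually looks like, here's a rough CPU-side sketch of the 8x8 inverse DCT used by MPEG-2/Theora-class codecs (H.264 itself uses a related 4x4 integer transform), written as two separable 1-D passes. Each pass is exactly the kind of per-block, data-parallel work that maps onto a shader pass. The helper names are made up for illustration, not taken from any existing decoder.

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N 8

/* 1-D 8-point inverse DCT, floating-point reference version. */
static void idct_1d(const double in[N], double out[N])
{
    for (int x = 0; x < N; x++) {
        double sum = 0.0;
        for (int u = 0; u < N; u++) {
            double cu = (u == 0) ? sqrt(0.5) : 1.0;
            sum += cu * in[u] * cos((2 * x + 1) * u * M_PI / (2.0 * N));
        }
        out[x] = 0.5 * sum;
    }
}

/* 2-D 8x8 inverse DCT as two separable passes: rows, then columns.
 * On a GPU each pass becomes one shader pass over all the blocks. */
static void idct_8x8(double block[N][N])
{
    double tmp[N][N];

    for (int i = 0; i < N; i++)          /* pass 1: rows */
        idct_1d(block[i], tmp[i]);

    for (int j = 0; j < N; j++) {        /* pass 2: columns */
        double col[N], res[N];
        for (int i = 0; i < N; i++)
            col[i] = tmp[i][j];
        idct_1d(col, res);
        for (int i = 0; i < N; i++)
            block[i][j] = res[i];
    }
}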
Best/Liam
-
I really believe that the "missing link" so far has been someone grafting libavcodec onto the driver stack so that processing can be incrementally moved from CPU to GPU.
Also, I forgot to mention that the other benefit of a shader-based implementation is that there are a lot of cards in use today which have a fair amount of shader power but which do not have dedicated decoder HW (ATI 5xx, for example).
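To make the "graft libavcodec onto the driver stack" idea a bit more concrete, here's a minimal sketch of where the split would live: libavcodec keeps doing the bitstream work on the CPU and a hardware/shader backend gets selected through its get_format hook. This uses the modern FFmpeg avcodec API (which postdates this thread), VDPAU is just one example surface format, and the shader backend itself is not shown.

#include <libavcodec/avcodec.h>

/* Called by libavcodec with the list of pixel formats it can produce;
 * preferring a hardware surface format here is how the GPU backend
 * gets plugged in, with plain software decode as the fallback. */
static enum AVPixelFormat pick_hw_format(AVCodecContext *ctx,
                                         const enum AVPixelFormat *fmts)
{
    for (const enum AVPixelFormat *p = fmts; *p != AV_PIX_FMT_NONE; p++)
        if (*p == AV_PIX_FMT_VDPAU)
            return *p;
    return fmts[0];
}

int open_h264_decoder(AVCodecContext **out)
{
    const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    AVCodecContext *ctx;

    if (!codec)
        return -1;
    ctx = avcodec_alloc_context3(codec);
    if (!ctx)
        return -1;
    ctx->get_format = pick_hw_format;   /* where the shader/HW path hooks in */
    if (avcodec_open2(ctx, codec, NULL) < 0) {
        avcodec_free_context(&ctx);
        return -1;
    }
    *out = ctx;
    return 0;
}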
-
I actually think that this shader-based approach is much more important than using the dedicated video decoder, specifically because it IS compatible with cards without a dedicated video decoder AND because it is applicable beyond the capabilities of the dedicated video decoder.
In addition, there won't be any need to deal with IP issues in the event that the video decoder components on future cards are just totally incompatible with older versions... so one less item of critical importance when dealing with new hardware support.
Here's a question: the SoC website says that CABAC is not suitable for GPU acceleration, and Wikipedia says that CABAC is horribly CPU intensive. Does this mean that high-bitrate videos that use CABAC will be beyond our decoding capability on lower-end machines? Any idea what proportion of the overall video decoding process (in a typical software decoder) would be related to CABAC? Is it going to be something manageable, like 5%, or something overwhelming, like 95%?
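From what I understand, the reason CABAC is a poor fit for GPUs is the bin-to-bin dependency. Here's a toy binary arithmetic decoder in the same spirit (not the real H.264 context tables or state machine, just an illustration): every decoded bin updates the range/offset and the context's probability state, so bin N cannot start before bin N-1 finishes, which is exactly the kind of serial loop shaders are bad at.

#include <stddef.h>
#include <stdint.h>

struct ctx_model { uint8_t prob; };     /* P(bin == 0), scaled to 0..255 */

struct arith_dec {
    const uint8_t *buf;
    size_t pos, size;
    uint32_t range, offset;
};

/* Decode one binary symbol; note that range, offset and the context's
 * probability all change before the next bin can be decoded. */
static int decode_bin(struct arith_dec *d, struct ctx_model *cm)
{
    uint32_t split = (d->range * cm->prob) >> 8;
    int bin;

    if (d->offset < split) {            /* "0" branch */
        bin = 0;
        d->range = split;
        cm->prob += (255 - cm->prob) >> 5;   /* adapt toward 0 */
    } else {                            /* "1" branch */
        bin = 1;
        d->offset -= split;
        d->range -= split;
        cm->prob -= cm->prob >> 5;           /* adapt toward 1 */
    }

    while (d->range < 0x1000000) {      /* renormalize: pull in new bits */
        d->range <<= 8;
        d->offset = (d->offset << 8) |
                    (d->pos < d->size ? d->buf[d->pos++] : 0);
    }
    return bin;
}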
-
Originally posted by bridgman:
Opening up VP7 would sure solve a lot of problems.