VP6 is the most widespread, so that one too.
X.Org SoC: Gallium3D H.264, OpenGL 3.2, GNU/Hurd
-
I wasn't aware of that.
Originally posted by bridgman:
I really believe that the "missing link" so far has been someone grafting libavcodec onto the driver stack so that processing can be incrementally moved from CPU to GPU.
I forgot to mention the other benefit of a shader-based implementation: there are a lot of cards in use today which have a fair amount of shader power but no dedicated decoder hardware (ATI 5xx, for example).
-
The work on top of Gallium3D was with MPEG2, using the XvMC API, mostly by Younes Manton on Nouveau:
(link: Younes Manton's blog at BitBlit.Org)
Cooper then got a good chunk of that code running on the r300g ATI driver before getting dragged off to other projects.
Somewhere in there a video API was defined and at least partially implemented; I'm not exactly sure who did what there.
I don't think we have any good power-efficiency numbers yet on whether the CPU or GPU shaders do the offloadable work more efficiently. The first priority was offloading enough work to the GPU that the remainder could be handled by a single CPU thread: the multithreaded versions of the CPU codecs weren't very mature, and without the ability to use multiple CPU cores, anything near 100% of a single core meant frame dropping and other yukkies.
Since then, multithreaded decoders seem to have become more stable (at least more people seem to be using them), so the pull for GPU decoding has dropped somewhat. I don't know the status of the MT codecs right now, i.e. whether they are easily accessible to all users or whether they still need a skilled user to build and tweak them.