X.Org SoC: Gallium3D H.264, OpenGL 3.2, GNU/Hurd
OK, I guess it might be interesting to see if anyone is actively profiling that code to see where the CPU time is going, and how much of that time is spent on "shader-friendly" tasks like motion compensation (MC) and filtering.
-
Originally posted by bridgman: "The work on Gallium3D was with MPEG2, using the XvMC API, mostly by Younes Manton on Nouveau [...]"
-
The work on Gallium3D was with MPEG2, using the XvMC API, mostly by Younes Manton on Nouveau (his blog, BitBlit.Org, covers this work).
Cooper then got a good chunk of that code running on the r300g ATI driver before getting dragged off to other projects.
Somewhere in there a video API was defined and at least partially implemented; I'm not exactly sure who did what there.
I don't think we have any good power-efficiency numbers yet on whether the CPU or GPU shaders do the offloadable work more efficiently. The first priority was offloading enough work to the GPU that the remainder could be handled by a single CPU thread: the multithreaded (MT) versions of the CPU codecs weren't very mature, and without the ability to use multiple CPU cores, anything near 100% of a single core meant frame dropping and other yuckiness.
Since then, multithreaded decoders seem to have become more stable (at least more people seem to be using them), so the pull for GPU decoding has dropped somewhat. I don't know the status of the MT codecs right now, i.e. whether they are easily accessible to all users or whether they still need a skilled user to build and tweak them.
-
I wasn't aware of that.
Originally posted by bridgman: "I really believe that the 'missing link' so far has been someone grafting libavcodec onto the driver stack so that processing can be incrementally moved from CPU to GPU."
Also, I forgot to mention the other benefit of a shader-based implementation: there are a lot of cards in use today which have a fair amount of shader power but no dedicated decoder hardware (ATI 5xx, for example).
-
Guys, isn't it VP8 we're hoping to open up? VP7 wouldn't be as useful.
-
Originally posted by bridgman: "Opening up VP7 would sure solve a lot of problems."
-
It's still On2, isn't it? AFAIK the company was purchased by Google but is still operating as an independent entity so far...
Opening up VP7 would sure solve a lot of problems.
-
@bridgman
Now that the company belongs to Google, I think it is just a matter of time until Google releases VP7 as open source and uses it for YouTube. Currently they don't have to pay for H.264 content, but I think they want to be prepared to switch.