Daala: A Next-Generation Video Codec From Xiph
-
Originally posted by silix: The only thing I'm worried about is what kind of acceptance this can hope for, given that:
- it's not an industry-wide standard (as in, embraced by software AND appliance vendors)
- it's based on different techniques, so it doesn't rely on the same processing blocks (e.g. the DCT) that chips with video-decoding capabilities can usually handle
- desktop computing is on the decline, increasingly replaced by portable, small-device computing, and hardware-based video decoding (offloading) matters on those devices...
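silix's point about shared processing blocks can be made concrete. As a hedged illustration (not Daala or codec source code; the function and the 8-sample block are my own), here is the naive 1-D DCT-II that classic block-based codecs build on and that fixed-function decode chips implement in silicon; Daala's lapped transforms deviate from exactly this kind of block, which is what complicates hardware reuse:

```python
# Illustrative only: the DCT-II used (in integer-approximated form)
# by codecs like MPEG-2 and H.264, and baked into decode ASICs.
import math

def dct_ii(x):
    """Naive O(n^2) DCT-II of a sequence x (no normalization)."""
    n = len(x)
    return [sum(x[j] * math.cos(math.pi * k * (2 * j + 1) / (2 * n))
                for j in range(n))
            for k in range(n)]

block = [16] * 8          # a flat 8-sample block
coeffs = dct_ii(block)
# A flat block concentrates all its energy in the DC coefficient:
print(round(coeffs[0], 6))     # 128.0
print(f"{abs(coeffs[1]):.6f}") # 0.000000 (AC terms vanish)
```

Each 8-point block is independent of its neighbors, which is also why this stage parallelizes well; the hardware-compatibility issue silix raises is about the transform itself changing, not its parallelism.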
This new thing coming from Xiph + Mozilla + independent developers (of the caliber of Jason Garrett-Glaser, aka Dark Shikari of x264 fame) has to happen. I think it has all the odds in its favor and a great team. I have a lot of respect for Monty. If he says it's going to happen and already has a proof-of-concept implementation, then it's going to happen.
Comment
-
Originally posted by jntesteves: Xiph actually has a pretty good track record of industry acceptance. Vorbis audio was quickly adopted by chip manufacturers and was on pretty much any cheap (generic Chinese) MP3 player back in the day. Opus is expected to be the next standard for audio, and it's already seeing immediate adoption.
Comment
-
Originally posted by plonoma: @silix
About hardware acceleration mattering on mobile: most high-end and mid-range mobile GPUs are starting to support OpenCL, which could provide a good basis for video decoding, allowing something like Daala to run well enough (decoding fast enough for fluid playback) without having to add extra hardware.
Comment
-
Originally posted by plonoma: @droidhacker
Newer graphics cards can do much of the heavy lifting. OpenCL on the GPU is somewhere in between: GPUs are made for graphics work and are also more efficient than the CPU when used to decode video. Not as efficient as an ASIC, but much more efficient than the CPU. GPU encoder and decoder implementations are advancing, and there is a big effort to do more on the GPU nowadays. Seen the release notes for recent Adobe products? Lots of functionality has moved to the GPU.
Comment
-
Originally posted by droidhacker: There has been a lot of talk about this idea over the years, but the problem is that it has NEVER been pushed past the PARTIAL and/or THEORETICAL stage. There was some partial GPU assistance on some older video cards, but all in all, video decoding has always been done either in software or on dedicated hardware.
Now, that being said, this new side-transition may be more suitable for general-purpose OpenCL acceleration. Of course, that comes at the expense of the high power consumption typical of GPUs.
With OpenCL you should be able to do similar things on Linux, I'd imagine; it just hasn't been done because there hasn't been sufficient interest from the right people.
Comment
-
Originally posted by liam: Look at the various DXVA2 levels (notice AT tests QuickSync separately, so DXVA2 isn't using the Intel-provided hardware decoding) and madVR (the original madVR release seems like it was mostly like XVideo, but it appears to offer far more now). Not for Linux, but apparently tremendously efficient.
GPU hardware is not friendly to H.264 decoding, no matter what kind of API (OpenCL or otherwise) you use.
Comment
-
Originally posted by smitty3268: DXVA is the MS equivalent of VDPAU or VAAPI. It's not shader-based decoding, beyond the standard post-processing effects.
That link says it can use off-host acceleration of certain parts of a codec, implying that it will accelerate what it can. So it has various entry points, similar to VDPAU/VAAPI, as you say. So you can use DXVA without targeting dedicated decode hardware. Moreover, from what Bridgman has said, and from the processing pipelines I've seen, it seems like the only part of decoding that can't be handled well on the GPU is the entropy coding (which can be costly, admittedly). That is what seems to be being done in the AT article.
I'd never heard of DXVA prior to that article, so bear with me if I misunderstand.
Last edited by liam; 26 June 2013, 02:59 AM.
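liam's point that entropy coding is the GPU-unfriendly stage can be illustrated with a toy sketch. This is plain Exp-Golomb coding (which H.264 does use for some syntax elements), not the CABAC actually at issue, but it shows the same essential property: symbol k's position in the bitstream depends on the decoded lengths of symbols 0..k-1, so the decode loop carries state and cannot be split across GPU threads the way independent transform blocks can. The function names and test values are my own illustration:

```python
def exp_golomb_encode(values):
    """Encode non-negative ints as Exp-Golomb codes into a bit string."""
    out = []
    for v in values:
        code = bin(v + 1)[2:]                    # binary of v+1
        out.append("0" * (len(code) - 1) + code)  # zero prefix + code
    return "".join(out)

def exp_golomb_decode(bits):
    """Decode a bit string; the cursor `i` is loop-carried state:
    symbol k cannot be located without decoding symbols 0..k-1."""
    i, out = 0, []
    while i < len(bits):
        zeros = 0
        while bits[i] == "0":   # count the zero prefix
            zeros += 1
            i += 1
        out.append(int(bits[i:i + zeros + 1], 2) - 1)
        i += zeros + 1
    return out

data = [0, 3, 7, 1]
bits = exp_golomb_encode(data)
print(bits)                             # 1001000001000010
print(exp_golomb_decode(bits) == data)  # True
```

The variable-length prefix is what creates the serial dependency; CABAC adds adaptive probabilities on top, making the per-bit dependency chain even tighter, which matches liam's observation that entropy decoding stays off the shaders.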
Comment