Daala: A Next-Generation Video Codec From Xiph


  • #11
    Originally posted by scionicspectre View Post
    Doesn't hurt to have an option. I don't think Xiph has ever expected devices to quickly adopt any of their new technology. It'll come around eventually, so long as it's truly an improvement. There's more open software in consumers' homes today than ever before, so there's a chance it will make in-roads someday. Aside from that, it could be interesting to experiment with for the potential applications, so I'd say it's worth the effort.
    The best thing about this is that if the underlying technology behind this ever takes off there will be a significant base of prior art to invalidate all the inevitable patents people will try to claim around it. You can't get them all, but at least there should be a basis for something decent without them.



    • #12
      I hope this ends better than Theora did.



      • #13
        Originally posted by silix View Post
        the only thing I'm worried about is what kind of acceptance this can hope for, given that:
        - it's not an industry-wide standard (as in, embraced by software AND appliance vendors)
        - it's based on different techniques, so it doesn't rely on the same processing "blocks" (e.g. DCT) that chips with video decoding capabilities can usually handle
        - desktop computing is on the decline, more and more replaced by portable, small-device computing - but hardware-based video decoding (offloading) matters on those devices...
        Xiph actually has a pretty good track record of industry acceptance. Vorbis audio was quickly adopted by chip manufacturers and was on every cheap (generic Chinese) MP3 player back in the day. Opus is expected to be the next standard for audio, and it's seeing immediate adoption.
        This new thing coming from Xiph + Mozilla + independent developers (up to the level of Jason Garrett-Glaser, a.k.a. Dark Shikari of x264 fame) has to happen. I think it has all the odds in its favor and a great team. I have a lot of respect for Monty. If he says it's gonna happen and already has a proof-of-concept implementation, then it's gonna happen.



        • #14
          Originally posted by jntesteves View Post
          Xiph actually has a pretty good track record of industry acceptance. Vorbis audio was quickly adopted by chip manufacturers and was on every cheap (generic Chinese) MP3 player back in the day. Opus is expected to be the next standard for audio, and it's seeing immediate adoption.
          This new thing coming from Xiph + Mozilla + independent developers (up to the level of Jason Garrett-Glaser, a.k.a. Dark Shikari of x264 fame) has to happen. I think it has all the odds in its favor and a great team. I have a lot of respect for Monty. If he says it's gonna happen and already has a proof-of-concept implementation, then it's gonna happen.
          If Jason (and hopefully other x264 devs) gets involved, then I have even more faith in this project.



          • #15
            Originally posted by plonoma View Post
            @silix
            About hardware acceleration mattering on mobile:
            most high-end and mid-range mobile GPUs are starting to support OpenCL,
            which could provide a good basis for video decoding,
            allowing codecs like Daala to be decoded fast enough for smooth playback without having to add extra hardware.
            There has been a lot of talk about this idea over the years, but the problem is that it has NEVER been pushed past PARTIAL and/or THEORETICAL implementations. There was some partial GPU assistance on some older video cards, but all in all, video decoding has always been done either in software or on dedicated hardware.

            Now, that being said, this sideways transition to different techniques may be more suitable for general-purpose OpenCL acceleration. Of course, that's at the expense of the massive power consumption typical of all GPUs.
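
As an aside on why GPU offload keeps coming up for some decoder stages but not others: in block-based codecs the inverse transform is independent per block, so it parallelizes trivially. A minimal sketch in Python (the 4x4 butterfly below follows the H.264-style integer inverse transform; the block data is made up for illustration):

```python
def inverse_transform_4x4(blk):
    """H.264-style 4x4 integer inverse transform (butterfly form)."""
    def butterfly(v):
        a, b, c, d = v
        e0, e1 = a + c, a - c
        e2, e3 = (b >> 1) - d, b + (d >> 1)
        return [e0 + e3, e1 + e2, e1 - e2, e0 - e3]

    # Transform columns first, then rows.
    cols = [butterfly([blk[r][c] for r in range(4)]) for c in range(4)]
    return [butterfly([cols[c][r] for c in range(4)]) for r in range(4)]

# A DC-only block comes out flat: every output sample equals the DC term.
dc_block = [[4, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
print(inverse_transform_4x4(dc_block))  # [[4, 4, 4, 4], [4, 4, 4, 4], ...]

# No block depends on any other, so this map() could run one GPU
# work-item per block.
blocks = [dc_block for _ in range(8)]
decoded = list(map(inverse_transform_4x4, blocks))
```

The bitstream parsing that produces those coefficient blocks, by contrast, is serial, which is the sticking point discussed below.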



            • #16
              @droidhacker
              Newer graphics cards can do all the heavy lifting.

              OpenCL on the GPU is somewhere in between:
              GPUs are made for graphical work and are also more efficient than a CPU when used to decode video.
              Not as efficient as an ASIC, but much more efficient than the CPU.

              The implementation of GPU encoders and decoders is advancing.
              There is a big effort to do more with the GPU nowadays.
              Seen the release notes for recent Adobe products? Lots of stuff has moved to the GPU.



              • #17
                Originally posted by plonoma View Post
                @droidhacker
                Newer graphics cards can do all the heavy lifting.

                OpenCL on the GPU is somewhere in between:
                GPUs are made for graphical work and are also more efficient than a CPU when used to decode video.
                Not as efficient as an ASIC, but much more efficient than the CPU.

                The implementation of GPU encoders and decoders is advancing.
                There is a big effort to do more with the GPU nowadays.
                Seen the release notes for recent Adobe products? Lots of stuff has moved to the GPU.
                Even better, you can use OpenGL 4.3 compute shaders to do all the decoding without any of the painful memcpys that plague OpenCL.



                • #18
                  Originally posted by droidhacker View Post
                  There has been a lot of talk about this idea over the years, but the problem is that it has NEVER been pushed past PARTIAL and/or THEORETICAL implementations. There was some partial GPU assistance on some older video cards, but all in all, video decoding has always been done either in software or on dedicated hardware.

                  Now, that being said, this sideways transition to different techniques may be more suitable for general-purpose OpenCL acceleration. Of course, that's at the expense of the massive power consumption typical of all GPUs.
                  Look at the various DXVA2 levels (notice AT tests QuickSync separately, so DXVA2 isn't using the Intel-provided hardware decoding) and madVR (the original madVR release seems like it was mostly like XVideo, but it seems to offer far more now). Not for Linux, but apparently tremendously efficient.


                  With OpenCL you should be able to do similar things on Linux, I'd imagine, but it just hasn't been done because there hasn't been sufficient interest from the right people.



                  • #19
                    Originally posted by liam View Post
                    Look at the various DXVA2 levels (notice AT tests QuickSync separately, so DXVA2 isn't using the Intel-provided hardware decoding) and madVR (the original madVR release seems like it was mostly like XVideo, but it seems to offer far more now). Not for Linux, but apparently tremendously efficient.


                    With OpenCL you should be able to do similar things on Linux, I'd imagine, but it just hasn't been done because there hasn't been sufficient interest from the right people.
                    DXVA is the MS equivalent of VDPAU or VAAPI. It's not shader-based decoding, beyond the standard post-processing effects.

                    GPU hardware is not friendly to H.264 decoding, no matter what kind of API (OpenCL or otherwise) you use.



                    • #20
                      Originally posted by smitty3268 View Post
                      DXVA is the MS equivalent of VDPAU or VAAPI. It's not shader-based decoding, beyond the standard post-processing effects.

                      GPU hardware is not friendly to H.264 decoding, no matter what kind of API (OpenCL or otherwise) you use.
                      Something doesn't make sense. According to the link, they were using DXVA2 (with two different rendering options) on Haswell. Three test variations were run: two with DXVA2 and one using QuickSync. Since QuickSync is how you accelerate video on Intel, what was DXVA2 using when it wasn't using QuickSync?

                      That link says it can use off-host acceleration of certain parts of a codec, implying that it will accelerate what it can. So it has various entry points, similar to VDPAU/VAAPI, as you say. So you can use DXVA without targeting dedicated decode hardware. Moreover, from what Bridgman has said, and from processing pipelines I've seen, it seems like the only part of the decoding that can't be handled well on the GPU is the entropy coding (which can be expensive, admittedly). That is what seems to be done in the AT article.
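
On the entropy-coding caveat above: variable-length and arithmetic codes are decoded strictly in sequence, because the bit position of symbol n+1 is only known after symbol n has been decoded. A toy sketch in Python using unsigned Exp-Golomb codes (the kind H.264 uses for header syntax; the bit-string representation here is purely illustrative, and this is not Daala's or CABAC's actual coder):

```python
def encode_ue(v):
    """Unsigned Exp-Golomb: v+1 in binary, prefixed by len-1 zeros."""
    x = format(v + 1, "b")
    return "0" * (len(x) - 1) + x

def decode_ue(bits, pos):
    """Decode one symbol starting at bit index pos.
    Returns (value, new_pos): new_pos depends on the decoded length,
    which is the serial dependency that resists GPU parallelism."""
    zeros = 0
    while bits[pos + zeros] == "0":
        zeros += 1
    info = bits[pos + zeros : pos + 2 * zeros + 1]  # '1' plus zeros info bits
    return int(info, 2) - 1, pos + 2 * zeros + 1

stream = "".join(encode_ue(v) for v in [0, 5, 2, 17])
pos, out = 0, []
while pos < len(stream):
    v, pos = decode_ue(stream, pos)  # must finish symbol n to start n+1
    out.append(v)
print(out)  # [0, 5, 2, 17]
```

This is why decoders typically do bitstream parsing on the CPU (or a dedicated block) and hand the resulting coefficients to parallel hardware for the later stages.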

                      I'd never heard of dxva prior to that article so bear with me if I misunderstand.
                      Last edited by liam; 26 June 2013, 02:59 AM.

