DAV1D vs. LIBGAV1 Performance - Benchmarking Google's New AV1 Video Decoder

  • #21
    Originally posted by geearf View Post
    What did you expect it to bring to the table this early? Even more so when it says it's currently optimized only for Android, and it's not being tested that way...
    Benchmarking is what Phoronix does, and it's what a lot of us want. Even if the result is poor, it just sets up Google to have a lot of "wins", with successive rounds of optimization. Then, we can all be like "hey, remember when that thing first launched and we all laughed and kicked dirt at it?"

    IMO, there's nothing bad about testing it out, so long as we all know that it's new and presumably quite immature.



    • #22
      Originally posted by NateHubbard View Post

      I guess I wouldn't know unless someone tested it. Since Michael did, now I don't have to.
      How would you not know what you expected?
      Also, how does this current comparison help you foresee the state of these two in a few years, or maybe just months, when gav1 is more mature?

      Originally posted by coder View Post
      Benchmarking is what Phoronix does, and it's what a lot of us want. Even if the result is poor, it just sets up Google to have a lot of "wins", with successive rounds of optimization. Then, we can all be like "hey, remember when that thing first launched and we all laughed and kicked dirt at it?"

      IMO, there's nothing bad about testing it out, so long as we all know that it's new and presumably quite immature.
      I agree; I'm not blaming Phoronix for doing its job, and it should do what people want/expect.
      But I do think there is something wrong with people using this (very) early (and wrong-platform) comparison to predict the usability of, or need for, the software.



      • #23
        Originally posted by geearf View Post

        How would you not know what you expected?
        Also, how does this current comparison help you foresee the state of these two in a few years, or maybe just months, when gav1 is more mature?
        Obviously I knew it was an AV1 decoder. Look, you asked what I expected it to bring to the table. I answered that I didn't really know, so I read the article because I wanted to find out. That's reasonable.

        You're just going on and on and I don't actually care anymore.
        Please stop quoting me. This is dumb.



        • #24
          As you wish.



          • #25
            Focused on Android, eh?

            Is that because they know they'll never compete with dav1d anywhere else?

            On an armv7 box, I got:

            dav1d: 18 fps
            gav1: 7 fps

            for "Summer Nature 1080p", which is much closer than the amd64 benchmarks given here - and I had to disable NEON for libgav1, as it wouldn't compile with it enabled (did they only test on ARMv8?).

            Neither decoder pegged the CPU at 100%, though I only have 4 cores, so could they be memory bound or something?



            • #26
              Originally posted by coder View Post
              The output is fully specified by the standard. There should be no difference. Any changes introduced during decoding will accumulate and lead to picture corruption.

              That said, of course you can do post-processing. However, I don't expect that should be built into the decoder libraries - and certainly not enabled by default.



              It wouldn't hurt, though all you'd be doing is basically checking for bugs.

              Now, encoding quality is a different matter, entirely.
              From skimming through the code, I saw functions for Edge Detection, Convolution and Film Grain, which all sound to me like post-processing.
              Hence, we need to make sure both decoders are doing the same thing when comparing them.
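              Film grain synthesis is the easiest of those to control from outside: dav1d's command-line tool exposes a switch for it (assuming your build is recent enough to have the `--filmgrain` option; the file name here is illustrative), so you can at least measure that one filter's cost directly:

              ```shell
              # Decode the same clip twice, once with film grain synthesis applied
              # and once with it skipped, discarding output so only decode time counts.
              dav1d -i summer_nature_1080p.ivf -o /dev/null --filmgrain 1
              dav1d -i summer_nature_1080p.ivf -o /dev/null --filmgrain 0
              ```

              If the two runs differ a lot on grain-heavy content, any comparison against libgav1 needs to pin that setting the same way on both sides.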



              • #27
                Originally posted by Royi View Post
                From skimming through the code, I saw functions for Edge Detection, Convolution and Film Grain, which all sound to me like post-processing.
                Hence, we need to make sure both decoders are doing the same thing when comparing them.
                Oh wow, disabling the filters made it almost twice as fast for me, with [almost] no visible changes for the "Summer Nature 1080p" clip!

                That's without any vectorisation code compiled in, as it caused compilation errors for me - the impact will be different when SSE is enabled.



                • #28
                  Originally posted by archsway View Post
                  Focused on Android, eh?

                  Is that because they know they'll never compete with dav1d anywhere else?

                  On an armv7 box, I got
                  Neither of those is nearly good enough for targeting ARMv7. I'd hazard a guess that they don't plan to support AV1 on it.

                  Anyway, thanks for the data. Now, it would be nice to see it on ARMv8 (anybody?). Too bad we can't use Pi v4 (at least, not on Raspbian, which still runs in ARMv7 mode).

                  Originally posted by archsway View Post
                  Neither decoder pegged the CPU at 100%, though I only have 4 cores, so could they be memory bound or something?
                  Even a memory-bound load will show up as utilizing 100% of a core in a utility like top. That's just showing the duty cycle from the kernel's perspective, which only cares about scheduling tasks. At that granularity, tasks are only blocked on things like I/O and synchronization with other tasks. You would have to use some more specialized code-optimization tools to find out how much time the cores spend stalled on memory accesses.

                  So, either it's blocked on I/O (like file, network, devices, etc.), the clock (but probably not, in this case), or synchronization primitives (like mutexes, condition variables, and the like). My guess is the latter case, which can result from how well-threaded the decoder is. It might be that the codec just has too many serial dependencies and isn't very amenable to threading, or just that they still have a lot of work to do in that area.
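                  If someone wants to test that guess, perf can separate the two cases (assuming Linux perf is installed and the stall events are supported on your CPU; the input file name is illustrative):

                  ```shell
                  # High stalled-cycles-backend relative to cycles suggests the cores
                  # are waiting on memory; low task-clock overall (cores mostly idle)
                  # with few stalls points instead at blocking on I/O or on
                  # synchronization primitives.
                  perf stat -e task-clock,cycles,stalled-cycles-backend \
                      ./dav1d -i summer_nature_1080p.ivf -o /dev/null
                  ```

                  An off-CPU profile (e.g. via `perf sched`) would then show which mutexes or waits the threads are actually parked on.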



                  • #29
                    Originally posted by sturmen View Post
                    I, for one, welcome increased "competition" in the space, since I think different projects using different approaches can challenge as well as teach. I wonder what inspired Google to invest engineers' time into this. At first I thought it might be licensing, but dav1d is BSD licensed (not copyleft, business-friendly). Maybe Google just has engineers who think they can do better than dav1d, and what we've seen so far is all foundational.
                    Chrome just forks every dependency. Here they decided to skip that step and just write it from scratch.



                    • #30
                      Originally posted by bug77 View Post
                      I opened this thinking "who cares about decoders, encoders is where it's at".
                      I never encounter an encoder while browsing YouTube.

