DAV1D vs. LIBGAV1 Performance - Benchmarking Google's New AV1 Video Decoder


  • archsway
    replied
    Originally posted by Royi View Post
    From skimming through the code I saw functions for Edge Detection, Convolution and Film Grain, which all sound to me like Post Processing.
    Hence, we need to make sure both decoders are indeed doing the same thing when comparing them.
    Oh wow, disabling the filters made it almost twice as fast for me, with [almost] no visible changes for the "Summer Nature 1080p" clip!

    That's without any vectorisation code compiled in, as it caused compilation errors for me - the impact will be different when SSE is enabled.
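    For reference, dav1d builds with Meson, and a no-SIMD build like the one described above can be reproduced with a configure flag. This is just a sketch; `enable_asm` is the option name in recent dav1d checkouts, so confirm it against `meson_options.txt` in your tree:

    ```sh
    # Build dav1d with the hand-written assembly (SSE/NEON) disabled,
    # leaving only the scalar C paths - useful for isolating the cost
    # of the post-processing filters themselves.
    meson setup build --buildtype release -Denable_asm=false
    ninja -C build
    ```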



  • Royi
    replied
    Originally posted by coder View Post
    The output is fully specified by the standard. There should be no difference. Any changes introduced during decoding will accumulate and lead to picture corruption.

    That said, of course you can do post-processing. However, I don't expect that should be built into the decoder libraries - and certainly not enabled by default.



    It wouldn't hurt, though all you'd be doing is basically checking for bugs.

    Now, encoding quality is a different matter, entirely.
    From skimming through the code I saw functions for Edge Detection, Convolution and Film Grain, which all sound to me like Post Processing.
    Hence, we need to make sure both decoders are indeed doing the same thing when comparing them.
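    The bit-exactness point above is checkable without eyeballing frames: dump each decoder's raw YUV output and compare per-frame checksums, and any post-processing left enabled shows up as the first mismatching frame. A minimal sketch (the helper names and the synthetic byte-string "frames" are mine, standing in for real decoder output):

    ```python
    import hashlib

    def frame_digests(frames):
        """Hash each raw YUV frame; a conforming decoder's output is
        bit-exact, so two decoders must produce identical digest lists."""
        return [hashlib.md5(f).hexdigest() for f in frames]

    def first_mismatch(frames_a, frames_b):
        """Return the index of the first differing frame, or None if the
        two decoders agree on every frame."""
        for i, (a, b) in enumerate(zip(frame_digests(frames_a),
                                       frame_digests(frames_b))):
            if a != b:
                return i
        return None

    # Stand-in for real decoder output: four frames, with one decoder
    # diverging on the last (as if a filter had been left enabled).
    ref = [bytes([i]) * 16 for i in range(4)]
    alt = ref[:3] + [b"\xff" * 16]
    print(first_mismatch(ref, ref))  # None: outputs are bit-exact
    print(first_mismatch(ref, alt))  # 3: divergence starts at frame 3
    ```
    
    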



  • archsway
    replied
    Focused on Android, eh?

    Is that because they know they'll never compete with dav1d anywhere else?

    On an armv7 box, I got

    dav1d: 18 fps
    gav1: 7 fps
    for "Summer Nature 1080p", which is a much smaller gap than in the amd64 benchmarks given here - and I had to disable NEON for libgav1, as it wouldn't compile with it enabled (did they only test on ARMv8?).

    Neither decoder pegged the CPU at 100%, though I only have 4 cores, so could they be memory bound or something?
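    Less than 100% CPU with threads enabled can also just mean a serial bottleneck (e.g. limited frame threading) rather than memory bandwidth. Either way, a tiny harness makes fps numbers easy to reproduce; this sketch uses a dummy workload where a real per-frame decode call would go:

    ```python
    import time

    def measure_fps(decode_one_frame, n_frames):
        """Time a decode loop and return frames per second.
        `decode_one_frame` stands in for the real decoder's
        per-frame call."""
        start = time.perf_counter()
        for _ in range(n_frames):
            decode_one_frame()
        elapsed = time.perf_counter() - start
        return n_frames / elapsed

    # Dummy workload standing in for a real decoder call.
    fps = measure_fps(lambda: sum(range(1000)), 200)
    print(f"{fps:.1f} fps")
    ```
    
    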



  • geearf
    replied
    As you wish.



  • NateHubbard
    replied
    Originally posted by geearf View Post

    How would you not know what you expected?
    Also, how does this current comparison help you foresee the state of these 2 in a few years, or maybe just months, when gav1 will be more ready?
    Obviously I knew it was an AV1 decoder. Look, you asked what I expected it to bring to the table. I answered that I didn't really know, so I read the article because I wanted to find out. That's reasonable.

    You're just going on and on and I don't actually care anymore.
    Please stop quoting me. This is dumb.



  • geearf
    replied
    Originally posted by NateHubbard View Post

    I guess I wouldn't know unless someone tested it. Since Michael did, now I don't have to.
    How would you not know what you expected?
    Also, how does this current comparison help you foresee the state of these 2 in a few years, or maybe just months, when gav1 will be more ready?

    Originally posted by coder View Post
    Benchmarking is what Phoronix does, and it's what a lot of us want. Even if the result is poor, it just sets up Google to have a lot of "wins", with successive rounds of optimization. Then, we can all be like "hey, remember when that thing first launched and we all laughed and kicked dirt at it?"

    IMO, there's nothing bad about testing it out, so long as we all know that it's new and presumably quite immature.
    I agree; I'm not blaming Phoronix for doing its job - it should do what people want/expect.
    But I do think there is something wrong with people using this (very) early (and on the wrong platform) comparison to predict the usability/need of the software.



  • coder
    replied
    Originally posted by geearf View Post
    What did you expect it to bring to the table so early? Even more so when it says it's currently optimized only for Android, and it's not being tested that way...
    Benchmarking is what Phoronix does, and it's what a lot of us want. Even if the result is poor, it just sets up Google to have a lot of "wins", with successive rounds of optimization. Then, we can all be like "hey, remember when that thing first launched and we all laughed and kicked dirt at it?"

    IMO, there's nothing bad about testing it out, so long as we all know that it's new and presumably quite immature.



  • NateHubbard
    replied
    Originally posted by geearf View Post
    What did you expect it to bring to the table so early? Even more so when it says it's currently optimized only for Android, and it's not being tested that way...
    I guess I wouldn't know unless someone tested it. Since Michael did, now I don't have to.



  • geearf
    replied
    Originally posted by NateHubbard View Post

    So a brand new thing is out there, and you just want to ignore it? I assume all of us, including Michael, wanted to know what it brought to the table and that's why we read this article.
    What did you expect it to bring to the table so early? Even more so when it says it's currently optimized only for Android, and it's not being tested that way...



  • pyler
    replied
    And I don't think Google compiles this new AV1 decoder with GCC... They probably have a fast path for Clang only.

