DAV1D vs. LIBGAV1 Performance - Benchmarking Google's New AV1 Video Decoder


  • brent
    replied
Competition is good, and I really like that AV1 has a much broader and more competitive software landscape than VP9 did. However, libgav1 is so far behind that you have to wonder, "what's the point?" Maybe it wasn't such a good idea to release it to the public this early.



  • edwaleni
    replied
This is just Google's first public release. I wouldn't get all stirred up until they start putting CPU-specific enhancements in it (AVX, VSX, etc.). Until then it's a WIP.



  • tildearrow
    replied
Originally posted by uid313 View Post
I would love to have seen rav1e included in the benchmark, it is written in Rust, so it should be a safer way to decode AV1. You wouldn't want the decoder to be vulnerable when parsing arbitrary files from unknown, untrusted sources.
rav1e is an encoder.

And even if it were a decoder, I'd rather be able to watch 4K60 AV1 video at full speed than in slow motion.
    Last edited by tildearrow; 06 October 2019, 12:42 PM.



  • _Alex_
    replied
Seeing the speed difference between the 3600 and the 3900, it's like there is zero gain from the extra threads and just a slight bump from the frequency increase. And with the 9900K on top, this smells like... single-threaded performance.



  • Michael
    replied
    Originally posted by uid313 View Post
I would love to have seen rav1e included in the benchmark, it is written in Rust, so it should be a safer way to decode AV1. You wouldn't want the decoder to be vulnerable when parsing arbitrary files from unknown, untrusted sources.
At last check, rav1e wasn't even multi-threaded. And it's an encoder, whereas these dav1d/libgav1 benchmarks are about DEcoding.



  • uid313
    replied
I would love to have seen rav1e included in the benchmark, it is written in Rust, so it should be a safer way to decode AV1. You wouldn't want the decoder to be vulnerable when parsing arbitrary files from unknown, untrusted sources.



  • latalante
    replied
    Originally posted by bug77 View Post
    I opened this thinking "who cares about decoders, encoders is where it's at".
I care a lot. I hope I live to see the day when encoders are free from patents and license fees.

The most interested parties are those under the MPEG LA banner (Cisco, Apple, M$), still doing everything they can to avoid losing the royalty income from H.264 and especially H.265 (collected from every smartphone, multimedia device, camera, etc.).

That's why there is so much confusion here around every article about AV1.



  • pal666
    replied
    Originally posted by bug77 View Post
    I opened this thinking "who cares about decoders, encoders is where it's at".
i never encounter an encoder while browsing youtube



  • pal666
    replied
    Originally posted by sturmen View Post
I, for one, welcome increased "competition" in the space since I think different projects using different approaches can challenge as well as teach. I wonder what inspired Google to invest engineers' time into this. At first I thought it may be licensing, but dav1d is BSD licensed (not copyleft, business-friendly). Maybe Google just has engineers who think they can do better than dav1d, and what we've seen so far is all foundational.
chrome just forks every dependency. here they decided to skip that step and just write it from scratch.



  • coder
    replied
    Originally posted by archsway View Post
    Focused on Android, eh?

    Is that because they know they'll never compete with dav1d anywhere else?

    On an armv7 box, I got
Neither of those is nearly fast enough for targeting ARMv7. I'd hazard a guess that they don't plan to support AV1 on it.

Anyway, thanks for the data. Now, it would be nice to see it on ARMv8 (anybody?). Too bad we can't use the Pi 4 (at least not on Raspbian, which still runs in ARMv7 mode).

    Originally posted by archsway View Post
    Neither decoder pegged the CPU at 100%, though I only have 4 cores, so could they be memory bound or something?
Even a memory-bound load will show up as utilizing 100% of a core in a utility like top. That's just showing the duty cycle from the kernel's perspective, and the kernel only cares about scheduling tasks. At that granularity, tasks are only "blocked" on things like I/O and synchronization with other tasks. You would have to use more specialized code-optimization tools to find out how much time the cores spend stalled on memory accesses.

So, either it's blocked on I/O (file, network, devices, etc.), the clock (probably not, in this case), or synchronization primitives (mutexes, condition variables, and the like). My guess is the latter, which comes down to how well-threaded the decoder is. It might be that the codec just has too many serial dependencies and isn't very amenable to threading, or simply that they still have a lot of work to do in that area.
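The distinction being drawn here — a memory-stalled core still reads as 100% busy, while a task blocked on I/O or a lock reads as idle — can be sketched with a small Python example (purely illustrative, not from the article) that compares CPU time to wall-clock time, which is essentially the duty-cycle number top reports:

```python
import time

def duty_cycle(fn):
    """Rough CPU duty cycle: CPU seconds consumed per wall-clock second."""
    w0, c0 = time.perf_counter(), time.process_time()
    fn()
    w1, c1 = time.perf_counter(), time.process_time()
    return (c1 - c0) / (w1 - w0)

buf = list(range(2_000_000))

def busy():
    # Touches lots of memory but never blocks: even if the core is stalled
    # waiting on RAM, the kernel counts every cycle as busy (top shows ~100%).
    for _ in range(10):
        sum(buf)

def blocked():
    # Blocked on the clock: the kernel deschedules the task entirely,
    # so top shows it as (nearly) idle.
    time.sleep(0.2)

print(f"busy loop duty cycle:    {duty_cycle(busy):.2f}")    # close to 1.0
print(f"blocked wait duty cycle: {duty_cycle(blocked):.2f}")  # close to 0.0
```

Finding time actually stalled on memory (as opposed to blocked) requires hardware-counter tools like perf, since the scheduler can't see stalls.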

