
SVT-AV1 0.5 Released As Intel's Speedy AV1 Video Encoder


  • SVT-AV1 0.5 Released As Intel's Speedy AV1 Video Encoder

    Phoronix: SVT-AV1 0.5 Released As Intel's Speedy AV1 Video Encoder

    While we have been reporting on and benchmarking the Intel SVT video encoders since February, they were only officially announced last month and this Sunday marks their first tagged release for the AV1 encoder in the form of SVT-AV1 0.5.0...

    http://www.phoronix.com/scan.php?pag...0.5.0-Released

  • Gusar
    replied
    Originally posted by sophisticles
    I really think Jason decided to become Fiona so he could go fuck herself.
    Wow. Just... wow...
    Last edited by Gusar; 05-22-2019, 04:02 PM.



  • dwagner
    replied
    Originally posted by Gusar View Post
    This, btw, is another point of bad encoder tests - too high a bitrate.
    That entirely depends on your use-case. When my use case is to archive raw video footage from my camera at reasonable cost, then encoding at bitrates high enough to not suffer from any visible quality loss is exactly what I am interested in, and comparing encoders for this scenario makes perfect sense for me.

x264, for example, is very good for that purpose, while x265 or any AV1 encoder is still way too slow for me to wait until my recording media has been backed up and is ready to re-use for the next shots. Plus, the claim that HEVC allows half the bit-rate of H.264 is clearly wrong for high-quality targets - I have tested that repeatedly myself, and I see lots of compression artifacts in HEVC encodings that are half the size of pristine-looking H.264 encodings.

    As for Netflix, I understand that their use case is very different - encoding costs are almost irrelevant for them, as encoding is done just once or a few times, while every byte saved on delivery raises their profit, as does lowering quality.



  • Gusar
    replied
    Originally posted by sophisticles View Post
    I nearly always use CRF 15, preset very slow and tune PSNR.
    At CRF 15, the bitrate will be so ridiculously high, the tuning will highly likely not matter. This, btw, is another point of bad encoder tests - too high a bitrate.

    Originally posted by sophisticles View Post
    And my "issue" with Jason/Fiona, is that he was full of crap, he lied about so many things, GPU acceleration, psychovisual optimizations, the list goes on.
GPU encoders went nowhere; what we have nowadays are dedicated encoder ASICs, so she was right about GPU acceleration. And it's only you who claims psy enhancements to be a lie - the quality of x264 clearly demonstrates otherwise. If the list goes on, please do continue it.

    Originally posted by sophisticles View Post
    He took credit for shit he didn't invent or conceive of
    Where's the proof of that? You're making bold claims here, but unless you can back them up with links to where she said these things, all you have is baseless attacks against a person, because of some personal beef.

    Originally posted by sophisticles View Post
    He has built up a cult-like following, almost becoming a folk hero by claiming to have invented and created stuff that predated his software by nearly a decade.
Unless you show proof, what you're saying here is entirely false. What she did was not invent these things, but manage a really good implementation of them, which resulted in a kick-ass encoder. There's no "cult of personality" here, just a technically extremely well implemented encoder. The only thing she claims was her idea is MB-tree: https://web.archive.org/web/20120910...cx/archives/98 <- but even here she clearly states the inspiration was observing what another encoder did and trying something similar on a different scale (not limited to I-frames like in that other encoder).

    Basically, back up your claims, otherwise all you have is personal attacks.



  • sophisticles
    replied
    Originally posted by Gusar View Post
    sophisticles Have you actually done what I wrote about, use the different tunings of x264? Because I have. Three encodes of the same video, each 2-pass target bitrate (to eliminate video size as a variable), the only difference being --tune={psnr,ssim,film}. The first one will look terrible, the second one passable, the third one will actually look good. That's not "gospel" or whatever you're on about, that's a *fact* that each person can verify for themselves.

    Or, if you don't want to use x264 because of your personal issue with Dark Shikari, use theora. The last libtheora release is 1.1, it optimizes for PSNR. Then use the git version, which optimizes for SSIM. Do an encode with each version, 2-pass target bitrate, and look at the result. The 1.1 release produces garbage, an unusable blurry mess. The git version is passable at least. And this is again something that each person can verify for themselves, this isn't me trying to preach "gospel" or some such.

PS. Dark Shikari never claimed he invented AQ. Unless you can provide a link to prove it, then it's *you* who is spreading FUD. And if you think using words like "His Darkness" and "worshipers" gives any credibility to your statements, think again.
    Go search the videohelp forums and you tell me if I have done any encoding tests.

    As for DS, he most certainly has claimed he invented AQ and he ruthlessly promoted it and tried to market it, especially to the Main Concept folks (who btw include 3 different types of AQ in their encoder).

    As for x264, I do use it once in a while and when I do I always turn off all psychovisual crap, if I wanted something to mess with my head I would still be with my ex-girlfriend. I nearly always use CRF 15, preset very slow and tune PSNR.

And my "issue" with Jason/Fiona is that he was full of crap; he lied about so many things - GPU acceleration, psychovisual optimizations, the list goes on. He took credit for shit he didn't invent or conceive of: psy optimizations were present in the DivX encoder, AQ was patented back in 1995, RDO was patented back in 1998, trellis was patented back in 1997:

    https://patents.google.com/patent/US5650860A/en

    https://patents.google.com/patent/US...n+optimization

    https://patents.google.com/patent/CA...lis&oq=trellis

    He has built up a cult-like following, almost becoming a folk hero by claiming to have invented and created stuff that predated his software by nearly a decade.



  • andreano
    replied
    Just use VMAF and pretend that it works.

    It is the best objective metric, and would be infinitely better than nothing, which is the current state of Phoronix encoder testing.
    Last edited by andreano; 05-21-2019, 12:47 PM.
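For what it's worth, a VMAF comparison along these lines can be scripted against ffmpeg's libvmaf filter. This is only a sketch: it assumes an ffmpeg build compiled with libvmaf support, and the file names are placeholders rather than anything from this thread.

```python
# Sketch: build an ffmpeg command that scores a distorted encode against
# its reference with the libvmaf filter. Requires ffmpeg compiled with
# --enable-libvmaf; "encode.mkv" and "source.y4m" are placeholder names.
DISTORTED = "encode.mkv"   # the encode under test (first input)
REFERENCE = "source.y4m"   # the pristine source (second input)

vmaf_cmd = [
    "ffmpeg", "-i", DISTORTED, "-i", REFERENCE,
    "-lavfi", "libvmaf", "-f", "null", "-",
]
print(" ".join(vmaf_cmd))
# import subprocess; subprocess.run(vmaf_cmd)  # mean VMAF is logged to stderr
```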



  • Gusar
    replied
    sophisticles Have you actually done what I wrote about, use the different tunings of x264? Because I have. Three encodes of the same video, each 2-pass target bitrate (to eliminate video size as a variable), the only difference being --tune={psnr,ssim,film}. The first one will look terrible, the second one passable, the third one will actually look good. That's not "gospel" or whatever you're on about, that's a *fact* that each person can verify for themselves.

    Or, if you don't want to use x264 because of your personal issue with Dark Shikari, use theora. The last libtheora release is 1.1, it optimizes for PSNR. Then use the git version, which optimizes for SSIM. Do an encode with each version, 2-pass target bitrate, and look at the result. The 1.1 release produces garbage, an unusable blurry mess. The git version is passable at least. And this is again something that each person can verify for themselves, this isn't me trying to preach "gospel" or some such.

PS. Dark Shikari never claimed he invented AQ. Unless you can provide a link to prove it, then it's *you* who is spreading FUD. And if you think using words like "His Darkness" and "worshipers" gives any credibility to your statements, think again.
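The experiment described above could be scripted roughly as follows. This is a sketch only: it assumes the x264 CLI is installed, and the input file name and target bitrate are placeholders.

```python
# Three 2-pass x264 encodes of the same clip at the same target bitrate,
# differing only in --tune, as described above. Input and bitrate are
# placeholders; uncomment subprocess.run to actually encode.
SOURCE = "clip.y4m"    # hypothetical source file
BITRATE = "2500"       # kbit/s, identical for all three encodes

def two_pass_cmds(tune: str) -> list[list[str]]:
    """Return the pass-1 and pass-2 x264 command lines for one tuning."""
    common = ["x264", "--bitrate", BITRATE, "--tune", tune]
    return [
        common + ["--pass", "1", "-o", "/dev/null", SOURCE],
        common + ["--pass", "2", "-o", f"out-{tune}.mkv", SOURCE],
    ]

for tune in ("psnr", "ssim", "film"):
    for cmd in two_pass_cmds(tune):
        print(" ".join(cmd))
        # import subprocess; subprocess.run(cmd, check=True)
```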



  • sophisticles
    replied
    Originally posted by Gusar View Post
    It's exactly at the doom9.org forums that people, including and especially x264 developer Dark Shikari, have been harping for years how psnr and ssim aren't a measure of video quality. So it's really funny that you link to doom9.org of all places as supposed proof that my post was "hysterical". Nothing hysterical about my post, it's exactly from doom9.org and Dark Shikari's blog that I have my knowledge from. For example, this: https://web.archive.org/web/20150119...x/archives/458
I know all about the doom9 threads; I was a member there for years before I got banned for arguing with His Darkness. Jason AKA Fiona AKA Dark Shikari has been one of the most disingenuous individuals ever. In an effort to ruthlessly promote his software encoder he spread lie after lie, and it became gospel among those who do not have the technical background to see through his BS. This was a guy who didn't even have his Comp Sci degree yet, who passed himself off as a video encoding expert and talked crap every chance he got.

When he stole AQ from other software encoders, sorry, I mean "invented" it, and it was found that in some cases it lowered PSNR and SSIM (both mathematically based engineering concepts, BTW), he started the FUD campaign of how PSNR and SSIM can't be trusted, that only "your eyes" can be trusted, and he started that garbage about how "your eye doesn't want to see a picture similar to the source but rather one that has similar complexity". Of course, he never bothered to explain who the fuck he is to tell me what my eye wants to see, or what "similar complexity" actually means (it's jargon, mish-mash that doesn't actually mean anything). He also never explained how it is that only "your eyes" can be trusted when everyone's opinion of what looks better is different, when people's eyes perceive pictures differently, and when different quality monitors, room lighting and drivers will produce different quality results.

Subjective measurements, like those Dark Shikari and all his worshipers espouse, are inherently flawed because they are just an opinion; determining the quality of an encoder by "what looks better" is like asking "which car is the nicest?" or "which cheeseburger tastes the best?"

These are questions that have no valid answer. You can say which car is quicker 0-60, which has more horsepower, which one has the lower coefficient of drag, and so on, but you can't say this car is nicer.

I'm telling you right now, if you encode 2 files and one has a PSNR of 47 dB across all three YUV channels and one has a PSNR of 50 dB across YUV, then the latter is of higher quality. The mistake people make, which leads them to believe that PSNR is not a good indicator of quality, is that they usually only look at the Y channel: for instance, the first file will have a PSNR-Y of 47 dB but a PSNR-U and PSNR-V of 45 dB, while the second will have a PSNR-YUV of 46 dB. Obviously the latter will be of higher quality, but people will only look at the PSNR-Y measurement and then say "see, PSNR is a poor indicator of quality".

    If people knew what they were doing they wouldn't believe such silly things.
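The Y-only pitfall described above can be illustrated with a toy calculation. The per-plane MSE numbers below are hypothetical, chosen only to reproduce the 47/45 dB versus 46 dB scenario; PSNR here is the standard 10·log10(255²/MSE) for 8-bit samples.

```python
import math

def psnr(mse: float, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for 8-bit samples."""
    return 10.0 * math.log10(peak * peak / mse)

def psnr_yuv(mse: dict) -> float:
    """Combined PSNR over all three planes, equal weighting for simplicity.
    (Real tools such as x264's weighted PSNR weight luma more heavily,
    e.g. 6:1:1, which can change the ranking.)"""
    return psnr(sum(mse.values()) / 3)

# Hypothetical per-plane mean squared errors for two encodes of one clip.
encode_a = {"Y": 1.30, "U": 2.06, "V": 2.06}  # ~47 dB luma, ~45 dB chroma
encode_b = {"Y": 1.63, "U": 1.63, "V": 1.63}  # ~46 dB on every plane

for name, e in (("A", encode_a), ("B", encode_b)):
    print(name, "PSNR-Y:", round(psnr(e["Y"]), 1),
          "PSNR-YUV:", round(psnr_yuv(e), 1))
# A looks better on the Y plane alone, but B wins on the combined metric.
```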



  • Gusar
    replied
    Originally posted by sophisticles View Post
    Where this same x264 developer talks about using SSIM internally to gauge encoding quality and he has also admitted that x264 uses PSNR internally to determine quality.
    Yes, *specific* tools inside the encoder use these metrics to make decisions. Other tools make use of "magic" lambda values that were determined by... encoding videos several times with different settings and then _looking_ at the results and deciding which lambda value works best. AQ was tuned in a huge thread at doom9.org where people were testing different settings and then also _looking_ at the resulting encodes and providing sample pictures.

    So PSNR and SSIM have their uses, but they are worthless to determine *overall* video quality. You can't do two encodes, measure their psnr/ssim and determine the winner. Because you'll be pleasantly surprised if you actually go and look at those encodes. This can easily be proven by x264's tune switch - the psnr and ssim tunings will give higher scores in those metrics but it'll be the film tuning (or animation tuning if encoding classic cell-shaded animation) that gives the best video.

    It's exactly at the doom9.org forums that people, including and especially x264 developer Dark Shikari, have been harping for years how psnr and ssim aren't a measure of video quality. So it's really funny that you link to doom9.org of all places as supposed proof that my post was "hysterical". Nothing hysterical about my post, it's exactly from doom9.org and Dark Shikari's blog that I have my knowledge from. For example, this: https://web.archive.org/web/20150119...x/archives/458



  • sophisticles
    replied
    Originally posted by dwagner View Post
    I don't see what you are trying to say here. If SVT-AV1 is as slow as "x264 --preset veryslow", while not providing better quality at the same bit rate, then clearly x264 is the better choice for that bit rate. As others already stated: A comparison of only speed without the other dimensions considered is useless.
As I pointed out, x264's veryslow preset is considered "mastering" quality and CRF 15 is considered visually lossless relative to the source; by definition, as far as quality is concerned, it's impossible to beat x264 with those settings from a pure quality standpoint.

All you can do is match that quality at a smaller file size, which SVT-AV1 does, and/or beat it in terms of encoding speed, which, as evidenced by Michael's numerous tests, Intel's SVT family of encoders does easily.

    Originally posted by dwagner View Post
    Only if their combination of result dimensions becomes more competitive.
Currently, I see a lot of cases where even H.264 is preferable over HEVC, simply because spending lots of extra encoding CPU time comes with a very slim advantage in image quality, if any. Especially when plenty of bit rate is available, HEVC isn't preferable to H.264, and AV1 will certainly be much worse for a long time.
The x265 people have already embraced Intel's SVT-HEVC and now allow you to use the Intel encoder from within the x265 framework:

    https://x265.readthedocs.io/en/default/svthevc.html

    http://x265.org/x265-svt-hevc-house/

When a major player in the encoder market incorporates support for a competitor's product into their framework, the writing is on the wall.

