Higher Quality AV1 Video Encoding Now Available For Radeon Graphics On Linux


  • #21
    I was quite disappointed with how bad AMD's AV1 encoder is; Intel's and Nvidia's are a decent amount better. But even then I would recommend software encoding: both SVT-AV1 and aomenc can do way better than all of the hardware encoders available, and SVT-AV1 does it at pretty good speeds too, even without Av1an. If you have a relatively beefy CPU (8+ cores), SVT-AV1 could very well be a better choice even for streaming some games, and more so if you do manual CPU core management.
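    For anyone who wants to try it, a minimal sketch of the software route (assuming an ffmpeg build with libsvtav1 enabled; the file names and settings here are illustrative, not a tuned recommendation):

        # SVT-AV1 encode; preset 8 is a rough speed/quality middle ground,
        # lower presets compress better but run slower
        ffmpeg -i input.mkv -c:v libsvtav1 -preset 8 -crf 32 -g 240 -c:a copy output.mkv

    On an 8-core CPU that should hover around or above realtime at 1080p; drop to a lower preset when encode time doesn't matter.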



    • #22
      Originally posted by avis View Post
      Hardware [accelerated] video encoding is good only for streaming and video conferencing; otherwise software encoding should be preferred.
      Agreed. I've done a fair bit of testing as I would love to use the graphics card to encode at really high speed. But in my experience hardware encoding delivers worse quality and MUCH worse compression.



      • #23
        Originally posted by avis View Post

        "Trust me bro". Not a single screenshot, not a single proof, obviously talking about bitrates low enough that no archival quality (visually lossless) can even be theoretically achieved.

        We have very different definitions of the words, sir. You prefer to throw words like "BS" without putting your money where your mouth is. You prefer superlatives which don't exist in my world unless I can vouch for them.

        When I say "excellent" encoding quality I mean the tiniest of details and imperfections are preserved. I've seen what software SVT-AV1 produces at its highest quality mode and it was a horrible blurfest. Now you're claiming that with hardware video encoding "the quality was indistinguishable from the original video"? I'm sorry, but that's a grand deception.

        x264 at preset veryslow/placebo requires 30-50Mbit bitrate to achieve visually lossless encoding for 1080p/24fps video. AV1 would need at the very least 20-30Mbit to do the same. 15Mbps hardware (i.e. realtime) AV1 encoding achieving the same results? That's magic. I mean, it's just a blatant lie. I'm not sure we'll ever have such a codec. For instance, lossless H.264 and H.265 have approximately the same compression ratio despite all the advances in the latter.
        I didn't even know that you could upload pictures on this forum...

        And you are semantically incorrect, sir. "Excellent" means "excellent", not "lossless". What you've defined is lossless, which takes us from near-impeccable image quality to actually impeccable image quality. That kind of difference means going from 99% image quality to 100%, and going from 10Mb/s to 50Mb/s. Which is, as per my original statement, bullshit.

        The loss of 1% or less of image quality is insignificant to almost all people, and sits squarely in ideological territory, not in real-life use cases. In reality, you could put a sample of 10,000 people through a blind A/B test of a 15Mb/s AV1 HW live recording versus a 60Mb/s software AV1 encode, and find that the crushing majority of them, possibly upwards of 80%, on a standard-size monitor (say, 1080p 24", 1440p 28", or 4K 32" or even 40"), would be answering on guesswork when asked which image is the better-looking one.

        Idealism isn't going to make people's videos better. It will however fill my SSD at a blazingly pointless speed.
        I've been very happy with my HW recordings, and I'd wager that a man with an RTX 4080 would be even happier with his, and yet both he and I would find it difficult to clearly differentiate the image quality without zooming in and studying the details.
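        For what it's worth, a hedged sketch of the kind of HW recording I mean, done through VAAPI on a Radeon (this assumes ffmpeg 6.0+ with a Mesa driver that exposes AV1 encode; the device path and the 15M bitrate are illustrative):

            # AV1 hardware encode on AMD via VAAPI
            ffmpeg -vaapi_device /dev/dri/renderD128 -i gameplay.mkv \
                   -vf 'format=nv12,hwupload' -c:v av1_vaapi -b:v 15M \
                   -c:a copy recording.mkv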



        • #24
          Originally posted by lakerssuperman View Post
          For the home server use case I target, whether it be Plex or Jellyfin or whatever, the conversation for encoding always comes down to encode time / file size / visual quality. There is a balance to be struck. If I can get the same visual quality in a lower file size at the cost of encoding time, my personal choice would be to go that route. Some people want quicker encodes and that's where hardware encoding comes in. For them, a good hardware encode is enough. Maybe they don't have a discerning eye, or maybe they have an old TV on which you can't see the difference in detail levels.
          Quite perfectly summarised.
          I have a 28 inch 4K monitor and I'm quite picky with PPI; my 32 inch monitor right next to it seems to "not have enough" to my eyes.
          And yet, the moment where HW encoding felt insufficient in the 10-20Mb/s range never came. You can feel the improvement between 10, 12, 15, and 18, sure. But the returns are very clearly diminishing, especially past 15, where the difference is hard to find.

          Practicality means usability. The usability of software AV1 is very low given the strain on CPUs, and it's quite pointless for 99% of cases and users. HW is an excellent option, and the first AV1 encoding generations from Intel, AMD, and Nvidia have been overall very satisfactory IMO. AMD is, as usual, behind the other two, but not so seriously that they're not an entirely viable option, just the least good one. The era when their H.264 encoder was so far below the others that it ruled them out for HW encoding is... still here, but it's superseded by an AV1 encoder about which there is no serious complaint. They should be proud of themselves on this.

          It's far from perfect to set up (docs are missing, the common AMD disease), but if you strike that balance, for streaming or recording, really, I literally cannot complain. I can't even find a real gripe. I'm producing better quality than 99% of the currently online videos that were live recorded or streamed (not counting those recorded with a camera, obviously).



          • #25
            Originally posted by Mahboi View Post
            Usability of software AV1 is very low with the strain on CPUs, and quite pointless for 99% of cases and users.
            This is absolutely not the case at ALL. SVT-AV1 works fine on MANY x86 CPUs, achieving or surpassing real-time encoding with typically the same or better results than libx265. It only becomes unusable when you have a real potato of a CPU. For people with good CPUs, you will get equal or better real-time performance with quality-to-size ratios easily surpassing those of hardware encoders. Will it be faster than hwenc? No, but equal to or greater than realtime (30fps for pre-recorded content or 60fps for streams; my baseline is 1080p streams) can be achieved with a Ryzen 2600 while playing lighter CPU games with some decent core management, as sketched below.

            With a 5600 you can easily surpass GPU encoding.
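            To illustrate the core-management point, a minimal sketch (the core numbers and settings are illustrative; taskset pins the encoder to specific threads so the game keeps the rest, and this assumes an ffmpeg build with libsvtav1):

                # Pin ffmpeg to 6 of the 2600's 12 hardware threads and tell
                # SVT-AV1 to target the same parallelism; the game gets the rest
                taskset -c 0-5 ffmpeg -i capture.mkv -c:v libsvtav1 -preset 10 -crf 38 \
                        -svtav1-params lp=6 -c:a copy stream.mkv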



            • #26
              Mahboi

              People who really work with video and video codecs can instantly recall the website where images can be compared, and that's https://imgsli.com

              I'm no longer interested in this conversation and any previous replies. Sorry. I've asked at least three times for images to compare; no one has provided any.

              I care about visually lossless encoding a lot. No, it's not PSNR/SSIM (those can be cheated and I find them insufficient) or anything like that. I trust my eyes. In fact that's the only way I encode/reencode videos. Almost no one in this discussion does. That's the end of the story.

              Consumer HW video encoders totally suck. If you're OK with their quality, so be it. I'm not even content with what libaom (the reference software AV1 encoder) can produce at very high bitrates, which makes it clear that what you find "sufficient" and "good enough" is a horrible blurfest which decimates a ton of fine details. We just need two completely different things from a video codec.

              Visually lossless is when I compare the source and the result and I find no differences aside from maybe slightly altered colors.
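              If anyone does want to put a comparison on imgsli, a minimal sketch for grabbing the same frame from source and encode as lossless PNGs (the timestamp and file names are placeholders):

                  # Extract one frame from each file at the same timestamp
                  ffmpeg -ss 00:00:05 -i source.mkv -frames:v 1 source_frame.png
                  ffmpeg -ss 00:00:05 -i encoded.mkv -frames:v 1 encoded_frame.png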

              I could have given you a short video clip to encode using your favorite archiving preset or method and shown right away how horrible the result is. Alas, you don't care about that.

              Actually, here's a sample 10-second clip. The source video is 15Mbps, yeah, 1024x768 at 15fps. Quite a lot, but that's MJPEG. Try to compress it to anything below that (even 10Mbps will suffice) and I'll show you how your "good enough" is absolute baloney.
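              For anyone taking that up, a hedged one-liner of the sort of re-encode being challenged (file names are placeholders, and it assumes libsvtav1 in ffmpeg):

                  # Re-encode the MJPEG sample to AV1 at 10Mbps
                  ffmpeg -i sample_mjpeg.avi -c:v libsvtav1 -preset 4 -b:v 10M sample_av1.mkv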
              Last edited by avis; 13 October 2023, 04:48 AM.



              • #27
                Originally posted by avis View Post
                I'm no longer interested in this conversation and any previous replies.
                Thank god. Now the productive discussions may begin.



                • #28
                  Originally posted by lakerssuperman View Post
                  For the home server use case I target, whether it be Plex or Jellyfin or whatever, the conversation for encoding always comes down to encode time / file size / visual quality. There is a balance to be struck. If I can get the same visual quality in a lower file size at the cost of encoding time, my personal choice would be to go that route. Some people want quicker encodes and that's where hardware encoding comes in. For them, a good hardware encode is enough. Maybe they don't have a discerning eye, or maybe they have an old TV on which you can't see the difference in detail levels.
                  Yes, I have the (non-transcoded) media from a range of legal sources going back to things like MPEG-2 DVDs on my server. If it cannot direct play, Jellyfin picks the best hardware encoder it has that the client supports, to transcode that on the fly. On my home network it can throw so many bits at it that it will be nearly visually indistinguishable from the original for me with nearly any codec. If I am away from home with limited bandwidth, the codec can make a noticeable difference.

                  At the moment most things end up H.264 because of client support. It will use H.265 if the client supports it, but most don't, and my H.265 hardware encoder doesn't seem as optimised as its H.264 one, so I don't see as much benefit as I had hoped. I hope that AV1 will give me a somewhat bigger bump, because people seem more interested in its comparative performance, and because of broader client support (largely because I believe Google is pushing device manufacturers to support AV1 for YouTube).

                  If you look at something like (pulled basically at random, but it is along the lines of many others):

                  [Chart: software vs. hardware encoder quality comparison. From Tom's Hardware, image credit: YouTube - EposVox]

                  It is basically what everyone is saying: software encoders outperform hardware encoders. But if I can get the equivalent of x264 veryslow in hardware with no CPU use and low power, that is great for my use case. If I was converting something once to be used in the other format forever, I would use a software encoder on high quality settings.
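                  Charts like that one are usually built on VMAF scores; a hedged sketch for reproducing a data point yourself (assuming an ffmpeg build with libvmaf enabled; file names are placeholders):

                      # Score an encode against its source; the first input is the
                      # distorted file, the second the reference, and the mean VMAF
                      # score is printed to the log
                      ffmpeg -i encoded.mkv -i source.mkv -lavfi libvmaf -f null -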
                  Last edited by Aaron; 13 October 2023, 04:56 AM.



                  • #29
                    Originally posted by Aaron View Post

                    Yes, I have the (non-transcoded) media from a range of legal sources going back to things like MPEG-2 DVDs on my server. [...] But if I can get the equivalent of x264 veryslow in hardware with no CPU use and low power, that is great for my use case.
                    Great pull!! EposVox did a great job highlighting how good the AV1 hardware encoding is in those graphs. Totally my point as well. If you can approach x264 veryslow with hardware, that's damn impressive, even if software encoding is still better for maximum quality.



                    • #30
                      Originally posted by sophisticles View Post
                      ...
                      The reality is that none of this matters; everything I have read indicates that the fight for encoding supremacy will be fought among the different VVC encoders.
                      ...
                      VVC could cure cancer and I still wouldn't use it.

