H.264 SVC / Temporal Encoding Wired Up For AMD's Linux Graphics Driver


  • #11
    SVC is patented, hence to be avoided.



    • #12
      Originally posted by M@yeulC View Post

      I transcode videos with ffmpeg on my personal jellyfin instance, and recently bought a WX 2100 (60€ second hand) for that task. I've been quite happy with it quality-wise, although it could be a bit faster. Definitely better than the old i3-2130's HD Graphics 2000 quicksync encoding.

      Edit: The WX 2100 I have is Polaris 12 (Lexa), same gen as the RX 550. It uses VCE 3.4, not VCN.
      I look forward to trying a Navi GPU when those are accessible price-wise.

      I certainly think there's a market for dedicated HW encoders, but it could be a small one.
      Yeah, the "Lexa" GPU is precisely what the RX 550 that I have is.

      I mean the quality of VCE is not bad per se, but if you compare it to other encoders you can easily see a big difference.
      But if you're happy with what you have, excellent. That's the whole point.

      And yeah, there is definitely a market for HW encoders. If you encode videos often, it's just impractical to use x264 or x265. They're extremely slow compared to GPU encoders, even on fast CPUs.
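
      For reference, that kind of GPU-accelerated encode can be sketched with ffmpeg's VA-API path. This is a hedged example: the render node path, input filename, and bitrate are placeholder assumptions, and the available quality options vary per driver.

```shell
#!/bin/sh
# Sketch: H.264 hardware encode via VA-API with ffmpeg (placeholder paths/bitrate).
# Guarded so it degrades gracefully when ffmpeg or the inputs are absent.
if command -v ffmpeg >/dev/null && [ -f input.mkv ] && [ -e /dev/dri/renderD128 ]; then
  ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128 \
         -i input.mkv \
         -vf 'format=nv12,hwupload' \
         -c:v h264_vaapi -b:v 4M \
         -c:a copy \
         output.mkv
else
  echo "ffmpeg/input/render node not available; command shown for reference only"
fi
```

      On a VCE-era Polaris card like the WX 2100, h264_vaapi and hevc_vaapi are typically what the driver exposes; `vainfo` lists the actual entrypoints.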



      • #13
        Originally posted by idash View Post

        I don't think it will help much.
        The reason I say that is that the quality of video encoding on Radeon GPUs is so surprisingly bad that I wouldn't recommend it at all if video encoding is a big deal to you.

        I (thankfully) have access to GPUs from the 3 makers, and I used FFmpeg to test them. I spent much time trying to tune the settings of every encoder to get the best possible quality out of it.
        The results were like this: Nvidia's NVENC is hands down the best (arguably on par with x264), Intel's QSV is decent, and Radeon is the worst, whether using AMF on Windows or VA-API on Linux (which is even worse, the bottom of the barrel).

        So it seems that Radeon still has a long way to go to reach the quality of even QuickSync.
        I'm not an expert though, so I may have missed a couple of tweaks here and there, but I still don't think it would help much. It's that bad.

        Things may have improved with VCN, I don't know (what I have is an RX 550 in a laptop).

        PS: You may have noticed I haven't said "AMD" at all. It's been over a decade now and I still think of Radeon as ATi rather than AMD, lol
        Nobody serious uses AMD hardware encoding. It was hoped that with the RDNA2 cards they would up their game on the hardware encode front, as they had taken a lot of stick on the previous cards. But alas, no. Pretty much everyone with an AMD card renders to a big intermediary file format and then re-encodes into h.264 with ffmpeg on the CPU. I have come to the conclusion that Peggy Sue doesn't care about creators at all. AMD basically owns gaming playback and they are happy to stay in their lane.

        h.264 will continue to do well over the short term, but everyone is releasing cards with AV1 encode/decode support, phones are shipping with AV1 decode, and all the big players are going to be pushing their stuff in AV1. Very few people will re-encode AV1 to h.264. Existing content will exist for a long time, but video has a short tail, so AV1 will take over relatively quickly. h.265 will never reach the heights h.264 reached.



        • #14
          People with AMD cards will use the HEVC HW encoder, and then transcode using the x264 slow preset for sharing.
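
          That two-step workflow can be sketched as below. This is an illustrative example only: the filenames, bitrates, and VA-API device path are assumptions, and `hevc_vaapi` presumes a VA-API-capable AMD card.

```shell
#!/bin/sh
# Step 1: fast HEVC hardware encode to a high-bitrate intermediate (VA-API assumed).
# Step 2: CPU re-encode with x264's slow preset for the final shareable file.
# Guarded so it degrades gracefully when ffmpeg or the inputs are absent.
if command -v ffmpeg >/dev/null && [ -f capture.mkv ] && [ -e /dev/dri/renderD128 ]; then
  ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128 -i capture.mkv \
         -vf 'format=nv12,hwupload' -c:v hevc_vaapi -b:v 50M -c:a copy intermediate.mkv
  ffmpeg -i intermediate.mkv -c:v libx264 -preset slow -crf 20 -c:a copy final.mp4
else
  echo "ffmpeg/inputs not available; commands shown for reference only"
fi
```

          The high intermediate bitrate is the point: the fast HW pass only needs to preserve quality, and the slow x264 pass recovers the compression efficiency.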



          • #15
            Originally posted by M@yeulC View Post
            It's a shame SVC isn't used more in the FOSS world. I think Google Chrome and AOM have some support for that, but I haven't seen a suitable container format, nor do I think ffmpeg supports it (I only found this question from 11 years ago).
            The container is RTP. SVC isn't so much for static storage; it's better targeted at real-time transmission. SVC works fine in Chrome for the VP9 and AV1 codecs too. H264 SVC is patented, so most people will avoid using it, especially when most HW decoders don't support it anyway.



            • #16
              Originally posted by Orphis View Post

              SVC isn't so much for static storage; it's better targeted at real-time transmission.
              Yeah, that's one of the gripes I have with it. My ideal use case is more static, ahead-of-time encoding; otherwise a lot of the usual SVC criticism applies.

              I feel like static storage hasn't been explored enough. It would do wonders for peertube where peers could exchange parts of the same file, regardless of the quality they pick, for instance.
              Jellyfin could arguably use RTP, but would need to generate multiple streams when transcoding. A client could buffer the baseline more aggressively, to allow for a smoother fallback.



              • #17
                Originally posted by M@yeulC View Post
                Yeah, that's one of the gripes I have with it. My ideal use case is more static, ahead-of-time encoding; otherwise a lot of the usual SVC criticism applies.

                I feel like static storage hasn't been explored enough. It would do wonders for peertube where peers could exchange parts of the same file, regardless of the quality they pick, for instance.
                Jellyfin could arguably use RTP, but would need to generate multiple streams when transcoding. A client could buffer the baseline more aggressively, to allow for a smoother fallback.
                Remember that SVC isn't meant to be the most efficient way to encode a stream for a given resolution. It's fine when it comes to real-time, as you have different priorities, but otherwise it's not a great tradeoff. Segmented files at different resolutions, like you would get in HLS, work just fine and are easy to put behind a CDN. SVC also requires an active server connection to receive packets for the right resolutions, which doesn't always scale too well.

                As for media centers, since you don't need real time, you can spend enough time buffering the content and getting a good estimate of the available bandwidth, which you can then feed as parameters to your transcoder. It'll do a better and easier job with a single spatial layer and a bitrate target than with multiple layers. There are lots of HW encoders that will generate the stream for you with a single spatial layer, but SVC encoding, for any codec, is always a landmine in HW. Decoding sometimes works a bit better, but I've seen lots of decoders failing with valid bitstreams too.

                It's because of all those HW issues that browsers rarely used hardware acceleration for real-time use cases (aka WebRTC).
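
                As a rough sketch of the HLS-style alternative described above, ffmpeg's HLS muxer can produce independent single-spatial-layer renditions at fixed bitrate targets, segmented for CDN delivery. The source filename and ladder values here are illustrative assumptions.

```shell
#!/bin/sh
# Two independent single-spatial-layer renditions, segmented for HLS.
# Bitrates/resolutions are illustrative; a real ladder usually has more rungs.
# Guarded so it degrades gracefully when ffmpeg or the source is absent.
if command -v ffmpeg >/dev/null && [ -f source.mkv ]; then
  ffmpeg -i source.mkv \
         -map 0:v -map 0:v \
         -filter:v:0 scale=-2:1080 -c:v:0 libx264 -b:v:0 5M \
         -filter:v:1 scale=-2:480  -c:v:1 libx264 -b:v:1 1M \
         -f hls -hls_time 6 \
         -var_stream_map "v:0 v:1" \
         -master_pl_name master.m3u8 \
         stream_%v.m3u8
else
  echo "ffmpeg or source.mkv not available; command shown for reference only"
fi
```

                Each rendition is an ordinary single-layer stream, so any HW or SW decoder can play it; the player switches renditions between segments instead of dropping SVC layers.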
