FFmpeg 7.0 Released With Native VVC Decoding & Multi-Threaded CLI


  • #11
    Originally posted by caligula View Post

    The thing is, companies like Intel already advertised almost 10 years ago that their integrated Quick Sync stuff (a tiny part of the CPU) does 12x real-time encoding, e.g.: https://www.intel.de/content/dam/www...ideo-guide.pdf - the same performance was available on 15W TDP laptop chips.

    Now, 8 years later you buy the highest end consumer CPU + best GPU and get 5x real-time. I see this as a problem.
    Only in your mind. Keep in mind that the 12x from 10 years ago wasn't at the resolutions and bit rates you'd be using today. It also depends entirely on the encoder: with SVT-AV1 I can easily hit 30 fps on 1080p30 content at a moderate bitrate on my 8th-gen i5, while libvpx-vp9 (what's left now that the rudimentary built-in VP9 encoder has been disabled) only manages 5 fps on the same content. Besides, your case is only a problem because you use badly optimized software encoders instead of hardware accelerators. Dedicated hardware will always be far faster, and more efficient, than anything running on a CPU.
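
    For a rough sense of that gap, ffmpeg's -benchmark flag times an encode directly (a minimal sketch; input.mp4 and the preset/bitrate choices are placeholders of mine, not settings from this thread):

      # SVT-AV1: higher -preset numbers trade quality for speed
      ffmpeg -i input.mp4 -c:v libsvtav1 -preset 8 -crf 35 -benchmark av1.mkv

      # libvpx-vp9: far slower at comparable settings; -row-mt 1 lets it use more threads
      ffmpeg -i input.mp4 -c:v libvpx-vp9 -b:v 2M -row-mt 1 -benchmark vp9.webm

      # Quick Sync hardware encode, if the iGPU and driver support it
      ffmpeg -i input.mp4 -c:v h264_qsv -b:v 4M -benchmark qsv.mp4

    The fps figure on ffmpeg's status line, divided by the source frame rate, gives the real-time multiple being argued about here.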

    • #12
      Originally posted by caligula View Post

      The thing is, companies like Intel already advertised almost 10 years ago that their integrated Quick Sync stuff (a tiny part of the CPU) does 12x real-time encoding, e.g.: https://www.intel.de/content/dam/www...ideo-guide.pdf - the same performance was available on 15W TDP laptop chips.

      Now, 8 years later you buy the highest end consumer CPU + best GPU and get 5x real-time. I see this as a problem.
      You are neglecting a few things here. 8 years ago, Intel was not claiming they could encode 12 real-time 4K streams; it was 12 720p streams, and a smaller number of 1080p streams*. Today, Intel is talking about 5 4K streams.

      1280×720 = 921,600 px × 12 streams = 11,059,200 × 30 fps** = 331,776,000 × 8-bit color*** = 2,654,208,000

      1920×1080 = 2,073,600 px × 6 streams = 12,441,600 × 30 fps** = 373,248,000 × 8-bit color*** = 2,985,984,000

      3840×2160 = 8,294,400 px × 5 streams = 41,472,000 × 60 fps = 2,488,320,000 × 10-bit color = 24,883,200,000
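
      Taking those figures at face value, the then-vs-now ratio is easy to check (plain shell arithmetic, only restating the numbers above):

        # raw pixel-throughput ratio: the 4K case vs. the 720p case
        echo 'scale=2; 24883200000 / 2654208000' | bc    # -> 9.37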

      Today's cards are pumping almost 10 times the pixels through every second compared to then. And that doesn't even factor in image quality, which has improved drastically over that period, meaning the cards are doing significantly more work per pixel.

      Then we need to mention HDR: not just 10-bit color, but the extra lighting metadata that today's videos need on top of that.

      * I might be thinking closer to 10 years back.
      ** Videos in 60fps were rare on YouTube because of this.
      *** 10-bit color in Nvidia hardware only arrived about 4-5 years ago.

      • #13
        Now that h264 is like 22 years old... is there a sufficiently old encoder that is patent free? I assume that videos encoded with v1 of h264 will still work in modern decoders?

        When was h264 first added to FFMPEG?

        • #14
          Originally posted by OneTimeShot View Post
          Now that h264 is like 22 years old... is there a sufficiently old encoder that is patent free? I assume that videos encoded with v1 of h264 will still work in modern decoders?

          When was h264 first added to FFMPEG?
          Well, it was finalised in 2003, so not quite 22 yet. And that was the Main Profile; High Profile, which current mainstream usage is built on, I believe started in 2005. So you should expect the patents to run until at least 2025 + 3-5 years.

          • #15
            Originally posted by ksec View Post

            Well, it was finalised in 2003, so not quite 22 yet. And that was the Main Profile; High Profile, which current mainstream usage is built on, I believe started in 2005. So you should expect the patents to run until at least 2025 + 3-5 years.
            Sure - but if I just want to encode videos that can play, the main profile should be fine?
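
            For what it's worth, pinning x264 to Main Profile is a one-flag job (a sketch; the file names, level, and CRF value are placeholders):

              # Main Profile H.264, decodable by essentially any hardware from the last 15 years
              ffmpeg -i input.mp4 -c:v libx264 -profile:v main -level 4.0 -crf 23 -c:a aac output.mp4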

            • #16
              Originally posted by Malsabku View Post
              Let me guess, it won't be part of Ubuntu 24.04 and Fedora 40?
              ~~The ffmpeg site offers official Deb and RPM packages.~~ Alternatively you can add an unofficial PPA.

              Edit: seems I was wrong, the links on the site don't give you a Deb package but just redirect to the packages already in each distro's repos. There is a static-build link under the package links, but it doesn't seem up to date with the latest version at the time of writing. Best bet is to compile it yourself for now.
              Last edited by tenchrio; 05 April 2024, 06:23 PM.
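
              If you do end up compiling it yourself, the basic flow is short (a sketch; the --enable flags are illustrative, pick the ones you actually need):

                # grab the 7.0 release tag and build it
                git clone --branch n7.0 https://git.ffmpeg.org/ffmpeg.git
                cd ffmpeg
                # external encoders like x264 and SVT-AV1 are opt-in at configure time
                ./configure --enable-gpl --enable-libx264 --enable-libsvtav1
                make -j"$(nproc)"
                sudo make install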

              • #17
                Originally posted by skeevy420 View Post

                That highlights how much a "powerful workstation" has changed in the past 10-15 years. The days of 4 to 6 cores with a high IPC being the CPU in a high-end workstation are long gone. These days, the "low end" of workstations starts with AMD 8c/16t X3D chips or an Intel with 2 blazing fast 7GHz cores, 6 regular cores, and 8 E cores. From there it's just adding more and more cores that may or may not utilize 3D cache, run at blazing fast speeds, or be E/C cores.

                That's not even considering how much more powerful something like a 5800X3D or 7800X3D is when compared to an FX 8350 or when comparing Intel generational equivalents. Not only is their "high" core count our starting core count, each modern core will get things done at least 3x faster.

                Basically, a 2014 high end workstation is less powerful than a 2024 Walmart Special. Parts are just that much better these days. Going from a high end dual CPU Intel Westmere X5680 with 48GB DDR3 1333 (12c24t) to a mediocre Zen 2 4650G APU (6c/12t) with 32GB DDR4 3800 with the same RX 580 is what opened my eyes to how much better modern hardware performs.
                I think there's a bit more nuance to what people's day-to-day experience would be if you look at a slightly more recent timeframe. Your Westmere example was a 32nm CPU. Even the 22nm Haswell Xeons are almost a decade old at this point; a pair of 12-core/24-thread 2690v3 CPUs was pretty common for high-end workstations of that era. Broadwell was the first 14nm Xeon, and it's 8 years old now.

                There have been a bunch of nice compounding IPC improvements since then, but just as importantly the clock speeds have gone to the moon. My 12-core 7900X3D can out-muscle a pair of 14-core/28-thread 2690v4 Xeons in multithreaded benchmarks, running at 4.8GHz all-core vs. 3.2GHz for the 2690v4. The really impressive thing is that it does this with half the power of the dual-Xeon system, and those Xeons had a list price of $2k each when they launched. But back to the point of my rambling: my day-to-day experience with the 7900X3D is less amazeballs than I was hoping.

                Where older workstations are still a great option is total cost. You can pick up a 14-core/28-thread 2680v4 for $15 USD on eBay. A single-socket Z440 with a CPU like that and 32GB of RAM for under $200 can be a pretty compelling option for a "basic computing" machine.

                And just to depress myself a bit more, here's a benchmark I just ran on my youngest son's Cascade Lake based Lenovo P520 and my own machine, since they have the exact same GPU. At 1080p, when they are CPU bound, the 7900X3D is faster in a big way. At 1440p, where they are GPU bound?

                Xeon W-2235 / 6700 XT: [benchmark screenshot]

                Ryzen 7900X3D / 6700 XT: [benchmark screenshot]


                Yeah, the low (minimum framerate) on the 7900X3D is 17% faster. But I paid less than $200 for the whole W-2235 workstation, minus the GPU; the 7900X3D alone was $391, and that whole build, again minus the GPU, cost over 5X what the P520 did.

                • #18
                  Originally posted by OneTimeShot View Post
                  Now that h264 is like 22 years old... is there a sufficiently old encoder that is patent free? I assume that videos encoded with v1 of h264 will still work in modern decoders?

                  When was h264 first added to FFMPEG?
                  x264 has been patent free for 20 years. What more do you ask for?

                  • #19
                    Originally posted by tenchrio View Post

                    The ffmpeg site offers official Deb and RPM packages. Alternatively you can add an unofficial PPA.
                    Or compile it yourself, although that's quite a pain in the butt.

                    • #20
                      Originally posted by Artim View Post

                      Or compile it yourself, although that's quite a pain in the butt.
                      Depending on how trusting you are, there are also people who build updated statically linked versions like these...
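
                      If you do trust one, a static build needs no installation at all (a sketch; the archive name below is a typical example, not a pointer to any specific build):

                        # unpack and run in place; no system libraries needed
                        tar xf ffmpeg-release-amd64-static.tar.xz
                        ./ffmpeg-*-amd64-static/ffmpeg -version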
