Intel FFmpeg Cartwheel 2023Q1 Brings Improved Multi-GPU Video Acceleration Support

    Phoronix: Intel FFmpeg Cartwheel 2023Q1 Brings Improved Multi-GPU Video Acceleration Support

    Intel's open-source "cartwheel-ffmpeg" project is their repository where they collect all of their FFmpeg patches prior to upstreaming. While the patches have been available in Git form, prior to the weekend Intel released their 2023Q1 queue of patches to this widely-used, open-source multimedia library...

  • #2
    Does this enable the long-awaited Intel Deep Link Hyper Encode https://www.intel.com/content/www/us...deep-link.html ?

    This feature further speeds up hardware encoding/transcoding when combining an Intel iGPU with an Intel dGPU (Alchemist for now). It is supposed to already be working on Windows, but I haven't tried it myself.

    I will give it a try on Linux though, since I happen to have a 12th Gen iGPU + DG2 Alchemist.

    • #3
      Originally posted by bezirg
      Does this enable the long-awaited Intel Deep Link Hyper Encode https://www.intel.com/content/www/us...deep-link.html ?

      This feature further speeds up hardware encoding/transcoding when combining an Intel iGPU with an Intel dGPU (Alchemist for now). It is supposed to already be working on Windows, but I haven't tried it myself.

      I will give it a try on Linux though, since I happen to have a 12th Gen iGPU + DG2 Alchemist.
      It's not Deep Link / Hyper Encode; that is only supported by their Windows oneVPL runtime. In these patches they wrote a `gopconcat` muxer to use together with the `select` filter, so that multiple GPUs can transcode different frame ranges and the outputs are then concatenated along group-of-pictures boundaries at the muxing stage.
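      For illustration only, here is a minimal sketch of the same idea using stock FFmpeg and two VAAPI render nodes; the device paths, the frame split point, and the codec are assumptions, and the cartwheel patches do this in a single pipeline with their `gopconcat` muxer rather than with separate commands like these:

      # First frame range on the iGPU (render node path is an assumption)
      ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 \
             -vf "select='lt(n,5000)',setpts=N/FRAME_RATE/TB,format=nv12,hwupload" \
             -c:v hevc_vaapi -an part0.mp4

      # Second frame range on the dGPU (second render node, also an assumption)
      ffmpeg -vaapi_device /dev/dri/renderD129 -i input.mp4 \
             -vf "select='gte(n,5000)',setpts=N/FRAME_RATE/TB,format=nv12,hwupload" \
             -c:v hevc_vaapi -an part1.mp4

      # Each re-encoded part starts on its own keyframe, so the stock concat
      # demuxer can stitch them back together; the gopconcat muxer is meant to
      # make this GOP-aware in a single run.
      printf "file 'part0.mp4'\nfile 'part1.mp4'\n" > parts.txt
      ffmpeg -f concat -safe 0 -i parts.txt -c copy output.mp4

      Audio is dropped (-an) to keep the sketch short; in practice you would encode it once and mux it back in at the end.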

      • #4
        nyanmisaka I think you are right. BTW, I have created an issue to track future Hyper Encode support on Linux: https://github.com/oneapi-src/oneVPL...gpu/issues/279

        • #5
          Originally posted by bezirg
          Does this enable the long-awaited Intel Deep Link Hyper Encode https://www.intel.com/content/www/us...deep-link.html ?

          This feature further speeds up hardware encoding/transcoding when combining an Intel iGPU with an Intel dGPU (Alchemist for now). It is supposed to already be working on Windows, but I haven't tried it myself.

          I will give it a try on Linux though, since I happen to have a 12th Gen iGPU + DG2 Alchemist.
          If you are transcoding files (not screen recording or live-stream transcoding), you could probably hack av1an to do this, and its chunked, VMAF-targeted encoding would give you better output quality than pure FFmpeg.

          It's not very useful unless you are really in a hurry to encode stuff... shrug
          Last edited by brucethemoose; 01 May 2023, 12:08 PM.
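          A hedged sketch of what such an av1an run could look like (the file names, quality target, and encoder parameters are placeholders, not values from this thread):

          # av1an splits the source at scene changes, encodes the chunks in
          # parallel, and probes each chunk until it hits the VMAF target.
          av1an -i input.mkv -o output.mkv \
                -e aom --target-quality 95 --workers 8 \
                -v "--cpu-used=6 --threads=2"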
