OpenShot 2.5.1 Released With Performance Improvements


  • OpenShot 2.5.1 Released With Performance Improvements

    Phoronix: OpenShot 2.5.1 Released With Performance Improvements

    Released last month was the big OpenShot 2.5 release that brought hardware acceleration for video encode/decode via VA-API and NVENC/NVDEC, SVG vector graphics support, Blender 2.8+ integration support, import/export to Adobe Premiere and Final Cut Pro, and much more. Out now is OpenShot 2.5.1 with a few more improvements sprinkled on top...

    http://www.phoronix.com/scan.php?pag...2.5.1-Released

  • #2
    Nice, but why not port the rendering pipeline to OpenCL/Vulkan/OpenGL in the future?

    (Before you say "But today we have 64-core Threadripper CPUs": laptop users would benefit too, plus such a CPU is expensive to own.)



    • #3
      Originally posted by tildearrow View Post
      Nice, but why not port the rendering pipeline to OpenCL/Vulkan/OpenGL in the future?

      (Before you say "But today we have 64-core Threadripper CPUs": laptop users would benefit too, plus such a CPU is expensive to own.)
      If VA-API works, it's not a problem.



      • #4
        Originally posted by andre30correia View Post

        If VA-API works, it's not a problem.
        But why? What about 4K material (as an example)?

        4K rendering is definitely harder to do... (Kdenlive even struggles to preview 4K videos)

        Edit: Ugh, one like.
        Comments like these are why open-source video editing will be a perennial dream.
        Almost everyone in the closed-source scene has implemented hardware rendering by now... and us?:

        - Blender: Not for the VSE.
        - Cinelerra: nope.
        - Natron: nope. None of the default plugins support hardware acceleration, even though the program claims to be capable of it.
        - OpenShot: nope, except for encode/decode.
        - Kdenlive: yes, partially. However, it is buggy and only supports maybe 5% of effects/features. The rest is still done in software.
        - The rest of the MLT-based video editors: nope. Never.
        - PiTiVi: nope.
        - LiVES: nope.
        Last edited by tildearrow; 03-03-2020, 09:06 PM.



        • #5
          Originally posted by tildearrow View Post

          But why? What about 4K material (as an example)?

          4K rendering is definitely harder to do... (Kdenlive even struggles to preview 4K videos)

          Edit: Ugh, one like.
          Comments like these are why open-source video editing will be a perennial dream.
          Almost everyone in the closed-source scene has implemented hardware rendering by now... and us?:

          - Blender: Not for the VSE.
          - Cinelerra: nope.
          - Natron: nope. None of the default plugins support hardware acceleration, even though the program claims to be capable of it.
          - OpenShot: nope, except for encode/decode.
          - Kdenlive: yes, partially. However, it is buggy and only supports maybe 5% of effects/features. The rest is still done in software.
          - The rest of the MLT-based video editors: nope. Never.
          - PiTiVi: nope.
          - LiVES: nope.
          Rendering, by definition, means encode, so as you pointed out OpenShot does support hardware encode.

          If you meant GPU-powered filters, Shotcut has you covered. For whatever reason the developers have hidden the option in a config file; it used to be available as an "experimental" feature. They claimed it was buggy, but I never had a problem with it and have been using it for years.

          It also is very fast. I have 3 systems: a Ryzen 1600 with 8 GB DDR4-2400, an i7 4790-based Xeon with 16 GB of DDR3-1600, and an i3 7100 with 16 GB DDR4-2666. Using a 50-minute 4K file and applying a sharpen, a saturation, a contrast, and a low-pass filter, the i7 takes over 9 hours to encode that file to x264 (veryfast) with Shotcut. The 1600 is barely faster, due to the lower clock speed and the bottleneck caused by this filter combination. If I add a GTX 1050 to the mix and enable GPU-powered filters, that time drops to about 6 hours. If I try to use NVENC, the time actually increases to about 6.5 hours.

          The i3 7100, using the iGPU with GPU filters enabled and VA-API encoding, is the fastest of the bunch, finishing in just under 5 hours. Long story short: try Shotcut.

          Side note, I can't wait for the Comet Lake based i3 to come out.



          • #6
            Originally posted by Spooktra View Post

            Rendering, by definition, means encode, so as you pointed out OpenShot does support hardware encode.
            No. By "rendering" I mean the process in which the video frames are generated, not the one in which they are encoded.

            Originally posted by Spooktra View Post
            If you meant GPU-powered filters, Shotcut has you covered. For whatever reason the developers have hidden the option in a config file; it used to be available as an "experimental" feature. They claimed it was buggy, but I never had a problem with it and have been using it for years.

            It also is very fast. I have 3 systems: a Ryzen 1600 with 8 GB DDR4-2400, an i7 4790-based Xeon with 16 GB of DDR3-1600, and an i3 7100 with 16 GB DDR4-2666. Using a 50-minute 4K file and applying a sharpen, a saturation, a contrast, and a low-pass filter, the i7 takes over 9 hours to encode that file to x264 (veryfast) with Shotcut. The 1600 is barely faster, due to the lower clock speed and the bottleneck caused by this filter combination. If I add a GTX 1050 to the mix and enable GPU-powered filters, that time drops to about 6 hours. If I try to use NVENC, the time actually increases to about 6.5 hours.

            The i3 7100, using the iGPU with GPU filters enabled and VA-API encoding, is the fastest of the bunch, finishing in just under 5 hours. Long story short: try Shotcut.

            Side note, I can't wait for the Comet Lake based i3 to come out.
            Really? A 50-minute 4K video taking 5 hours in a video editor? Come on... If the video runs at 30 fps, it should take no more than 2 hours!
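            Working that complaint out explicitly (a quick sanity check, assuming a 30 fps source, since the post does not say the actual frame rate):

```python
# Throughput implied by the numbers above, assuming a 30 fps source.
frames = 50 * 60 * 30              # 50-minute clip at 30 fps = 90,000 frames
export_seconds = 5 * 3600          # fastest reported export: ~5 hours
fps_achieved = frames / export_seconds
print(fps_achieved)                # 5.0 frames rendered+encoded per second
print(export_seconds / (50 * 60))  # 6.0x slower than real time
```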

            Also, Shotcut uses Movit for hardware-accelerated rendering, and so does Kdenlive, so yeah...
            Plus, using the hardware-accelerated code path requires rewriting your project entirely to use the hardware effects... unlike many closed-source video editors, where you just toggle an option et voilà, hardware acceleration; no need to change anything else in your project...
            And even the hardware-accelerated code path still does a lot of GPU-CPU transfers... I can understand some of these, like retrieving the final frame or sending the decoded video frames, but... not sending a filtered frame back to the CPU, right? Isn't that redundant? (This may explain the 5-hour time.)
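            A toy count of per-frame round trips makes that redundancy concrete (the transfer counts below are my own illustrative assumptions, not measurements of Movit, Shotcut, or Kdenlive):

```python
# Toy model: count GPU<->CPU transfers per exported frame for two pipeline
# layouts. The counts are illustrative assumptions, not measured behavior.

def transfers_per_frame(num_filters, readback_after_each_filter):
    upload = 1        # decoded frame uploaded to the GPU once
    download = 1      # final frame read back once for the encoder
    if readback_after_each_filter and num_filters > 1:
        # Each intermediate result bounces to the CPU and is re-uploaded.
        bounces = 2 * (num_filters - 1)
    else:
        bounces = 0   # intermediate frames stay resident on the GPU
    return upload + bounces + download

# Four filters with per-filter readback: 8 transfers per frame.
# Four filters kept on the GPU end-to-end: just 2.
```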
            Last edited by tildearrow; 03-04-2020, 04:52 PM.



            • #7
                Oh nice. I tried OpenShot.

                It crashed. What an amazing piece of software!
                Good job guys, you made the best video editor in the world ever! I have made countless (hint: zero) videos with this tool, and it seems to work!

                ImportError: libselinux.so.1: cannot open shared object file: No such file or directory
                Last edited by tildearrow; 03-05-2020, 04:09 AM.



              • #8
                Originally posted by tildearrow View Post
                Bunch of stupid shit.
                I just realized you do not know wtf you are talking about.



                • #9
                  Originally posted by Spooktra View Post

                  I just realized you do not know wtf you are talking about.
                  What "stupid s**t"?! Are you kidding me?

                  Instead of trying to win the argument like this, you've got to let me explain!

                  The process of exporting a project from a video editor goes like this:

                  1. Render a frame
                  2. Encode this frame and output to file
                  3. Repeat until we reach the end of the timeline

                  If in your mind "render" means "encode", then please tell me: where do the frames come from? Where is the process that generates the frames to be encoded?
                  Sure, maybe you include "render" when you say "encode", but then that's not just encoding...
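                  The three-step export loop above, with the render and encode stages kept separate, can be sketched like this (every name here is a hypothetical stand-in, not OpenShot's actual API):

```python
# Minimal sketch of a video editor's export loop. "Render" builds a raw
# frame from the timeline (compositing + effects); "encode" compresses
# that frame into the output stream. Hypothetical names, not a real API.

def render_frame(timeline, frame_number):
    """Composite every clip active at this frame into a raw pixel buffer."""
    frame = [0] * 16  # stand-in for a raw RGB frame
    for clip in timeline:
        if clip["start"] <= frame_number < clip["end"]:
            frame = [p + clip["value"] for p in frame]  # fake compositing
    return frame

def encode_frame(frame, output):
    """Compress the rendered frame and append it to the output stream."""
    output.append(sum(frame) % 256)  # stand-in for a real codec

def export(timeline, total_frames):
    output = []
    for n in range(total_frames):          # 3. repeat until end of timeline
        frame = render_frame(timeline, n)  # 1. render a frame
        encode_frame(frame, output)        # 2. encode it, write to the file
    return output
```

                  The whole GPU debate in this thread is about step 1; VA-API/NVENC only ever touch step 2 (and decoding).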

                  The problem with open-source video editors is that they do not use the graphics card for rendering the frames (which includes compositing, effects, and so on). Instead, they do everything on the CPU, because most of these programs were engineered in an era (the 2000s) when the CPU was enough for rendering and graphics cards weren't as flexible.

                  But fast-forward a few years, and graphics cards have become capable of doing many more things, including compute, which made it viable to move the rendering pipeline to hardware. The proprietary software industry took advantage of these features, and so most closed-source editors support hardware acceleration for rendering.
                  Meanwhile, the open-source community got stuck on the idea that "the CPU can do everything", so open-source video editors remain stuck in the software era.

                  There are two big blockers to having proper full-hardware-accelerated rendering on video editors:

                  - MLT. It was not designed with graphics cards in mind from the start, and therefore it underperforms. And the problem is that so many video editors use MLT as a base. Yes, they are catching up now and adding *preliminary* GPU acceleration support with Movit and possibly WebVfx. However, as I said before, it is preliminary, and not all effects are compatible with it. Plus, they made it so incompatible that it requires rewriting your projects to be GPU-compatible. Last time I checked in Kdenlive, I think only 3 effects were hardware-accelerated :<
                  - frei0r. Yes, it's 2020 and several open-source video editors STILL use this ancient piece of crap that will NEVER support hardware acceleration, because it too was designed around the CPU... ugh.

                  OpenShot recently got hardware acceleration for decoding/encoding, which actually MAY help to some degree, but not much.



                  • #10
                    Originally posted by tildearrow View Post
                    Almost everyone in the closed-source scene has implemented hardware rendering by now... and us?:
                    Oh, you mean like Olive, which uses GLSL shaders and is moving to OSL in git master?

