Firefox 71 Landing Wayland DMA-BUF Textures Support

  • #21
    Originally posted by ernstp View Post

    With the move to less patent-encumbered codecs like VP9 and AV1 it's less common that you actually have fixed function hardware that supports decoding your codec.
    Really? Are the patents about the hardware acceleration, not about the library used for decoding?



    • #22
      Originally posted by frank007 View Post

      Really? Are the patents about the hardware acceleration, not about the library used for decoding?
      The patents aren't about the hardware acceleration; it just happens that a lot of H.264 hardware has been built, and hardware is slow to catch up to exciting new open-source codec software.



      • #23
        My only current gripe with FF on Wayland is that the screen becomes corrupted when it starts. The workaround is to switch virtual desktops back and forth, after which it works fine.



        • #24
          Originally posted by UlisesH View Post
          My only current gripe with FF on Wayland is that the screen becomes corrupted when it starts. The workaround is to switch virtual desktops back and forth, after which it works fine.
          I haven't found that to happen on GNOME Wayland.



          • #25
            Originally posted by atomsymbol

            Some notes:
            • Nowadays, 3 CPU cores with AVX2 can decode any video type up to 4K 10-bit HDR 60Hz; the CPU might only have trouble with 8K videos.
            • GPU-assisted decoding is preferable when it results in lower system power consumption than CPU-only decoding, or when the CPU is busy with other tasks in addition to video decoding.
            • Decoded 4K 10-bit HDR 60Hz video requires about 1-2 GiB/s of memory bandwidth. Main memory bandwidth and PCI Express bandwidth are both greater than 2 GiB/s.
            • From a historical perspective, hardware acceleration of video decoding in x86 CPUs started with the Pentium MMX (released in January 1997).
            Power consumption of CPU decoding is still orders of magnitude higher than using a dedicated hardware decoder, which is significant for most modern computing (laptops and mobile devices), while still mostly irrelevant for a desktop system.
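            The 1-2 GiB/s bandwidth figure quoted above can be checked with simple arithmetic. This sketch assumes 4:2:0 chroma subsampling and 10-bit samples padded to 16-bit words (as in common formats like P010); other layouts would change the numbers.

```python
# Back-of-the-envelope memory bandwidth for decoded 4K 10-bit 60Hz video.
# Assumptions: 4:2:0 chroma subsampling (1.5 samples per pixel on average)
# and 2 bytes per sample (10-bit values stored in 16-bit words).

width, height, fps = 3840, 2160, 60
samples_per_pixel = 1.5   # one luma sample + quarter-resolution Cb and Cr
bytes_per_sample = 2      # 10-bit padded to 16 bits

bytes_per_frame = width * height * samples_per_pixel * bytes_per_sample
gib_per_second = bytes_per_frame * fps / 2**30

print(f"{gib_per_second:.2f} GiB/s")  # -> 1.39 GiB/s, within the 1-2 GiB/s estimate
```

            Writing each decoded frame once already lands in that range; any extra copies (e.g. upload to the GPU for display) multiply it, which is part of why zero-copy paths like DMA-BUF matter.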



            • #26
              You can downplay video hardware acceleration until you try to watch a video on a low-powered device. Suddenly you feel like a caveman, running it at 720p or even 480p because the 720p version is 60 fps.



              • #27
                Originally posted by atomsymbol
                GPU-assisted decoding is preferable when it results in smaller system power consumption compared to CPU-only video decoding or when the CPU is busy handling other tasks in addition to video decoding
                What? You say that like the opposite is common.
                AFAIK no CPU, even with AVX2, comes anywhere near the power consumption of the specialized video decoder in a GPU (comparing parts from the same year and vendor).



                • #28
                  Originally posted by starshipeleven View Post

                  Power consumption of CPU decoding is still orders of magnitude higher than using a dedicated hardware decoder, which is significant for most modern computing usage (laptop and mobile devices), while still mostly irrelevant for a desktop system.
                  I agree, but I don't agree at all with the "mostly irrelevant on desktop" part, because I hate it when my CPU starts making more noise while the GPU doesn't, since its dedicated hardware is very efficient at this. And because I watch a lot of movies and YouTube videos, it matters a lot to me.



                  • #29
                    Originally posted by cl333r View Post
                    I agree, but I don't agree at all with the "mostly irrelevant on desktop" part, because I hate it when my CPU starts making more noise while the GPU doesn't, since its dedicated hardware is very efficient at this. And because I watch a lot of movies and YouTube videos, it matters a lot to me.
                    Indeed. I have a USB dongle that captures digital TV (1080i, 60Hz), and it's remarkable how efficient a GPU can be at decoding the video. It barely gets any warmer, while doing it on the CPU is a heavy task on some dual-core machines.



                    • #30
                      Originally posted by ernstp View Post

                      The patents aren't about the hardware acceleration; it just happens that a lot of H.264 hardware has been built, and hardware is slow to catch up to exciting new open-source codec software.
                      What?! These codecs have been around since the Flash plugin was killed, 1000 years ago!
