Firefox 80 To Support VA-API Acceleration On X11


  • #61
    Originally posted by bug77 View Post
    Freeing the CPU from some tasks usually yields a smoother experience. Also, when playing back several streams, without hardware acceleration even a modern CPU will choke. Fast.


    That still works out to 15*1920*1080 or ~31MFLOPS every frame (~16ms). A CPU shouldn't even notice that, especially with SIMD.
    Of course, it's not just the transformation, so I'd accept a 5-10% CPU overhead. Anything on top of that just screams of sloppy programming somewhere in the stack.
    That's assuming all the data is in L1 cache. But I think you are somewhat right - YUV → RGB translation apparently is not the main showstopper. It just adds to it, and doing it on the GPU is more efficient. BTW, this motivated the whole DMABUF implementation in the first place, see https://bugzilla.mozilla.org/show_bug.cgi?id=1580169



    • #62
      Originally posted by pal666 View Post
      you are being silly. process node advancements apply to progress from 8 years old intel cpu to 14nm intel cpu.
      Apparently there's a language barrier here. What I meant is, 7nm CPU is more power efficient than say 40nm CPU (original RPi process node). In a similar way, 7nm GPUs are more power efficient than 40nm GPUs and 7nm DSPs are more power efficient than 40nm DSPs. So the advances in process node technology can benefit all types of video decoding.
      but 8 year old intel cpu wasn't able to play video.
      Irrelevant. I wasn't claiming anything like that. My claim was: if an 8-year-old $25 computer could decode H.264, a modern $1000 computer should easily be able to decode the same video efficiently, thanks to multiple improvements in hardware technology.
      (and btw intel's 14nm is 6 years old)
      Intel's latest desktop arch (Comet Lake) is still at 14nm.
      it is better in many ways, but it is worse in hardware video decode way(especially when hardware video decoding parts of your laptop aren't used)
      Not really - modern notebooks are so powerful you can do everything the original RPi does without any kind of hardware acceleration. The RPi might be able to do some low-level real-time bit banging faster than Intel, but then again its GPIO isn't that fast.
      Last edited by caligula; 07-05-2020, 10:03 AM.



      • #63
        Originally posted by treba View Post

        That's assuming all the data is in L1 cache. But I think you are somewhat right - YUV → RGB translation apparently is not the main showstopper. It just adds to it, and doing it on the GPU is more efficient. BTW, this motivated the whole DMABUF implementation in the first place, see https://bugzilla.mozilla.org/show_bug.cgi?id=1580169
        Yes, of course, specialized hardware exists for a reason. Yet, as others have pointed out, we have much weaker hardware (the RPi) decoding video without issues, while a PC with an order of magnitude faster hardware will choke on a few streams in the absence of hardware decoding.
        Decoding video is far from being my strong point, but it's pretty obvious something's amiss here.
        And don't get me started on Windows, where a fairly powerful laptop cannot output smooth video, no matter the amount of hardware decoding, because of DPC woes...



        • #64
          Originally posted by curfew View Post
          They switched to Chrome because Firefox was shit. Now Firefox is getting better and the same users will switch back to Firefox unless Chrome catches up. Simple.
          everything works in powerpoint. wake me up when firefox regains marketshare in reality



          • #65
            Originally posted by caligula View Post
            Apparently there's a language barrier here. What I meant is, 7nm CPU is more power efficient than say 40nm CPU (original RPi process node).
            what i'm trying to say is that rpi does not do video decoding on cpu. that's why comparing their process nodes is silly. compare process nodes of your cpu and some videocard and think why you can't play modern game without videocard
            Originally posted by caligula View Post
            In a similar way, 7nm GPUs are more power efficient than 40nm GPUs and 7nm DSPs are more power efficient than 40nm DSPs. So the advances in process node technology can benefit all types of video decoding.
            sure. new rpi can have faster decoding than old rpi. new intel cpu can have faster decoding than old intel cpu. and new intel cpu can have worse decoding than old rpi because cpus suck at video decoding
            Originally posted by caligula View Post
            Irrelevant. I wasn't claiming anything like that. My claim was: if an 8-year-old $25 computer could decode H.264, a modern $1000 computer should easily be able to decode the same video efficiently, thanks to multiple improvements in hardware technology.
            my claim is you are imbecile who can't understand that specialized circuits exist exactly because they are faster than modern $1000 computers
            Originally posted by caligula View Post
            Intel's latest desktop arch (Comet Lake) is still at 14nm.
            exactly, intel is selling you 6 year old shit
            Originally posted by caligula View Post
            Not really - modern notebooks are so powerful you can do everything the original RPi does without any kind of hardware acceleration.
            moron, your notebook can't be both faster than rpi at videodecoding and using 70% of cpu for 720p



            • #66
              Originally posted by curfew View Post
              They switched to Chrome because Firefox was shit. Now Firefox is getting better and the same users will switch back to Firefox unless Chrome catches up. Simple.

              Firefox has taken huge steps forward since Fx 75, and personally I can finally feel satisfied with it.
              Still, I'm one of the "same users", and I'm not going to switch back to Firefox, as Vivaldi fulfills all of my needs (and more, once M3 lands, which could be any day now according to Jon).

              So no, not *every* former Firefox user will switch back.



              • #67
                Originally posted by bug77 View Post
                That still works out to 15*1920*1080 or ~31MFLOPS every frame (~16ms). A CPU shouldn't even notice that, especially with SIMD.
                Of course, it's not just the transformation, so I'd accept a 5-10% CPU overhead. Anything on top of that just screams of sloppy programming somewhere in the stack.
                Uhm, no... this is just plain wrong. FLOPS means FLoating-point Operations Per Second, and the burden does not become less just because you look at a smaller time interval (hint: the CPU also only has 16ms worth of processing time per frame).

                Yes, SIMD and other tricks help - but they don't change the fact that video processing is brutally expensive, and dedicated HW processing can be much more efficient than the CPU.



                • #68
                  @pal666
                  Let's hope they don't pull the plug on xorg before Wayland is actually usable.
                  I used Gentoo from 2003 to 2020; I actually switched less than a month ago.

                  I did try some other distros on my system, and I did consider Fedora F32.
                  Fedora F32 was disqualified after less than two hours. Reason: Wayland.

                  * Mouse cursor getting stuck
                  * Mouse cursor disappearing
                  * Crashes
                  * Games could not detect the resolution of my main display (I have a 4K screen surrounded by 3 HD screens), and none of my games would allow 4K resolution.

                  The video card is an RX 5700, and it has worked just fine on Xorg for months.

                  PS: Don't tell me that I can switch back to Xorg on Fedora - I know, but I wanted a distro usable in its default and supported mode.

                  I finally ended up installing Artix (the OpenRC flavor, with Plasma).



                  • #69
                    Originally posted by RavFX View Post
                    @pal666
                    Let's hope they don't pull the plug on xorg before Wayland is actually usable.
                    I used Gentoo from 2003 to 2020; I actually switched less than a month ago.

                    I did try some other distros on my system, and I did consider Fedora F32.
                    Fedora F32 was disqualified after less than two hours. Reason: Wayland.

                    * Mouse cursor getting stuck
                    * Mouse cursor disappearing
                    * Crashes
                    * Games could not detect the resolution of my main display (I have a 4K screen surrounded by 3 HD screens), and none of my games would allow 4K resolution.

                    The video card is an RX 5700, and it has worked just fine on Xorg for months.

                    PS: Don't tell me that I can switch back to Xorg on Fedora - I know, but I wanted a distro usable in its default and supported mode.

                    I finally ended up installing Artix (the OpenRC flavor, with Plasma).
                    To some extent, the plug has already been pulled on xorg quite a long time ago. Ask yourself when the latest release was branched from master.



                    • #70
                      Originally posted by pal666 View Post
                      everything works in powerpoint. wake me up when firefox regains marketshare in reality
                      Do you only use software with the biggest market share? Then switch to Windows 10.
