Firefox 80 To Support VA-API Acceleration On X11


  • #51
    Originally posted by bug77 View Post
    There's no way 9 multiplications and 6 additions need that much extra CPU time.
    Until you multiply it by width, by height, and by fps.



    • #52
      Originally posted by 240Hz View Post
      But xOrG iS abAndoNEd
      It is. This was done by the same Red Hat dev who four months ago said he had no interest in doing it for X11. When Red Hat pulls its last resources off X11, nobody else will remain on it. No amount of KDE user screaming will produce a single X11-related patch.



      • #53
        Originally posted by horizonbrave View Post
        What this brings to the table? Just a bit of power efficiency for laptop users??
        Even some desktops can't play HD video on the CPU (without acceleration); with acceleration they will easily play full HD and more. Then you have power efficiency, the lack of which translates into money for electricity and fan noise, even on desktops.



        • #54
          Originally posted by ezst036 View Post
          To my knowledge(which is very limited here) the UVD is more efficient than than the GPU, and even vastly moreso than the CPU.
          GPUs have UVD or something similar, just like APUs. Maybe by "GPU" you mean the shaders.



          • #55
            Originally posted by remenic View Post
            KDE user here, but not sure what you're talking about.
            He is talking about KDE's broken Wayland support (that's why poor KDE users need this).



            • #56
              Originally posted by horizonbrave View Post
              Sorry I missed the memo and I'm dumb as fuck. What this brings to the table? Just a bit of power efficiency for laptop users??
              Thanks
              Freeing the CPU from some tasks usually yields a smoother experience. Also, when playing back several streams, without hardware acceleration even a modern CPU will choke. Fast.

              Originally posted by Veto View Post

              Well, let's have a look at your assertion: That is 15 floating point operations per pixel you show there. So you need 15*1920*1080*60 = 1 866 240 000 or approximately 2 GFLOPS just to do a simple YUV conversion on your CPU. For 4k video that will be 7½ GFLOPS...

              Of course a real implementation will apply some tricks, but still... There is a reason why specialized hardware is a win when doing video conversions!
              That still works out to 15*1920*1080, or ~31 MFLOPs of work per frame (~16 ms). A CPU shouldn't even notice that, especially with SIMD.
              Of course, it's not just the color transformation, so I'd accept 5-10% CPU overhead. Anything on top of that just screams sloppy programming somewhere in the stack.
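              The per-pixel arithmetic being debated above is easy to make concrete. A minimal NumPy sketch of the conversion (the BT.601 full-range coefficients below are the textbook ones, used purely for illustration): the 3x3 matrix-vector product is exactly the 9 multiplications and 6 additions mentioned in the thread, and NumPy vectorizes it across the whole frame much like SIMD would.

```python
import numpy as np

# BT.601 full-range YUV -> RGB matrix: 9 multiplies + 6 adds per pixel,
# i.e. 15 ops * 1920 * 1080 ~= 31 MFLOPs per 1080p frame,
# or ~1.87 GFLOPS at 60 fps, matching the numbers quoted above.
M = np.array([[1.0,  0.0,       1.402],
              [1.0, -0.344136, -0.714136],
              [1.0,  1.772,     0.0]], dtype=np.float32)

def yuv_to_rgb(yuv):
    """yuv: (H, W, 3) float32 array, with U and V centered on 128."""
    offset = np.array([0.0, 128.0, 128.0], dtype=np.float32)
    rgb = (yuv - offset) @ M.T   # one vectorized matmul over every pixel
    return np.clip(rgb, 0.0, 255.0)

# A mid-gray 1080p frame (Y=128, U=V=128) should convert to mid-gray RGB.
frame = np.full((1080, 1920, 3), 128.0, dtype=np.float32)
rgb = yuv_to_rgb(frame)
```

              Timing this on a desktop CPU is a quick way to sanity-check whether the conversion alone can plausibly account for the CPU usage people report.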



              • #57
                Originally posted by pal666 View Post
                it's the other way around. market share is a result of user choice. i.e. everyone already switched to chrome and improvements in firefox will not affect majority of users
                They switched to Chrome because Firefox was shit. Now Firefox is getting better, and those same users will switch back to Firefox unless Chrome catches up. Simple.

                Firefox has taken huge steps forward since Fx 75, and I finally feel personally satisfied.



                • #58
                  Originally posted by bug77 View Post
                  Freeing the CPU from some tasks usually yields a smoother experience. Also, when playing back several streams, without hardware acceleration even a modern CPU will choke. Fast.


                  That still works out to 15*1920*1080, or ~31 MFLOPs of work per frame (~16 ms). A CPU shouldn't even notice that, especially with SIMD.
                  Of course, it's not just the color transformation, so I'd accept 5-10% CPU overhead. Anything on top of that just screams sloppy programming somewhere in the stack.
                  That's assuming all the data is in L1 cache. But I think you are somewhat right: the YUV→RGB translation apparently is not the main showstopper. It just adds to it, and doing it on the GPU is more efficient. BTW, this motivated the whole DMABUF implementation in the first place, see https://bugzilla.mozilla.org/show_bug.cgi?id=1580169
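                  The cache point can be made concrete with some back-of-the-envelope arithmetic (a sketch; the 4:2:0 planar input and RGBA output formats are assumptions about a typical software path, and L1 sizes vary by CPU):

```python
# Rough per-frame memory traffic for a software YUV -> RGB pass at 1080p60.
# A frame is megabytes, so it streams through L3/DRAM rather than sitting
# in a ~32-64 KB L1 cache.
W, H, FPS = 1920, 1080, 60

read_yuv420 = W * H * 3 // 2   # 4:2:0 planar input: 1.5 bytes per pixel
write_rgba  = W * H * 4        # RGBA output: 4 bytes per pixel
per_frame   = read_yuv420 + write_rgba

print(per_frame)               # 11_404_800 bytes, ~11.4 MB per frame
print(per_frame * FPS)         # 684_288_000 bytes/s, ~684 MB/s sustained
```

                  So even before counting the decode itself, the conversion is bandwidth-bound rather than FLOP-bound, which is consistent with the GPU (and DMABUF zero-copy) being the better place for it.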



                  • #59
                    Originally posted by pal666 View Post
                    you are being silly. process node advancements apply to progress from 8 years old intel cpu to 14nm intel cpu.
                    Apparently there's a language barrier here. What I meant is: a 7 nm CPU is more power efficient than, say, a 40 nm CPU (the original RPi's process node). In a similar way, 7 nm GPUs are more power efficient than 40 nm GPUs, and 7 nm DSPs are more power efficient than 40 nm DSPs. So advances in process node technology can benefit all types of video decoding.
                    but 8 year old intel cpu wasn't able to play video.
                    Irrelevant. I wasn't claiming anything like that. My claim was: if an 8-year-old $25 computer could decode H.264, a modern $1000 computer should easily be able to decode the same video efficiently, thanks to multiple improvements in hardware technology.
                    (and btw intel's 14nm is 6 years old)
                    Intel's latest desktop arch (Comet Lake) is still at 14nm.
                    it is better in many ways, but it is worse in hardware video decode way(especially when hardware video decoding parts of your laptop aren't used)
                    Not really: modern notebooks are so powerful you can do everything the original RPi does without any kind of hardware acceleration. The RPi might be able to do some low-level real-time bit banging faster than Intel, but then again its GPIO isn't that fast.
                    Last edited by caligula; 05 July 2020, 10:03 AM.



                    • #60
                      Originally posted by treba View Post

                      That's assuming all the data is in L1 cache. But I think you are somewhat right: the YUV→RGB translation apparently is not the main showstopper. It just adds to it, and doing it on the GPU is more efficient. BTW, this motivated the whole DMABUF implementation in the first place, see https://bugzilla.mozilla.org/show_bug.cgi?id=1580169
                      Yes, of course, specialized hardware exists for a reason. Yet, as others have pointed out, we have much weaker hardware (the RPi) decoding video without issues, while a PC with hardware an order of magnitude faster will choke on a few streams in the absence of hardware decoding.
                      Decoding video is far from my strong point, but it's pretty obvious something's amiss here.
                      And don't get me started on Windows, where a fairly powerful laptop cannot output smooth video, no matter the amount of hardware decoding, because of DPC woes...

