Firefox 80 To Support VA-API Acceleration On X11


  • #51
    Originally posted by caligula View Post
    With each new CPU generation they advertise how it consumes 50% less power while providing 50% more computational power. So basically the 8 year old 10W devices could do this, now you have 2**8 = 256 times better power efficiency.
    lol, you are taking advertisements too seriously

    • #52
      Originally posted by Vistaus View Post
      Impossibru. According to a lot of Phoronix members, this would be a waste of time and resources and Mozilla wasn't working on this.
      All of that is true: it was a waste of time and resources, and Mozilla wasn't working on it. But then the omnibenevolent Red Hat decided to waste some time and resources and do it instead of Mozilla. Now your joke looks silly, doesn't it?

      • #53
        Originally posted by caligula View Post
        It doesn't matter if you decode using GPGPU cores or DSP, the process node advancements still apply. I'm just saying super low power devices could decode H264 already 8 years ago. Now I have a fairly recent 14nm Intel chipset and the CPU load is around 70% (single core) when playing 720p H264 in Firefox.
        You are being silly. Process node advancements apply to the progress from an 8-year-old Intel CPU to a 14nm Intel CPU. But the 8-year-old Intel CPU wasn't able to play video (and btw, Intel's 14nm is 6 years old), because it does matter whether you use specialized circuits to decode or not.
        Originally posted by caligula View Post
        I'm pretty sure the $1000 laptop is better than first gen RPi in all possible ways. Still the power consumption is much higher when watching Youtube.
        Because you are dead wrong. It is better in many ways, but it is worse at hardware video decoding (especially when the hardware video decoding parts of your laptop aren't being used).
        Last edited by pal666; 07-05-2020, 07:35 AM.

        • #54
          Originally posted by bug77 View Post
          There's no way 9 multiplications and 6 additions need that much extra CPU time.
          Until you multiply it by width, by height, and by fps.
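
          As a rough, back-of-the-envelope illustration of how quickly that multiplication adds up (assuming a naive 3×3 colour-conversion matrix per pixel, i.e. the 9 multiplications and 6 additions mentioned above, and purely illustrative resolution/frame-rate combinations):

```python
# Back-of-the-envelope sketch: a "cheap" 15-op-per-pixel conversion still has
# to run for every pixel of every frame. Resolutions and frame rates below are
# illustrative assumptions, not measurements.

OPS_PER_PIXEL = 9 + 6  # one 3x3 colour-conversion matrix per pixel

def ops_per_second(width, height, fps):
    """Total arithmetic operations per second for a naive per-pixel conversion."""
    return OPS_PER_PIXEL * width * height * fps

for name, (w, h, fps) in {
    "720p30":  (1280, 720, 30),
    "1080p60": (1920, 1080, 60),
    "2160p60": (3840, 2160, 60),
}.items():
    print(f"{name}: {ops_per_second(w, h, fps) / 1e9:.2f} GFLOP/s")
```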

          • #55
            Originally posted by 240Hz View Post
            But xOrG iS abAndoNEd
            It is. The subject of this article was done by the same Red Hat dev who four months ago said he had no interest in doing it for X11. When Red Hat pulls its last resources off X11, nobody else will remain on X11. No amount of KDE user screaming will produce a single X11-related patch.

            • #56
              Originally posted by horizonbrave View Post
              What this brings to the table? Just a bit of power efficiency for laptop users??
              Even some desktops can't play HD video on the CPU alone (without acceleration). With acceleration they will easily play Full HD and more. Then you have power efficiency, the lack of which translates into money for electricity and fan noise, even on desktops.

              • #57
                Originally posted by ezst036 View Post
                To my knowledge (which is very limited here), the UVD is more efficient than the GPU, and even vastly more so than the CPU.
                GPUs have a UVD or something similar, just like APUs. Maybe by GPU you mean the shaders.

                • #58
                  Originally posted by remenic View Post
                  KDE user here, but not sure what you're talking about.
                  He is talking about KDE's broken Wayland support (that's why poor KDE users need the subject of this article).

                  • #59
                    Originally posted by horizonbrave View Post
                    Sorry I missed the memo and I'm dumb as fuck. What this brings to the table? Just a bit of power efficiency for laptop users??
                    Thanks
                    Freeing the CPU from some tasks usually yields a smoother experience. Also, when playing back several streams, without hardware acceleration even a modern CPU will choke. Fast.

                    Originally posted by Veto View Post

                    Well, let's have a look at your assertion: That is 15 floating point operations per pixel you show there. So you need 15*1920*1080*60 = 1 866 240 000 or approximately 2 GFLOPS just to do a simple YUV conversion on your CPU. For 4k video that will be 7½ GFLOPS...

                    Of course a real implementation will apply some tricks, but still... There is a reason why specialized hardware is a win when doing video conversions!
                    That still works out to 15*1920*1080, or ~31 MFLOPs per frame (~16 ms at 60 fps). A CPU shouldn't even notice that, especially with SIMD.
                    Of course, it's not just the transformation, so I'd accept a 5-10% CPU overhead. Anything on top of that just screams of sloppy programming somewhere in the stack.
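
                    As a sanity check on the per-frame arithmetic above, a minimal sketch (the 1080p60 figures come from the posts above; the scalar and SIMD throughput numbers are purely assumed for illustration, not measured):

```python
# Per-frame cost of the naive 15-op-per-pixel conversion at 1080p60,
# compared against the ~16.7 ms frame budget. Throughput figures are
# assumptions for illustration only.

ops_per_frame = 15 * 1920 * 1080   # ~31.1 million operations per frame
frame_budget_s = 1 / 60            # ~16.7 ms per frame at 60 fps

print(f"ops per frame: {ops_per_frame / 1e6:.1f} M")
print(f"frame budget:  {frame_budget_s * 1e3:.1f} ms")

# Hypothetical sustained throughputs, just to show the effect of SIMD:
for label, flops in (("scalar (assumed 4 GFLOP/s)", 4e9),
                     ("SIMD (assumed 32 GFLOP/s)", 32e9)):
    fraction = (ops_per_frame / flops) / frame_budget_s
    print(f"{label}: {fraction:.1%} of the frame budget")
```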

                    • #60
                      Originally posted by pal666 View Post
                      It's the other way around: market share is a result of user choice, i.e. everyone has already switched to Chrome, and improvements in Firefox will not affect the majority of users.
                      They switched to Chrome because Firefox was shit. Now Firefox is getting better and the same users will switch back to Firefox unless Chrome catches up. Simple.

                      Firefox has taken huge steps forward since Fx 75, and I can finally feel personally satisfied.
