Firefox 80 To Support VA-API Acceleration On X11


  • pal666
    replied
    Originally posted by ezst036:
    To my knowledge (which is very limited here) the UVD is more efficient than the GPU, and even vastly more so than the CPU.
    GPUs have a UVD or something similar, just like APUs do. Maybe by "GPU" you mean the shaders?



  • pal666
    replied
    Originally posted by horizonbrave:
    What does this bring to the table? Just a bit of power efficiency for laptop users?
    Some desktops can't even play HD video on the CPU (without acceleration). With acceleration they will easily play Full HD and beyond. Then there is power efficiency, the lack of which translates into money spent on electricity and fan noise, even on desktops.



  • pal666
    replied
    Originally posted by 240Hz:
    But xOrG iS abAndoNEd
    It is. This work was done by the same Red Hat developer who said four months ago that he had no interest in doing it for X11. When Red Hat pulls its last resources off X11, nobody else will remain on X11, and no amount of screaming from KDE users will produce a single X11-related patch.



  • pal666
    replied
    Originally posted by bug77:
    There's no way 9 multiplications and 6 additions need that much extra CPU time.
    Until you multiply them by the width, by the height, and by the frame rate.
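    The scaling argument above can be sketched with a quick back-of-the-envelope calculation: take the 9 multiplications and 6 additions per pixel that bug77 quotes and multiply them out by resolution and frame rate. The resolutions and the 60 fps figure below are illustrative assumptions, not from the thread.

    ```python
    # "9 multiplications + 6 additions" per pixel, multiplied out by
    # width, height, and frames per second.
    OPS_PER_PIXEL = 9 + 6  # one 3x3 matrix-vector product per pixel

    def ops_per_second(width, height, fps):
        """Total per-pixel conversion operations per second."""
        return OPS_PER_PIXEL * width * height * fps

    for name, (w, h) in [("720p",  (1280, 720)),
                         ("1080p", (1920, 1080)),
                         ("2160p", (3840, 2160))]:
        print(f"{name}@60: {ops_per_second(w, h, 60) / 1e9:.2f} GFLOP/s")
    ```

    At 1080p60 this already comes to roughly 1.9 GFLOP/s for the color conversion alone, before any actual decoding work.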



  • pal666
    replied
    Originally posted by caligula:
    It doesn't matter if you decode using GPGPU cores or a DSP, the process node advancements still apply. I'm just saying super-low-power devices could decode H.264 already 8 years ago. Now I have a fairly recent 14nm Intel chipset and the CPU load is around 70% (single core) when playing 720p H.264 in Firefox.
    You are being silly. Process node advancements explain the progress from an 8-year-old Intel CPU to a 14nm Intel CPU, but the 8-year-old Intel CPU wasn't able to play that video (and, by the way, Intel's 14nm process is 6 years old). It matters very much whether you use specialized circuits to decode or not.
    Originally posted by caligula:
    I'm pretty sure the $1000 laptop is better than the first-gen RPi in all possible ways. Still, the power consumption is much higher when watching YouTube.
    Because you are dead wrong. It is better in many ways, but it is worse at hardware video decoding (especially when the hardware video decoding parts of your laptop aren't being used).
    Last edited by pal666; 05 July 2020, 07:35 AM.



  • pal666
    replied
    Originally posted by Vistaus:
    Impossibru. According to a lot of Phoronix members, this would be a waste of time and resources and Mozilla wasn't working on this.
    All of that is true: it was a waste of time and resources, and Mozilla wasn't working on it. But then the omnibenevolent Red Hat decided to waste some time and resources and do it instead of Mozilla. Now your joke looks silly, doesn't it?



  • pal666
    replied
    Originally posted by caligula:
    With each new CPU generation they advertise how it consumes 50% less power while providing 50% more computational power. So basically the 8-year-old 10W devices could do this; now you have 2**8 = 256 times better power efficiency.
    You are taking the advertisements far too seriously.



  • pal666
    replied
    Originally posted by marco-c:
    As a user, what do you care about the market share? Why would you choose a browser based on its market share?
    It's the other way around: market share is the result of user choice. Most users have already switched to Chrome, so improvements in Firefox will not reach the majority of them.



  • pal666
    replied
    Originally posted by birdie:
    Adobe Flash player
    ...is not a browser. You could also list addons that download the video and run mplayer.
    Originally posted by birdie:
    then everyone rushed to kill off Flash
    Everyone including Adobe itself. Obviously they were all wrong, because they made birdie unhappy.



  • Veto
    replied
    Originally posted by bug77:
    There's no way 9 multiplications and 6 additions need that much extra CPU time. Though back in the day, I remember Adobe complaining about YUV as well...
    Well, let's have a look at your assertion: that is 15 floating-point operations per pixel. So you need 15 × 1920 × 1080 × 60 = 1,866,240,000, or approximately 2 GFLOPS, just to do a simple YUV conversion on your CPU at 1080p60. For 4K video that rises to about 7.5 GFLOPS...

    Of course a real implementation will apply some tricks, but still... There is a reason why specialized hardware is a win when doing video conversions!
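    The per-pixel cost being counted here can be sketched as a naive YUV-to-RGB conversion written out as a full 3x3 matrix-vector product, which is exactly where the 9 multiplications and 6 additions come from. The coefficients below are assumed BT.601 limited-range values for illustration; a real decoder selects the matrix per stream.

    ```python
    # Naive per-pixel YUV -> RGB as a full 3x3 matrix-vector product.
    # Coefficients: assumed BT.601 limited-range (illustrative only).
    M = [
        [1.164,  0.000,  1.596],   # R
        [1.164, -0.392, -0.813],   # G
        [1.164,  2.017,  0.000],   # B
    ]

    def yuv_to_rgb(y, u, v):
        yp, up, vp = y - 16, u - 128, v - 128  # remove the fixed offsets
        # 3 multiplications + 2 additions per channel, 3 channels total
        return tuple(r[0] * yp + r[1] * up + r[2] * vp for r in M)

    # Limited-range black (Y=16) and white (Y=235):
    print(yuv_to_rgb(16, 128, 128))    # ~ (0, 0, 0)
    print(yuv_to_rgb(235, 128, 128))   # ~ (255, 255, 255)
    ```

    Running this per pixel in scalar code is of course far slower than the raw GFLOPS estimate suggests; the point is only that the operation count repeats for every pixel of every frame, which is exactly what dedicated conversion hardware avoids.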
    Last edited by Veto; 05 July 2020, 04:00 AM. Reason: Added 4k

