
Thread: Video Acceleration Takes The Backseat On Chrome For Linux

  1. #21
    Join Date
    Mar 2010
    Posts
    7

    Default Google philosophy

    Google has a philosophy of making everything idiot-proof. See https://code.google.com/p/chromium/i...tail?id=331072 They won't add local printing to Chrome OS, even as an advanced feature for highly qualified administrators. This looks a lot like more of the same.

    I'd like to see Google reward manufacturers who ship validated implementations of open standards, even if it means users of some hardware are left disappointed.

  2. #22
    Join Date
    Jan 2009
    Posts
    1,303

    Default

    Quote Originally Posted by cynical View Post
    I think the main issue is not video acceleration by itself, since that can be handed off to GStreamer, as Firefox does on Linux systems. The problem is more that Chrome also uses GPU acceleration for its UI, and they can't guarantee it will work bug-free (imho), primarily due to FGLRX. I can't wait for the radeon driver to kill it so we can finally get decent third-party support on Linux. If Nouveau could catch up and Intel would drop VAAPI, we would be in paradise.

  3. #23
    Join Date
    Apr 2009
    Posts
    6

    Default

    Quote Originally Posted by robclark View Post
    Well, it's not entirely unprecedented for an operating system to evolve new APIs, technologies, infrastructures, etc., as hardware evolves. Windows has done it. Mac OS X has done it. Unless you have a crystal ball, you aren't really going to know enough about 5 or 10 years from now (let alone 30 years), what hardware will look like and what crazy peripherals will exist, to design something that lasts forever. Some people call it progress. Others complain on forums ;-)



    GEM is an infrastructure for sharing buffers. Intel/radeon/nouveau drivers all use it.

    EXA/UXA/SNA, on the other hand, are acceleration APIs within the X server. In fact, UXA and SNA are private to intel's ddx driver.
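    For what it's worth, which of those acceleration paths the ddx uses can usually be selected in xorg.conf. A minimal sketch for the intel driver (the set of valid AccelMethod values depends on the driver version, and the Identifier string here is arbitrary):

    ```
    Section "Device"
        Identifier "card0"
        Driver     "intel"
        Option     "AccelMethod" "sna"   # or "uxa"; radeon's ddx takes "exa"/"glamor"
    EndSection
    ```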

    I expect Windows and Mac OS X have their fair share of three-letter acronyms too. If you put them all in one big pile, it will look like a mess as well. :-)



    I suppose the difference compared to proprietary operating systems is that you see the entire development phase, from conception through adoption or abandonment. Well, this is just speculation, since I don't know the internals of Microsoft/Apple, but I would assume they don't announce a new project until it is reasonably far along in the development process.

    Anyways, the "problem" with wayland is really just that x11 has been approximately good enough to live with, and so more focus was placed on getting things right and on considering lots of different use cases (there *is* more than just desktop these days), rather than on getting it out the door yesterday. In the long run, that's a good thing. You have to live with APIs for a much longer time than you spend designing them.



    I don't think it is quite as bad as it sounds. For the open-source drivers, there are only two: intel with VAAPI, and radeon+nouveau with gallium (with the user-visible APIs, i.e. XvMC/VDPAU/OpenMAX, provided as gallium state trackers). You are not allowed to count nvidia/amd/others' closed-source implementations if making comparisons between proprietary vs free.

    It's good to have an evolution, and in an ideal world all drivers would be open source, so you could count on most of them to have adopted the latest patch made for Xorg, or even the latest technology that replaces it. Unfortunately that's not the case, and when you're a company making software that should run on Linux, you have to take into account the free drivers and the proprietary ones, all the standards they do or don't support, and all the bugs they have when supporting those standards. It's not an easy task. On the proprietary side (Windows and OS X) there is a unified system: everything uses the latest technology, or at least the old one during the transition period (which is usually much shorter than on Linux), so you basically don't have to fight with 20 implementations of the same thing, each one made by a different graphics card vendor.

  4. #24
    Join Date
    May 2011
    Posts
    1,441

    Default

    That's just a very bad excuse.

    Has anybody looked at the Android code where they handle video acceleration? They've pretty much worn out the if-else statements for the ten gazillion SoCs they have to support on that platform.

  5. #25
    Join Date
    Sep 2011
    Posts
    218

    Default

    Quote Originally Posted by johnc View Post
    That's just a very bad excuse.

    Has anybody looked at the Android code where they handle video acceleration? They've pretty much worn out the if-else statements for the ten gazillion SoCs they have to support on that platform.
    So, Android standardizes on OpenMAX for video accel (encode and decode). Going back a few years, it *used* to be that vendors would just hack up the stagefright layer to make it work with the quirks of their hardware / OpenMAX implementation. I guess the rough equivalent in this scenario would be if nvidia/amd/etc. had their own customized version of Chromium.

    I've been further away from Android development for the last few years, so I could be wrong, but my understanding is that Google was trying to get away from this and limit vendors' ability to hack up the middleware, although I'm sure there is still plenty of 'if (vendor == FOO) ..' type stuff in stagefright. (Again, equivalent to having to put 'if (vendor == NVIDIA) ..' workarounds in Chromium.)
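    That 'if (vendor == FOO)' pattern is easy to sketch. The vendor names and quirk flags below are hypothetical, purely to illustrate the kind of quirk table such middleware ends up carrying; this is not actual stagefright or Chromium code:

    ```c
    /* Illustrative vendor-quirk dispatch: map a vendor name to a set of
     * workaround flags, the way video middleware often has to. */
    #include <assert.h>
    #include <string.h>

    enum vendor { VENDOR_FOO, VENDOR_BAR, VENDOR_UNKNOWN };

    struct quirks {
        int needs_padded_buffers; /* decoder wants over-sized output buffers */
        int broken_flush;         /* flush must be emulated with a full reset */
    };

    static enum vendor vendor_from_name(const char *name)
    {
        if (strcmp(name, "foo") == 0) return VENDOR_FOO;
        if (strcmp(name, "bar") == 0) return VENDOR_BAR;
        return VENDOR_UNKNOWN;
    }

    static struct quirks quirks_for(enum vendor v)
    {
        struct quirks q = {0, 0};
        if (v == VENDOR_FOO)
            q.needs_padded_buffers = 1; /* workaround for FOO's decoder */
        if (v == VENDOR_BAR)
            q.broken_flush = 1;         /* workaround for BAR's flush bug */
        return q;
    }

    int main(void)
    {
        assert(quirks_for(vendor_from_name("foo")).needs_padded_buffers == 1);
        assert(quirks_for(vendor_from_name("bar")).broken_flush == 1);
        assert(vendor_from_name("baz") == VENDOR_UNKNOWN);
        return 0;
    }
    ```

    Each new SoC or GPU just adds another row to the table, which is exactly how the if-else sprawl mentioned above accumulates.
    
    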

    But yeah, it seems like a pretty poor excuse to me too. Android has more vendors, each with their own quirks to work around, than there are GPU vendors in the desktop space.

  6. #26
    Join Date
    Apr 2014
    Posts
    70

    Default

    Quote Originally Posted by robclark View Post
    So, Android standardizes on OpenMAX for video accel (encode and decode). Going back a few years, it *used* to be that vendors would just hack up the stagefright layer to make it work with the quirks of their hardware / OpenMAX implementation. I guess the rough equivalent in this scenario would be if nvidia/amd/etc. had their own customized version of Chromium.

    I've been further away from Android development for the last few years, so I could be wrong, but my understanding is that Google was trying to get away from this and limit vendors' ability to hack up the middleware, although I'm sure there is still plenty of 'if (vendor == FOO) ..' type stuff in stagefright. (Again, equivalent to having to put 'if (vendor == NVIDIA) ..' workarounds in Chromium.)

    But yeah, it seems like a pretty poor excuse to me too. Android has more vendors, each with their own quirks to work around, than there are GPU vendors in the desktop space.
    I think this is a great article: https://dolphin-emu.org/blog/2013/09...all-fameshame/ It explains a lot. What I find most interesting is that Adreno is (a) the most popular GPU and (b) has some of the shittiest drivers. They should fix that.

    On Kubuntu 13.10 it looks like all acceleration is enabled in my Chrome (version 34.0.1847.116) running on an nVidia 650 Ti Boost (great card for the money). VDPAU seems to be handling YouTube decoding, as the CPU shows hardly any usage difference when viewing a full-screen HD video. I did go to chrome://flags and turn on all the acceleration options (from "default", which tells me nothing, grrr). I don't see a difference in performance though; everything is still plain fast. Perhaps certain nVidia cards have been white-listed after all?


    Cheers!

  7. #27

    Default

    Right-click on the video, choose "Stats for nerds", and there it says whether hardware decoding is working or not.
    It should read something like:
    Code:
    accelerated video rendering, hardware video decoding
    VP8/VP9 (HTML5) videos are NOT hardware-decoded, btw (no hardware decoder for them exists); only the H.264 streams, which come only via the Flash player, are.

    At least with the Adobe version of Flash, VDPAU works even with the open-source radeon drivers IF it's enabled in /etc/adobe/mms.cfg, but it breaks the plugin on many other pages (crashes). Back when I had an nvidia 8200 it was exactly the same behavior, btw.
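    If I remember right, the mms.cfg switch in question is EnableLinuxHWVideoDecode, and OverrideGPUValidation may also be needed when the GPU isn't on Adobe's whitelist. Treat this as a sketch from memory rather than gospel:

    ```
    # /etc/adobe/mms.cfg
    EnableLinuxHWVideoDecode=1
    OverrideGPUValidation=true
    ```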

    BTW, it seems that the Adobe plugin handles hardware-accelerated playback just fine; only Google's version chokes (as in "uses 200-300% CPU and it's laggy").
    But 3D acceleration with WebGL is crap too. It uses insane amounts of CPU and lags like hell.

    So, in that link they say mesa is OK for them. Why does their own plugin + WebGL work like crap with it, then (latest git kernel/mesa/xf86-ati on a Radeon HD 8570D)?
    Once more: the same WebGL demos work just perfectly in Firefox/Seamonkey, with very low CPU usage, on the same hardware/drivers.
