Mozilla's Servo Is Whooping The Other Browsers In Performance


  • #71
    Tried playing with Rust for some simple projects. I ended up spending 90% of the time working around lifetimes, pointers, ownership, and deciphering error messages, and only 10% on the actual tasks and algorithms.
    The promises are bold, but the language is way too complicated. The syntax is quite unreadable as well.

    Comment


    • #72
      Originally posted by pal666 View Post
      If you can enforce thread safety at compile time, the code in question is sequential.
      A benchmark where C++ is slower than C is laughable.
      This is not really the issue. The difficulties in "the standard programming model" are not so much concurrency as mutable shared memory. The problem is solved not by ditching concurrency but by dramatically controlling mutable shared memory.
      But to do this, of course, requires a language that can't randomly write anything anywhere (so it doesn't have to assume by default that memory is shared), that "tags" memory mutability (so it doesn't have to assume by default that memory is mutable), and that provides communication primitives (e.g. some sort of message passing) that are so easy to use that they will be used instead of relying on mutable shared state.

      If you imagine Rust or Swift or C# as just a kind of funny C++ that has taken away all your toys (i.e. your ability to create, modify, cast, and read from/write to random pointers), you're missing the point: all of them are (more or less) trying to deal with the above issue while not getting in your way. The further you go toward functional languages, of course, the more explicit this point becomes. But most people seem to find it hard to think in functional-language terms, so Rust, Swift and C# are each (as far as I can tell) trying to let you think in a C/C++-like way, while still more or less enforcing memory safety and encouraging your threads to manipulate each other through means other than shared mutable state. (Admittedly, for Swift this is my guess, because the Swift concurrency model has not yet been released.)
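
      To make the message-passing point concrete, here is a minimal Rust sketch (illustrative names only, nothing taken from Servo): two threads communicate over a standard-library channel instead of sharing mutable memory, and the compiler enforces that the sending half is moved into the worker thread.

      Code:
      use std::sync::mpsc;
      use std::thread;

      fn main() {
          let (tx, rx) = mpsc::channel();

          let worker = thread::spawn(move || {
              // `tx` was moved into this closure; the spawning thread can
              // no longer touch it, so there is no shared state to race on.
              for i in 0..5 {
                  tx.send(i * i).expect("receiver hung up");
              }
              // `tx` is dropped here, which closes the channel.
          });

          // Iterating the receiver ends cleanly once the channel closes.
          for value in rx {
              println!("got {}", value);
          }
          worker.join().unwrap();
      }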

      Comment


      • #73
        Originally posted by atomsymbol

        I don't know the answer to that, because I haven't thought deeply about Rust compilation yet.
        I was primarily interested in how this thread views type systems, because there seems to be a perception that Rust's resource-safety properties will incur overhead or somehow make the average program slower. That won't be the case, because safety is a property that emerges from the type system, and types are erased (I believe some reflection exists, but its existence incurs no runtime overhead). I would actually expect faster programs on average, since strong aliasing constraints mean you can apply interesting optimizations, such as polyhedral loop optimization, more aggressively. Maybe this is obvious to everyone, and what's really being argued is whether the constraints imposed by the type system force some wide or interesting class of algorithms/problems to be expressed in an inefficient way (e.g. inductive graphs in Haskell); I don't know. If that were the case, then the code can be written with unsafe and a safe API exposed on top (like graphs in Haskell using ST). However, I don't think this is the case, since I've seen mentions of unique_ptr and shared_ptr.
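
        To illustrate the "write it with unsafe, expose a safe API" idea, here is a sketch modeled on the standard library's slice::split_at_mut (the function name below is made up): the raw-pointer work is confined to one audited unsafe block, while callers only ever see two non-overlapping &mut slices, so the usual borrowing guarantees still hold at the API boundary and cost nothing at runtime.

        Code:
        // Hand two non-overlapping mutable views of one slice to the caller.
        fn split_two<T>(s: &mut [T], mid: usize) -> (&mut [T], &mut [T]) {
            assert!(mid <= s.len());
            let ptr = s.as_mut_ptr();
            let len = s.len();
            // SAFETY: the two halves never overlap, so handing out two
            // independent &mut slices upholds the usual aliasing rules.
            unsafe {
                (
                    std::slice::from_raw_parts_mut(ptr, mid),
                    std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
                )
            }
        }

        fn main() {
            let mut data = [1, 2, 3, 4, 5];
            let (left, right) = split_two(&mut data, 2);
            left[0] = 10;   // both halves can be mutated at the same time,
            right[0] = 30;  // because the compiler knows they do not alias
            println!("{:?}", data);
        }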

        Unrelated to this conversation, I agree that it's unlikely Servo is fast purely because of Rust. However, I would expect it to have fewer resource-related bugs thanks to Rust, which, from my own reading, appears to literally have been one of Girard's motivations for developing linear logic.

        Comment


        • #74
          Originally posted by pal666 View Post
          Well, I listed some of them. C++ exists; Rust is a research project. And all of this is only relevant if we consider Rust a "better C++ without the downsides" language-design-wise, which it is not. Nobody is going to rewrite all existing code in Rust. With time, C++ will get all the good features of Rust, or of any other language, that do not compromise C++'s design goals.
          Rust being a research project and the widespread use of C++ are arguably cultural problems. I don't mean to discredit them, and I think the cultural aspect is important in the wider discussion of whether Rust might be successful; however, I don't think they can count against Rust as a technology. Fortunately, at least in the case of C, the FFI is reasonably comfortable and automatic.
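
          As a small illustration of the "comfortable FFI" remark (a sketch only; for larger headers a generator such as bindgen would normally be used), a C function from libc can be declared and called directly, with the unsafe block marking the one spot where the compiler cannot check the foreign contract:

          Code:
          use std::ffi::CString;
          use std::os::raw::c_char;

          extern "C" {
              // Provided by the C library that Rust programs on Linux link anyway.
              fn strlen(s: *const c_char) -> usize;
          }

          fn main() {
              let msg = CString::new("hello from Rust").unwrap();
              // Calling across the FFI boundary is `unsafe` because the compiler
              // cannot verify what the C function does with the pointer.
              let n = unsafe { strlen(msg.as_ptr()) };
              println!("C's strlen counted {} bytes", n);
          }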

          I'm curious why you don't believe Rust to be an improvement (technologically) on C++ (not necessarily that it is a "better C++", but rather a superior choice for low-level development)? The C++ spec is not good (too complicated and ambiguous, which is why different compilers can give different results), and it inherits many of C's peculiarities, such as integer handling, which result in many bugs. These aren't the sorts of problems that can easily be fixed by a version bump. For instance, if you want to extend the type system to provide interesting safety guarantees, you need to work around the existing system, which likely means your extension won't be possible. Even very natural, well-fleshed-out proposals such as concepts (similar to traits, and still not a part of C++) can fail due to strange interactions with other features.

          As an aside, some static analyzers for C++ use a type-based approach to attack problems. The problem is that there are often cases where there isn't enough information to infer the type of something required, so the tool/checker falls over. This is why typing is most useful when exposed to the programmer, so they can help fill in the details of a proof required by the checker that can't be constructed automatically.

          EDIT: I'd mentioned previously that I'm not moving away from C++ just yet, and thought I should note it's because Rust effectively doesn't support libraries, which I find completely crazy.
          Last edited by codensity; 17 March 2016, 06:41 AM.

          Comment


          • #75
            Originally posted by unixfan2001 View Post
            Well. That's a very interesting concept. SystemCrasher, once again, belittling other posters. Are you somehow related to pal666, or is that one of your sock puppets?
            You're bad at conspiracy theories. What if I've taken over the whole of Phoronix and unixfan2001 is my sockpuppet too? Wouldn't it be logical to have the sockpuppet disagree with you all the time, to pretend it wasn't a sockpuppet at all?

            Certain low-level thread interactions can get more difficult to write using that particular API. Yes.
            OpenMP is quite simple, and if you're going multithreaded, you'd better inform yourself about a few things anyway.

            Not to mention that a language which implements concurrency in its core library is preferable over one that has to use third party solutions.
            On the other hand, it limits the areas where it is going to be applicable. Keeping that part optional lets the language stay small, simple and very predictable when you need it to be.

            What do you mean by that? In what context can't they use Linux containers?
            Chromium (and Chrome) has been using clone() with the appropriate flags for a while. It is a two-fold approach.
            1) Chrome's browser architecture lends itself to being split into separate, logically isolated areas where everything is forbidden except a few well-defined inter-process communications. This makes it quite sturdy: the failure of one part does not bring the other parts down, and a break-in only compromises a boring container that is mostly useless to the attacker.
            2) Chromium has actually implemented a small launcher that brings the browser up on Linux. It starts the browser parts and splits them into containers by calling clone() as appropriate, locking the various parts down. No user action is required - the browser knows how to split and lock itself down on its own. A very good, technically sound property.

            Mozilla is nowhere close to this so far. Chrome/Chromium have been doing it on Linux for something like five years or more. Mozilla does ... some weird crap and third-rate marketing BS instead. Sorry, but I do not consider Mozilla to be on par at this point when it comes to the technology used.

            They can and do, actually. Particularly since Youtube no longer requires Flash.
            Wrong: it still tears, stutters and shows excessive CPU usage, even in HTML5 mode. When it comes to Linux support, it can hardly be called great. Sure, Linux has its shortcomings, and the Xorg-related cruft is more complicated than app developers would like it to be; Wayland would fix that, etc. But I still think Mozilla could do a bit better - say, instead of wasting resources on the useless Firefox OS stuff.

            I give you that the default prefs are, for the most part, horrible. A lot of things could be improved without even changing a single line of compilable code, simply by choosing better prefs.
            There are still numerous issues. Yet Mozilla preferred to waste resources on worthless stuff like Firefox OS while their browser sucks in very basic use cases and has a very poor security record yet no container support; on top of that they want to kill the customization that was the whole point, and lock down the ecosystem to a degree far worse than Google. Why is anyone supposed to use apps like this at all?

            1. What keeps you from doing the same in Firefox?
            The Firefox browser core was not designed with split, isolated processes + IPC in mind. Doing that myself would be an extremely daunting task. Mozilla Corp had the chance to do it with their resources, but they preferred to waste those resources on Firefox OS and suchlike. The result? A half-finished browser, a third-rate OS, a borked engine and an unpopular language. What a pile of smouldering wreckage.

            2. LXC doesn't protect you from the two main issues.
            a) People can steal your online data/identity.
            b) X11 is still one huge attack vector (which is why you'd need to run it inside xpra or Wayland to be truly secure).
            Speaking for myself,
            1) I only keep online what is supposed to be public; there is little point in "stealing" that. I do not store passwords, the browser is kept quite sclerotic, and there is very little to steal unless you can bypass the containers, which is an entirely different story.
            2) I would agree about X11, it is nasty. Yet it is still nice to lock down unneeded syscalls, revoke access to unnecessary files, ensure the browser can't write anywhere but a few well-defined locations, and so on. Chrome is much better in this regard to date. Not perfect, but better than nothing.

            Sure, one can use "external" means to lock Firefox and suchlike into a container or VM, BUT it still takes extra effort, it is not Mozilla's achievement, and it would be somewhat less effective due to the lack of separation between browser parts: an attacker could grab the data of all the web sites it does not belong to. It could be better than that.

            Epic compared to what? The Chrome zero-day exploit in late 2015? What's your point? Do you only trust "flawless" companies/individuals?
            That was one of the few major exploits for Firefox or Chrome ever spotted in the wild at full swing. It led to the theft of sensitive private data like SSH keys and passwords, thereby opening the way for future attacks while the attack vector could remain unknown. Virtually all Mozilla users were at risk: attack scripts ran on many major sites for days or weeks, downloading everything of interest users had to shady C&C servers. Since Firefox runs unconfined, the scripts were able to access everything the user could access, which is very nasty - people have to assume all their keys, documents and passwords were stolen. If someone stores credit card data for online purchases and gives the files obvious names, they can expect their credit card to be compromised as well. The scripts had a long, fancy list of things they were interested in. Once everyone has forgotten about it, the shady people will return and strike back, looting money, gaining further access to remote systems and exploiting everything they can. That's how the Internet works.

            What makes you think Youtube won't work well, when everything else is already properly GPU accelerated?
            I have yet to see anything from Mozilla that is PROPERLY accelerated. Somehow, so far, VLC or mplayer are MUCH better when it comes to CPU usage or getting rid of tearing. Strange, isn't it? And, erm, I'm not really sure what exactly has prevented Mozilla from using GPU acceleration all this while - they have been doing WebGL for several years, etc. Furthermore, GPU acceleration is a broad term. It can, e.g., mean VDPAU or VA-API output and decode acceleration, which is yet to be seen. VDPAU or VA-API video output performs much better if the hardware has a separate plane for video, and hardware decoding relieves the CPU if the decoder understands the current video format. Not a big deal at Internet bitrates, but still, it would be nice if the browser used the hardware capabilities you've got, no? Not to mention tear-free playback, etc.
            Last edited by SystemCrasher; 17 March 2016, 10:44 AM.

            Comment


            • #76
              Originally posted by SystemCrasher View Post
              You're bad at conspiracy theories. What if I've taken over the whole of Phoronix and unixfan2001 is my sockpuppet too? Wouldn't it be logical to have the sockpuppet disagree with you all the time, to pretend it wasn't a sockpuppet at all?
              It wouldn't be a conspiracy theory (of which I'm hardly a fan anyway), since you and pal666 have a very similar style (starting every reply with a barrage of insults).

              On the other hand, it limits the areas where it is going to be applicable. Keeping that part optional lets the language stay small, simple and very predictable when you need it to be.
              I beg to differ. This is very similar to the sync/async debate. Following a particular style is more predictable than urging the user to stack a bunch of third party libraries on top.

              Chromium (and Chrome) has been using clone() with the appropriate flags for a while. It is a two-fold approach.
              1) Chrome's browser architecture lends itself to being split into separate, logically isolated areas where everything is forbidden except a few well-defined inter-process communications. This makes it quite sturdy: the failure of one part does not bring the other parts down, and a break-in only compromises a boring container that is mostly useless to the attacker.
              2) Chromium has actually implemented a small launcher that brings the browser up on Linux. It starts the browser parts and splits them into containers by calling clone() as appropriate, locking the various parts down. No user action is required - the browser knows how to split and lock itself down on its own. A very good, technically sound property.

              Mozilla is nowhere close to this so far. Chrome/Chromium have been doing it on Linux for something like five years or more. Mozilla does ... some weird crap and third-rate marketing BS instead. Sorry, but I do not consider Mozilla to be on par at this point when it comes to the technology used.
              Not sure what this has to do with Linux containers.

              http://www.tutorialspoint.com/unix_s...alls/clone.htm

              clone() is a performance feature, not necessarily a security feature.

              Wrong: it still tears, stutters and shows excessive CPU usage, even in HTML5 mode. When it comes to Linux support, it can hardly be called great. Sure, Linux has its shortcomings, and the Xorg-related cruft is more complicated than app developers would like it to be; Wayland would fix that, etc. But I still think Mozilla could do a bit better - say, instead of wasting resources on the useless Firefox OS stuff.
              Doesn't tear or stutter for me. Not with one video running in fullscreen anyways.

              There are still numerous issues. Yet Mozilla preferred to waste resources on worthless stuff like Firefox OS while their browser sucks in very basic use cases and has a very poor security record yet no container support; on top of that they want to kill the customization that was the whole point, and lock down the ecosystem to a degree far worse than Google. Why is anyone supposed to use apps like this at all?
              They aren't killing customisation. They're killing XUL/XPCOM. Something I definitely don't agree with either. But customisation itself is pretty much untouched and easier than before.

              The Firefox browser core was not designed with split, isolated processes + IPC in mind. Doing that myself would be an extremely daunting task. Mozilla Corp had the chance to do it with their resources, but they preferred to waste those resources on Firefox OS and suchlike. The result? A half-finished browser, a third-rate OS, a borked engine and an unpopular language. What a pile of smouldering wreckage.
              Firefox OS wasn't a waste of resources. They learned quite a lot from that experience, which Servo is now building on.

              Speaking for myself,
              1) I only keep online what is supposed to be public; there is little point in "stealing" that. I do not store passwords, the browser is kept quite sclerotic, and there is very little to steal unless you can bypass the containers, which is an entirely different story.
              2) I would agree about X11, it is nasty. Yet it is still nice to lock down unneeded syscalls, revoke access to unnecessary files, ensure the browser can't write anywhere but a few well-defined locations, and so on. Chrome is much better in this regard to date. Not perfect, but better than nothing.
              Actually, Chrome is much worse as far as security and privacy are concerned. Not sure what "special" version of Google Chrome you're running there. Definitely sounds like magic.


              That was one of the few major exploits for Firefox or Chrome ever spotted in the wild at full swing. It led to the theft of sensitive private data like SSH keys and passwords, thereby opening the way for future attacks while the attack vector could remain unknown. Virtually all Mozilla users were at risk: attack scripts ran on many major sites for days or weeks, downloading everything of interest users had to shady C&C servers. Since Firefox runs unconfined, the scripts were able to access everything the user could access, which is very nasty - people have to assume all their keys, documents and passwords were stolen. If someone stores credit card data for online purchases and gives the files obvious names, they can expect their credit card to be compromised as well. The scripts had a long, fancy list of things they were interested in. Once everyone has forgotten about it, the shady people will return and strike back, looting money, gaining further access to remote systems and exploiting everything they can. That's how the Internet works.
              Such is the nature of a zero-day exploit. But do tell me how Chrome's was supposedly less severe than Firefox's.

              I have yet to see anything from Mozilla that is PROPERLY accelerated. Somehow, so far, VLC or mplayer are MUCH better when it comes to CPU usage or getting rid of tearing. Strange, isn't it? And, erm, I'm not really sure what exactly has prevented Mozilla from using GPU acceleration all this while - they have been doing WebGL for several years, etc. Furthermore, GPU acceleration is a broad term. It can, e.g., mean VDPAU or VA-API output and decode acceleration, which is yet to be seen. VDPAU or VA-API video output performs much better if the hardware has a separate plane for video, and hardware decoding relieves the CPU if the decoder understands the current video format. Not a big deal at Internet bitrates, but still, it would be nice if the browser used the hardware capabilities you've got, no? Not to mention tear-free playback, etc.
              Case in point: Firefox has had GPU acceleration for a long while. It's just that Mozilla's leadership is rather orthodox when it comes to enabling it.
              Even the WebGL blacklists are still a problem with Firefox.

              I'll give you that. Mozilla's leadership needs to improve, and they need to stop declining patches that run on one platform but not another (it took something like four years before they'd give us transparent, non-rectangular chrome windows, simply because they weren't easily supported by the same codepath on Windows).

              Comment


              • #77
                Originally posted by codensity View Post
                I'm curious why you don't believe Rust to be an improvement (technologically) on C++ (not necessarily that it is a "better C++", but rather a superior choice for low-level development)?
                How can it be a superior choice without being better?
                If it is better in some arbitrary aspect but worse in others, it is not a superior choice; it is a choice for the fringe.
                Originally posted by codensity View Post
                The C++ spec is not good (too complicated and ambiguous, which is why different compilers can give different results)
                We have no Rust spec at all. How can you compare a single in-development compiler without a spec to a real ISO standard with many mature implementations?
                Originally posted by codensity View Post
                and it inherits many of C's peculiarities, such as integer handling, which result in many bugs. These aren't the sorts of problems that can easily be fixed by a version bump.
                It inherits C's problems not for fun, but because without backwards compatibility you can only get a toy language like Rust.
                Originally posted by codensity View Post
                As an aside, some static analyzers for C++ use a type based approach to attack problems. The problem is that there are often cases where there's not enough information to infer the type of something required so the tool/checker falls over.
                If you are talking about static types, then that's the tool's fault. The compiler knows the type; the tool could just ask the compiler.

                Comment


                • #78
                  Originally posted by name99 View Post

                  This is not really the issue. The difficulties in "the standard programming model" are not so much concurrency as mutable shared memory. The problem is solved not by ditching concurrency but by dramatically controlling mutable shared memory.
                  But to do this, of course, requires a language that can't randomly write anything anywhere (so it doesn't have to assume by default that memory is shared), that "tags" memory mutability (so it doesn't have to assume by default that memory is mutable), and that provides communication primitives (e.g. some sort of message passing) that are so easy to use that they will be used instead of relying on mutable shared state.

                  If you imagine Rust or Swift or C# as just a kind of funny C++ that has taken away all your toys (i.e. your ability to create, modify, cast, and read from/write to random pointers), you're missing the point: all of them are (more or less) trying to deal with the above issue while not getting in your way. The further you go toward functional languages, of course, the more explicit this point becomes. But most people seem to find it hard to think in functional-language terms, so Rust, Swift and C# are each (as far as I can tell) trying to let you think in a C/C++-like way, while still more or less enforcing memory safety and encouraging your threads to manipulate each other through means other than shared mutable state. (Admittedly, for Swift this is my guess, because the Swift concurrency model has not yet been released.)
                  1) C++ is a perfectly functional language, so I don't get this 'C++ vs. functional'.
                  2) What is all this 'random writes' bullshit? You have problems even without pointer arithmetic, just with plain assignments to member variables. In the end you can either pass objects via messages by value and suffer copying costs, which will be huge in real programs, or pass them by reference, and then you have your mutable shared state. C++ allows you to do both.
                  Last edited by pal666; 20 March 2016, 12:35 PM.
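
                  For what it's worth, Rust's answer to this "copy it or share it" dilemma is a third option: moving ownership. A minimal sketch (sizes and names are illustrative): a large buffer is sent through a channel by value, yet only its small header is copied, and the compiler forbids the sender from touching it afterwards, so there is neither a deep copy nor shared mutable state.

                  Code:
                  use std::sync::mpsc;
                  use std::thread;

                  fn main() {
                      let (tx, rx) = mpsc::channel();

                      let producer = thread::spawn(move || {
                          let big = vec![0u8; 100 * 1024 * 1024]; // ~100 MB buffer
                          // Sending moves ownership: only the (pointer, length,
                          // capacity) header crosses the channel; the 100 MB
                          // allocation itself is not copied.
                          tx.send(big).unwrap();
                          // Any further use of `big` here would be a compile-time error.
                      });

                      let received = rx.recv().unwrap(); // sole owner again, no aliasing
                      println!("received {} bytes without copying them", received.len());
                      producer.join().unwrap();
                  }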

                  Comment


                  • #79
                    Originally posted by Kristian Joensen View Post
                    Notice that word "should". What does that imply to you in this context?
                    It implies a desire to live in the land of unicorns and rainbows.

                    Comment


                    • #80
                      Originally posted by unixfan2001 View Post
                      It wouldn't be a conspiracy theory (of which I'm hardly a fan anyway), since you and pal666 have a very similar style (starting every reply with a barrage of insults).
                      Haha, I can assure you I have nothing to do with this pal666, except maybe sharing some views on some topics. I can't imagine a sane reason to run sockpuppets on Phoronix. Is it supposed to provide some measurable gain worth the wasted time, or what? At first glance the idea looks pretty silly to me. Maybe your time costs absolutely nothing, so you can imagine someone wasting theirs so badly?

                      I beg to differ. This is very similar to the sync/async debate. Following a particular style is more predictable than urging the user to stack a bunch of third party libraries on top.
                      Feel free to proceed that way, but it still puts quite a major handicap on what one can and can't do. When you can't bring the runtime and language down to simple and predictable constructs, it will eventually turn into a limitation and play a poor joke on you when you need something a bit more advanced. So the difference is: those pesky C/C++ gurus can do everything you can, and much more besides. On the other hand, the l33t $cr1pt k1dd13z are quite limited in their abilities. Not that Mozilla or Google could help them much.

                      Not sure what this has to do with Linux containers.
                      Of course you're not, because you haven't got even the slightest idea of how Linux containers work and what clone() does. Quoting tutorials at me is pretty lame: I'm able to invoke system calls myself. Do you honestly believe I hadn't read the fancy manuals before writing about it?

                      clone() is a performance feature, not necessarily a security feature.
                      Wrong. clone() is a superset of fork(), maybe somewhat inspired by the Plan 9 syscalls. Everything related to process and thread creation in Linux comes down to clone(); fork(), for instance, is implemented as clone() with certain flags. Since processes are inherently separated and can run with a different set of permissions, it is also part of the security features. An OS where processes lack separation is inherently insecure, Win95 being a perfect example of that kind of "security".

                      Basically, clone() works like this: you tell the OS you want to split into two entities, and in the flags you say exactly what you want to unshare. A thread is a new lightweight process that shares most of its memory with the parent; a process does not share memory and some other resources. The real difference is just a few flags in the same call. Over time clone() has grown more things to unshare and more flags. The most interesting are namespaces. One can unshare the PID namespace, making the child process unable to operate on PIDs in the parent namespace; the same goes for users, mounts, network and so on. That's it: the container gets its own PID 1, its own root user, its own mounts, its own network and so on. The "host" (the upper part of the hierarchy) can operate on the full hierarchy, but child namespaces can't operate on anything upward; i.e. a container with its own PID namespace loses sight of the parent's processes and can't, for example, send signals to processes above its namespace in the hierarchy. Overall it is quite an elegant and logical design.

                      Yet there is a catch: since Linux was not initially designed with such features in mind, there are still some syscalls that can do things not in line with this idea. The most obvious example is setting the time, i.e. a container could potentially change the host's clock. That certainly isn't desirable, but since syscalls are executed by the same kernel, it executes syscalls from the container too, and if some syscall isn't namespace-aware, things can get somewhat nasty. So containers aren't as secure as "full" VMs with separate kernel instances, which are independent by design. Still, they are quite an improvement over the "usual" state of things, at virtually no speed penalty, unlike full VMs.

                      To make it even more fun, Linux has seccomp filters to chop out troublesome syscalls (say, a browser does not need to be able to set the OS wall clock!) and capabilities to split permissions in a more fine-grained manner than just root vs. everyone else - say, a non-root user could be allowed to bind to ports below 1024 but be unable to do anything else privileged. Some programs are already using this combo; tools like firejail are kicking ass. On top of that there are cgroups, so one can decide how much CPU, memory, network and so on containers may take - in firejail, for instance, one can limit "this set of programs" to "a network speed of 1 MiB/s". Dirt-simple yet effective. Not to mention a container can have its own network with its own IP address thanks to namespaces, and one can even split "eth0" into plenty of sub-interfaces (with macvlans, or with a bridge + veth pairs, etc.), so a full-blown container can even use its own "virtual Ethernet" while being unable to reach the host over the network.
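
                      A minimal sketch of the namespace part of this in Rust, using the libc crate (the flags and the child command are illustrative; a real launcher like Chromium's does considerably more and calls clone() directly): the process unshares into fresh user, PID, mount and network namespaces, and the next child it spawns sees only its own tiny world.

                      Code:
                      // Assumes `libc = "0.2"` in Cargo.toml and a Linux kernel with
                      // unprivileged user namespaces enabled.
                      use std::process::Command;

                      fn main() {
                          // New user ns (so no real root is needed), PID ns (the child sees
                          // its own PID 1), mount ns (private view of mounts) and network ns
                          // (no host interfaces visible).
                          let flags = libc::CLONE_NEWUSER
                              | libc::CLONE_NEWPID
                              | libc::CLONE_NEWNS
                              | libc::CLONE_NEWNET;

                          // unshare(2) is the sibling of clone(2): same namespace flags,
                          // applied to the calling process instead of a new child.
                          if unsafe { libc::unshare(flags) } != 0 {
                              eprintln!("unshare failed: {}", std::io::Error::last_os_error());
                              return;
                          }

                          // The *next* process we spawn lands in the new namespaces: inside
                          // it, `ps` shows only its own tree and only loopback is visible.
                          let status = Command::new("sh")
                              .arg("-c")
                              .arg("echo pid=$$; ip link")
                              .status();
                          println!("sandboxed child exited with {:?}", status);
                      }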

                      Doesn't tear or stutter for me. Not with one video running in fullscreen anyways.
                      On the other hand, I've seen plenty of fellow users' configurations where I'm not exactly happy with the performance or the Xorg CPU usage it provokes, etc. That is partly Xorg's fault, but Mozilla could be better at using hardware acceleration any day, as well as Xorg extensions like XVideo/Xv/XShm. Somehow I've got fed up with the numerous issues in Firefox. Another issue is that they are always breaking something, causing a lot of woes - be it interface or features, extensions or plugins. Fuck that. While I'm hardly a big fan of Google, Chromium causes an order of magnitude fewer issues.

                      They aren't killing customisation. They're killing XUL/XPCOM. Something I definitely don't agree with either. But customisation itself is pretty much untouched and easier than before.
                      Lol, if they're going to remove XUL and XPCOM and use the Chrome extension APIs, that just kills the last reason to use Firefox at all. Chrome/Chromium is already multi-process, already has those APIs, is just as "customizable" as Mozilla is going to be, can isolate itself into containers on Linux, is faster at JS, and overall is a production-quality thing, not some half-borked experimental engine - with powerful features like NaCl, and without being bent on ecosystem lock-in, so one can add any extension to Chrome without asking Google to sign it, etc. Yeah, greedy Mozilla fucks, take that.

                      Firefox OS wasn't a waste of resources. They learned quite a lot from that experience, which Servo is now building on.
                      So they learned that nobody needs third-rate shit. They're going to learn it even harder.

                      Actually, Chrome is much worse as far as security and privacy are concerned. Not sure what "special" version of Google Chrome you're running there. Definitely sounds like magic.
                      This "special" version called "chromium" and is what supplied by most Linux distros by default. Sure, even Chromium could be a bit nasty in terms of privacy. But damn, Mozilla got like 30 (!!!) reasons to phone home. Reporting a plenty of data. You see, if we compare... google is less evil than than. Um, btw, Ubuntu is eager to get rid of Firefox for several Ubuntu versions already. I guess Mozilla can eventually get quite a major slap in the face, losing around 20M users at once. Looking on changelogs, it seems ubuntu devs already had some extra "fun" with extensions signing, so they would like Mozilla even more.

                      Such is the nature of a zero-day exploit. But do tell me how Chrome's was supposedly less severe than Firefox's.
                      Chrome is confined in a Linux container, to begin with. Rogue code would see a very boring system with few processes, almost no data, no programs and so on. Breaking out of the container is an entirely different story. OTOH, Firefox is completely pwned at that point. Not much security, eh? Ironically, the 0-day abused exactly this fact. Chrome-like browsers on Linux would suffer far less, and these days Google rarely pays out the full reward, which assumes breaking all the defence layers.

                      Case in point: Firefox has had GPU acceleration for a long while. It's just that Mozilla's leadership is rather orthodox when it comes to enabling it.
                      Even the WebGL blacklists are still a problem with Firefox.
                      WebGL works virtually everywhere I've checked in Firefox, at least on Linux. But somehow I'm not a big fan of the performance and quality of Mozilla's video playback. And their brand-new idea of shipping Cisco's codec has been utter trash. Not only do they download a third-party blob without user consent, it also sucks very hard at playing video: it is something like 2-3 times slower than ffmpeg, and as if that were not enough, when it can't cope in real time it can't properly drop late frames either, resulting in very jerky playback. Of course one can remove or disable the stupid Cisco blob manually, but it was yet another nail in Mozilla's coffin. Mozilla is freakin' skilled at fucking up video playback on the web. They always invent new, creative ways to screw up the user experience and annoy system administrators.

                      I'll give you that. Mozilla's leadership needs to improve, and they need to stop declining patches that run on one platform but not another (it took something like four years before they'd give us transparent, non-rectangular chrome windows, simply because they weren't easily supported by the same codepath on Windows).
                      I think they may first want to stop fucking over users, devs and admins, breaking the user experience all the time, and changing all sorts of crap while ignoring plenty of long-standing problems. Sure, Chrome has its own dark corners. But at least it does not break the user interface every few months, does not install crappy Cisco codecs that guarantee extremely jerky video playback everywhere including YouTube, does not lock down extension devs/users, and so on.
                      Last edited by SystemCrasher; 21 March 2016, 09:35 AM.

                      Comment
