Servo Driving Modularity To Support Different JavaScript Engines


  • plonoma
    replied
    Generally speaking, modularity is usually good.
    Maybe the ability to use different JavaScript engines simultaneously could be developed?
    Maybe even using different JavaScript engines for different content in the same page?!
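    As a rough illustration of what that kind of plumbing could look like, here is a minimal Rust sketch built around a hypothetical `JsEngine` trait (the trait and the toy engines are illustrative only, not Servo's actual interface); the embedder simply routes each piece of content to whichever engine was plugged in:

    ```rust
    // Hypothetical sketch only: the trait and the toy engines below are
    // illustrative and are not Servo's real script-engine interface.

    /// The minimal boundary an embedder might care about: hand the engine some
    /// source text and get a result (or an error) back.
    trait JsEngine {
        fn name(&self) -> &'static str;
        fn eval(&mut self, source: &str) -> Result<String, String>;
    }

    /// Toy engine #1: pretends to evaluate the script.
    struct EngineA;
    impl JsEngine for EngineA {
        fn name(&self) -> &'static str { "engine-a" }
        fn eval(&mut self, source: &str) -> Result<String, String> {
            Ok(format!("ran {} bytes of script", source.len()))
        }
    }

    /// Toy engine #2, so that two engines can coexist in one process.
    struct EngineB;
    impl JsEngine for EngineB {
        fn name(&self) -> &'static str { "engine-b" }
        fn eval(&mut self, _source: &str) -> Result<String, String> {
            Err("not implemented".to_string())
        }
    }

    /// The embedder routes each document (or even each frame) to whichever
    /// engine it likes.
    fn run(engine: &mut dyn JsEngine, source: &str) {
        match engine.eval(source) {
            Ok(out) => println!("[{}] {}", engine.name(), out),
            Err(e) => eprintln!("[{}] error: {}", engine.name(), e),
        }
    }

    fn main() {
        let mut engines: Vec<Box<dyn JsEngine>> = vec![Box::new(EngineA), Box::new(EngineB)];
        for engine in engines.iter_mut() {
            run(engine.as_mut(), "1 + 1");
        }
    }
    ```

    Of course, in a real browser the hard part is not the trait itself but everything behind it: the DOM bindings, garbage-collector integration, and event loop that each engine would have to hook into.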


  • Developer12
    replied
    Originally posted by timofonic View Post

    So no hope of ending the JavaScript hell of bugs?
    If you want totally, properly secure JS, you would need to sacrifice a significant chunk of performance and run with a non-JIT interpreter. The modern web punishes this heavily, with massively over-complicated webpages that do huge amounts of heavy lifting on the client side, just to assemble the layout of a page that is essentially static anyway.

    Chrome has been musing about memory-isolated JS engines, but I don't think it's going to make any meaningful difference. At most it's a speed bump. The tools for sandboxing in the way they need don't really exist. The closest thing might be containers (which it sounds like they're NOT using), but those would be an extremely awkward solution that would be very hard to get right and would be hard to port between platforms anyway. (Good luck getting containers to work on Windows and provide meaningful security.)


  • Kver
    replied
    Originally posted by bug77 View Post
    I believe all that. But I still think there should be a better way.
    Gawds how I wish there could be. JS is so entrenched it's basically impossible to undo the bad decisions.

    If I were an evil dictator I'd spec "HTML6" as the divorce from the traditional HTML/JS/CSS trio as-is. Gear it all towards a breakaway simplified engine design with a goal of energy efficiency and minimizing requests. Fold the scripting and styling into a unified language, with some annotations. Remove all semantic tags from the standard, as well as script, style, and CSS link tags. Essentially, turn rendering engines into low-level build-a-spec systems where the unified stylescripts are HTML5 components on steroids.

    Then, over time, build a legacy renderer on top of the new renderer, loading the browser's built-in implementation of the old spec on top of the new spec when the doctype is old, and running things like the JS and CSS parsers in WebAssembly. Somewhat like how OpenGL runs on Vulkan via Zink.

    Good thing I'm not a dictator.


  • Quackdoc
    replied
    Originally posted by timofonic View Post

    So no hope of ending the JavaScript hell of bugs?
    there will never be hope T.T


  • timofonic
    replied
    Originally posted by Developer12 View Post

    Writing a JS engine in Rust isn't sufficient when JIT is involved.

    While a Rust non-JIT JS engine would probably be quite secure, JIT means dynamically generating x86 (etc.) assembly code from JS code and then executing it. This assembly is highly optimized through the use of a lot of assumptions/predictions about the JS code which aren't universally valid, and which no sane compiler developer would include in a compiler for languages like C/C++/Rust/etc. Part of the reason this works is that, for the weird corner cases where the assumptions don't hold, the generated assembly can be made to fall back to the slower non-JIT interpreter.

    The end result is that there have been many logic bugs in JavaScript engines' optimization passes that result in unsafe assembly code, memory corruption, and JS engine sandbox escapes. Rust's compiler can protect against that stuff for Rust code which is compiled up front, but it can't provide any assurances for random JIT'd code that your code generates later. There also (in general) isn't a lot you can do to prevent these kinds of logic bugs, even on a theoretical level. It took a long time for just the theory behind Rust's borrow checker to emerge.
    So no hope of ending the JavaScript hell of bugs?


  • Developer12
    replied
    Originally posted by timofonic View Post
    What about making a JavaScript engine in Rust too? JavaScript is often the cause of many security issues....
    Writing a JS engine in Rust isn't sufficient when JIT is involved.

    While a Rust non-JIT JS engine would probably be quite secure, JIT means dynamically generating x86 (etc.) assembly code from JS code and then executing it. This assembly is highly optimized through the use of a lot of assumptions/predictions about the JS code which aren't universally valid, and which no sane compiler developer would include in a compiler for languages like C/C++/Rust/etc. Part of the reason this works is that, for the weird corner cases where the assumptions don't hold, the generated assembly can be made to fall back to the slower non-JIT interpreter.

    The end result is that there have been many logic bugs in JavaScript engines' optimization passes that result in unsafe assembly code, memory corruption, and JS engine sandbox escapes. Rust's compiler can protect against that stuff for Rust code which is compiled up front, but it can't provide any assurances for random JIT'd code that your code generates later. There also (in general) isn't a lot you can do to prevent these kinds of logic bugs, even on a theoretical level. It took a long time for just the theory behind Rust's borrow checker to emerge.
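    To make the failure mode concrete, here is a toy Rust sketch of the speculate-and-guard pattern described above (purely illustrative, not code from any real engine): the "compiled" fast path is only correct while its assumption about the operands holds, and a guard has to deoptimize to the generic path whenever it doesn't. The dangerous engine bugs are precisely the cases where an optimization pass drops or misplaces such a guard, and Rust's compile-time checks can't catch that, because the error lives in the logic of the generated code rather than in the generator's own memory handling.

    ```rust
    // Purely illustrative sketch of speculative JIT code plus its guard; this is
    // not code from SpiderMonkey, V8, or any real engine.

    #[derive(Debug)]
    enum Value {
        Int(i32),
        Str(String),
    }

    /// Slow, always-correct path: the equivalent of falling back to the interpreter.
    fn generic_add(a: &Value, b: &Value) -> Value {
        match (a, b) {
            (Value::Int(x), Value::Int(y)) => Value::Int(x.wrapping_add(*y)),
            // Sketch of JS-style coercion: anything else turns into a string.
            (x, y) => Value::Str(format!("{:?}{:?}", x, y)),
        }
    }

    /// "JIT-compiled" fast path: only valid while the speculation (both operands
    /// are small integers) actually holds.
    fn speculative_add(a: &Value, b: &Value) -> Value {
        // The guard: verify the speculation before running the optimized code.
        if let (Value::Int(x), Value::Int(y)) = (a, b) {
            return Value::Int(x.wrapping_add(*y)); // optimized path
        }
        // Deoptimize: the assumption failed, so fall back to the generic path.
        generic_add(a, b)
    }

    fn main() {
        println!("{:?}", speculative_add(&Value::Int(2), &Value::Int(3)));
        println!("{:?}", speculative_add(&Value::Int(2), &Value::Str("px".into())));
    }
    ```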


  • Mathias
    replied
    Originally posted by ahrs View Post

    It's fine for Servo to do that, but there is no point in Firefox doing it, in my opinion. It has the effect of making web standards irrelevant and ruins one of the selling points of Firefox: it is one of the few remaining independent web browsers on the planet, not just Chromium's Blink and V8 with a different coat of paint. If V8 is somehow better, then we should improve SpiderMonkey.
    I really don't understand that argument. Who profits from another rendering/JS engine? Web developers? No. The end user (i.e. me) gets a slower engine. (At least when I see a slow webpage, if I try it in Chromium it is much faster. Maybe the opposite is sometimes true as well.) It's not like Mozilla can push new web standards by themselves. JPEG XL, anyone?

    On the other hand, Mozilla could spend the money on other things: the things I actually use Firefox for, like the privacy features.

    The only problem I see with an all-Blink/V8 solution is if Google removes useful stuff or makes things really hard, like the Manifest V3 extension changes. If the rendering and JavaScript engines are modular enough, that shouldn't be a big problem. Though at some point it might be easier to roll your own engine than to backport stuff to another project.


  • bug77
    replied
    Originally posted by Kver View Post

    The problem with JS is that the majority of the engine complexity is owed to the language itself, not the APIs. Those are actually the easiest bits to implement. The problem is that JavaScript builds classes via prototyping (the ability to modify an existing class at runtime) and everything is weakly typed. These two aspects combined make writing compilers a living hell, and even if you only want the bare basics, it's the core nature of the language itself that's a pain in the ass... It's why V8 has such a radical JIT compilation pipeline with something like a dozen internal sub-compilers. You could make a minimal engine that runs interpreted, but the performance would be so atrocious it would make no sense to use.

    When it comes to APIs, there's also just not a lot you can leave out; you'd be shocked how many will affect the layout of a site. Even leaving out something as mundane as the History API can break sites that use it to decide which templates to load, I've seen portfolio sites where the layout responds to a background video, and even YouTube runs full-on shaders for its ambience effect. There's just no minimal implementation that wouldn't break a huge number of websites, especially the juggernauts, which would make it pointless for testing, because complex sites are what you need to test the most. About the best you can do is drop APIs that require explicit permissions and just have them always return denied, but at that point you've already written 95% of a full ECMAScript 6 implementation.

    It seems silly that something so atrociously complex is part of a relatively simple markup standard, but that's the messy evolution of the internet fer ya.
    I believe all that. But I still think there should be a better way.


  • ahrs
    replied
    Originally posted by Mathias View Post
    I wonder if this could be used as a guide on how to port V8 to Firefox.
    I'm not saying it should be done. But it is *interesting* how this small non-profit tries to compete with the biggest players while all the others have given up on their own development... I can totally see them switching to V8 or Blink or both for financial reasons...
    It's fine for Servo to do that, but there is no point in Firefox doing it, in my opinion. It has the effect of making web standards irrelevant and ruins one of the selling points of Firefox: it is one of the few remaining independent web browsers on the planet, not just Chromium's Blink and V8 with a different coat of paint. If V8 is somehow better, then we should improve SpiderMonkey.


  • Kver
    replied
    Originally posted by bug77 View Post
    But that would only require supporting the small subset of JS that handles the registering (if that).

    JS can run wild and then tell the layout engine what to do. But that doesn't mean the layout engine needs to speak JS; they could be decoupled via some events. Yes, I get it: the events would be async and would need to be standardized so the JS code can talk to various layout engines transparently. Oh well...
    The problem with JS is that the majority of the engine complexity is owed to the language itself, not the APIs. Those are actually the easiest bits to implement. The problem is that JavaScript builds classes via prototyping (the ability to modify an existing class at runtime) and everything is weakly typed. These two aspects combined make writing compilers a living hell, and even if you only want the bare basics, it's the core nature of the language itself that's a pain in the ass... It's why V8 has such a radical JIT compilation pipeline with something like a dozen internal sub-compilers. You could make a minimal engine that runs interpreted, but the performance would be so atrocious it would make no sense to use.

    When it comes to APIs, there's also just not a lot you can leave out; you'd be shocked how many will affect the layout of a site. Even leaving out something as mundane as the History API can break sites that use it to decide which templates to load, I've seen portfolio sites where the layout responds to a background video, and even YouTube runs full-on shaders for its ambience effect. There's just no minimal implementation that wouldn't break a huge number of websites, especially the juggernauts, which would make it pointless for testing, because complex sites are what you need to test the most. About the best you can do is drop APIs that require explicit permissions and just have them always return denied, but at that point you've already written 95% of a full ECMAScript 6 implementation.

    It seems silly that something so atrociously complex is part of a relatively simple markup standard, but that's the messy evolution of the internet fer ya.
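    As a toy illustration of the first point (a Rust sketch, not code from V8 or any real engine): when "classes" are prototype chains that any script can rewrite at runtime, every property access becomes a dynamic lookup with no fixed object layout for a compiler to rely on, which is exactly what pushes engines toward speculative JIT tiers:

    ```rust
    // Toy sketch of a prototype-based object model; not code from V8 or any
    // real engine.

    use std::cell::RefCell;
    use std::collections::HashMap;
    use std::rc::Rc;

    #[derive(Clone, Debug)]
    enum Value {
        Num(f64),
        Str(String),
        Undefined,
    }

    #[derive(Default)]
    struct JsObject {
        properties: HashMap<String, Value>,
        prototype: Option<Rc<RefCell<JsObject>>>, // the chain is mutable at runtime
    }

    impl JsObject {
        /// Property lookup: check own properties, then walk the prototype chain.
        fn get(&self, key: &str) -> Value {
            if let Some(v) = self.properties.get(key) {
                return v.clone();
            }
            match &self.prototype {
                Some(proto) => proto.borrow().get(key),
                None => Value::Undefined,
            }
        }
    }

    fn main() {
        let proto = Rc::new(RefCell::new(JsObject::default()));
        proto
            .borrow_mut()
            .properties
            .insert("greeting".into(), Value::Str("hi".into()));

        let obj = JsObject {
            properties: HashMap::new(),
            prototype: Some(proto.clone()),
        };
        println!("{:?}", obj.get("greeting")); // found via the prototype

        // Any script can rewrite the "class" later, invalidating whatever the
        // engine assumed about the object's shape.
        proto
            .borrow_mut()
            .properties
            .insert("greeting".into(), Value::Num(42.0));
        println!("{:?}", obj.get("greeting"));
    }
    ```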
    Last edited by Kver; 16 April 2024, 11:46 AM.
