Write XOR Execute JIT Support Lands For Mozilla Firefox


  • Write XOR Execute JIT Support Lands For Mozilla Firefox

    Phoronix: Write XOR Execute JIT Support Lands For Mozilla Firefox

    Another recent Firefox Nightly change, besides enabling WebGL 2 by default, is that Firefox's just-in-time compiler now supports W^X protection...

    http://www.phoronix.com/scan.php?pag...-XOR-E-Firefox
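    The mechanics behind W^X can be sketched with POSIX mmap/mprotect: the JIT first maps a page writable but not executable, emits machine code into it, then flips the page to executable (and no longer writable) before calling it, so no page is ever writable and executable at the same time. A minimal sketch, assuming Linux on x86-64 (an illustration of the technique, not Firefox's actual implementation):

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* x86-64 machine code for: mov eax, 42; ret */
        unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

        /* Step 1: map a page writable but NOT executable, and copy the code in. */
        size_t len = 4096;
        void *page = mmap(NULL, len, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        assert(page != MAP_FAILED);
        memcpy(page, code, sizeof code);

        /* Step 2: flip the page to executable and drop write permission.
           Under W^X the page is never writable and executable at once. */
        int rc = mprotect(page, len, PROT_READ | PROT_EXEC);
        assert(rc == 0);

        int (*jitted)(void) = (int (*)(void))page;
        printf("%d\n", jitted());
        return 0;
    }
    ```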

  • unixfan2001
    replied
    Originally posted by pal666 View Post
    this w^x thing was named by some uneducated kid who didn't know that 0^0=0
    Lolwhat? Pray tell why exactly you think it's wrong and what apt name you would've chosen.

    SystemCrasher
    Curious. Since you seem to think Mozilla is the "sell out" and is "calling home", what sort of browser are you using in general? Lynx?
    It certainly can't be Chrome, as that one has a much less reassuring track record.

    Never mind that Firefox is probably the most configurable browser if you know what you're doing.

  • nanonyme
    replied
    Originally posted by pal666 View Post
    this w^x thing was named by some uneducated kid who didn't know that 0^0=0

    it's easy to fix - it shouldn't have much per-instance data
    It was most likely named by someone not ignorant of what ^ means in C.

  • nasyt
    replied
    The bad thing about BSD is that it makes Linux look fast. And Linux is not fast at all!

  • nasyt
    replied
    Originally posted by endman View Post
    We should ban BSD.
    If BSD gets banned, its affiliates will simply join MINIX.

    That would be great. A bunch of foes who oppose the GPL and monolithic kernels.

  • pal666
    replied
    Originally posted by bug77 View Post
    Probably, but I'm not sure what "much" means when for every given page there are at least a dozen sites running scripts in your browser.
    Those are per-page data; you need them whether you have one instance or several.
    Originally posted by bug77 View Post
    The idea is that it only compiles functionX() for the first tab opened and for subsequent tabs that use the same function, it can use the already compiled code.
    That is what I called a cache.
    Originally posted by bug77 View Post
    No, the idea is that looking at the data globally can enable optimizations that are not apparent when looking only at small pieces. Something akin to how you can get better compression if you have the memory to use a larger dictionary.
    No, just as I said, you can optimize for a certain workload. If your instances share a workload, a profile from one is enough. If they have different workloads, you need separate profiles and separate optimizations; otherwise you get an optimization that is slow for everyone.
    Originally posted by bug77 View Post
    PPS pal666 You should really look into quoting messages
    I didn't find a way to quote multiple messages on this forum without manually inserting quote tags.

    Actually, the real problem here is not how many instances to use; it is that the browser can't use all available processor cores. Spawning several instances is one way to solve that, but there are others.
    Last edited by pal666; 01-14-2016, 08:56 AM.

  • rmiller
    replied
    Originally posted by rmiller View Post
    What!?
    ROP exploits have become commonplace because of W^X and similar techniques. ROP exploits are far harder to write. And if writable memory is executable, you can potentially work around almost every other mitigation technique: RELRO, randomized mmap, randomized malloc, PIE, Stack Smashing Protector, StackGhost, etc.
    I forgot to mention that W^X is also a correctness effort. Memory should not be simultaneously writable and executable.
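    The correctness discipline described here implies that a JIT wanting to patch already-emitted code must flip the page back to writable first, then restore execute permission afterwards, so the page is never writable and executable simultaneously. A toy sketch, assuming Linux on x86-64 (hypothetical helper names; not how any particular engine does it):

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    /* Patch the 32-bit immediate of "mov eax, imm32; ret" in place,
       cycling W -> X so the page is never writable AND executable. */
    static void patch_return_value(uint8_t *page, size_t len, int32_t value) {
        int rc = mprotect(page, len, PROT_READ | PROT_WRITE); /* writable, NOT executable */
        assert(rc == 0);
        memcpy(page + 1, &value, sizeof value);               /* patch the immediate */
        rc = mprotect(page, len, PROT_READ | PROT_EXEC);      /* executable, NOT writable */
        assert(rc == 0);
    }

    int main(void) {
        size_t len = 4096;
        uint8_t *page = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        assert(page != MAP_FAILED);
        uint8_t stub[] = { 0xb8, 0, 0, 0, 0, 0xc3 };          /* mov eax, 0; ret */
        memcpy(page, stub, sizeof stub);
        int (*fn)(void) = (int (*)(void))page;

        patch_return_value(page, len, 7);
        printf("%d\n", fn());
        patch_return_value(page, len, 9);
        printf("%d\n", fn());
        return 0;
    }
    ```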

  • SystemCrasher
    replied
    Originally posted by bug77 View Post
    Can you name the browser that broke IE's stranglehold on the www?
    I was a big fan of Firefox 1.0 and filed something like a hundred bugs. I took great care to make sure the web looked correct in Mozilla and installed it everywhere I could. They fixed bugs like mad and the whole world united to show IE how to do it right. Beating IE succeeded. But those epic times are long over. Now it is an entirely different team of random greedy fucks, not the team I respected. They lost their old goals and only care about squeezing money from partners, selling the userbase, and spying as much as they can (the browser has something like 30 different URLs it uses to "phone home" for 30 various reasons), plus "ecosystem" crap, i.e. locking everything down and killing off settings which could interfere with shameless advertisement. So they are much less about tech and more about marketing BS and treachery. At the end of the day, it seems to be turning into something similar to IE, where you can't expect privacy, good program behavior, or a friendly team. You can rather expect treachery, utter ignoring of bugs if fixing them does not lead to profit, technically incompetent solutions, marketing BS, adware, spyware, and so on.

    Seems "mammon" has infected the "beast": the beast has gone insane and is attempting poisonous bites at us. I fail to see any option but to grab a blaster, fend off the insane creature, and try to assemble super-robots to teach this thing to behave. Uhm, I refer to the about:mozilla thing, of course. Sorry, but the beast has gone aggressive, rampant, and unfriendly, and just seeks blood & meat.

    Mozilla was the master of the game.
    Unfortunately, it WAS the master of the game, but no longer IS. That's the problem.

  • bug77
    replied
    Originally posted by Michael_S View Post

    Right. To be clear, from what I understand most Javascript engines have multiple pass compilation of Javascript. One pass to parse it and turn it into slow interpreted code, a second pass to turn it into generically optimized machine code, and if the function is run enough it's profiled and then the optimized code is revised.

    ...someone who understands all of this better than me can probably correct my mistakes here.

    Sharing the profiling step between browser tabs is nonsensical because functionX() in tab 1 may be called from a very different context in tab 2 and yet another different context in tab 3, so using profiled optimization for one may cause slowdowns in the others. So in that sense, pal666 is correct.

    But the basic parsing to interpreted code and generic first pass conversion to machine code could probably be shared, right? The hard part is timing - if tab 1 and tab 2 are loading the same Javascript code at almost the exact same time, is it more efficient for the slower tab to wait for the Javascript interpreter in the faster tab to finish and grab a copy of the result, or just run its own compilation?
    I think you're mostly spot-on. I don't think the optimization of the code is done precisely on a second pass, but rather when JIT decides that code is executed "often enough". But that's just me nitpicking.

    To add to what you just said, if functionX() appears in 10 tabs opened on Facebook (or whatever site), chances are it's always used in the same context. However, if we're talking about some function in jQuery, that can and will be used in wildly different contexts. I think JIT can handle this, too, but I'm not sure how it is done.

    As for your last paragraph, the answer is quite simple: if optimized code is not available, the browser will interpret the JavaScript; if native code is available, it will use that instead. No tab has to wait for anything.
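    The "often enough" heuristic mentioned above can be modeled as a per-function call counter that switches the dispatch target once a threshold is crossed. A toy sketch in C (the threshold and names are made up for illustration; real engines like SpiderMonkey tier between an interpreter, a baseline JIT, and an optimizing JIT, and the "compiled" function here merely stands in for JIT-emitted code):

    ```c
    #include <stdio.h>

    #define HOT_THRESHOLD 3   /* illustrative; real engines tune this */

    static int interpreted_square(int x) { /* slow path: pretend to interpret */
        return x * x;
    }
    static int compiled_square(int x) {    /* fast path: stands in for JIT output */
        return x * x;
    }

    typedef struct {
        int (*impl)(int);
        int call_count;
    } Function;

    /* Dispatch through the current tier; tier up once the function is hot. */
    static int call(Function *f, int x) {
        if (f->impl == interpreted_square && ++f->call_count >= HOT_THRESHOLD)
            f->impl = compiled_square;     /* switch to the "compiled" version */
        return f->impl(x);
    }

    int main(void) {
        Function sq = { interpreted_square, 0 };
        for (int i = 0; i < 5; i++)
            printf("%d via %s\n", call(&sq, i),
                   sq.impl == compiled_square ? "compiled" : "interpreter");
        return 0;
    }
    ```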

  • Michael_S
    replied
    Originally posted by bug77 View Post
    The idea is that it only compiles functionX() for the first tab opened and for subsequent tabs that use the same function, it can use the already compiled code.
    Right. To be clear, from what I understand most Javascript engines have multiple pass compilation of Javascript. One pass to parse it and turn it into slow interpreted code, a second pass to turn it into generically optimized machine code, and if the function is run enough it's profiled and then the optimized code is revised.

    ...someone who understands all of this better than me can probably correct my mistakes here.

    Sharing the profiling step between browser tabs is nonsensical because functionX() in tab 1 may be called from a very different context in tab 2 and yet another different context in tab 3, so using profiled optimization for one may cause slowdowns in the others. So in that sense, pal666 is correct.

    But the basic parsing to interpreted code and generic first pass conversion to machine code could probably be shared, right? The hard part is timing - if tab 1 and tab 2 are loading the same Javascript code at almost the exact same time, is it more efficient for the slower tab to wait for the Javascript interpreter in the faster tab to finish and grab a copy of the result, or just run its own compilation?
