A Quick Benchmark Of Mozilla Firefox With WebRender Beta vs. Chrome


  • Monstieur
    replied
    However, in real-world usage Chrome (and Opera) stutters and lags when scrolling complex pages, while Firefox, Safari, and Edge are butter smooth. It doesn't matter if you have an RTX 2080 Ti and a 9900K at 5.0 GHz.
    Last edited by Monstieur; 30 October 2018, 04:36 AM.

  • Weasel
    replied
    Originally posted by LinAGKar View Post
    I'm pretty sure this idea that competent people never make any mistakes leads to a lot of bugs.
    I never claimed they don't make bugs. Besides, these kinds of bugs need assertions in debug builds, not runtime checks in release builds.

    If your software is truly critical (e.g. security library) then get proper audits.

    xkcd is always relevant.

    Yes, on average software devs are much more incompetent than in any other industry -- and that is because they are sheltered by idiots who bring up "best practices" and don't do proper audits and so on. People learn from mistakes. Shelter them with stupid failsafe automatic language checks, and they'll never learn.

    Originally posted by LinAGKar View Post
    Because people don't use signed integers in C.
    No, but it's the C++ committee members who spread that mantra, hence the "best practice" part (they also regret size_t being unsigned, which is a C thing so... yeah... at least C was designed sanely).
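    The debug-vs-release split argued for above can be sketched with the standard C++ `assert` macro, which compiles to nothing when `NDEBUG` is defined (as release builds typically do). The `get` helper here is a hypothetical example, not code from any project discussed in the thread:

    ```cpp
    #include <cassert>
    #include <cstdio>

    // Index lookup where the caller is supposed to guarantee 0 <= i < len.
    // The assert documents and checks that contract in debug builds only;
    // compiling with -DNDEBUG removes the check entirely, so the release
    // build pays no runtime cost for it.
    int get(const int* data, int len, int i) {
        assert(i >= 0 && i < len && "caller must pass a valid index");
        return data[i];
    }

    int main() {
        int v[3] = {10, 20, 30};
        std::printf("%d\n", get(v, 3, 1));  // valid index: prints 20
        return 0;
    }
    ```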

  • LinAGKar
    replied
    Originally posted by Weasel View Post
    "Best practise C++" is what I called incompetent++, only for incompetent morons who need "runtime safety checks" because they're too incompetent so they make everyone's CPUs (users) process more crap and waste more energy globally.
    I'm pretty sure this idea that competent people never make any mistakes leads to a lot of bugs.

    Originally posted by Weasel View Post
    And no, most best practices of C++ these days are far from zero overhead. Bounds checks are just one of them. Some also tell you to use signed int everywhere, which generates pathetic code when dividing by constants compared to unsigned (even when you divide by a power of 2!), unless the compiler can prove that the number is strictly positive. Tell me more about "zero overhead" dreamland.
    Because people don't use signed integers in C.
    Last edited by LinAGKar; 29 October 2018, 04:51 AM.

  • Guest
    Guest replied
    Originally posted by Weasel View Post
    Yeah because "inlining" has so much to do with the language (front-end) rather than the back-end of a compiler. Probably Rust can also optimize instruction scheduling better, cause that makes sense. Ask any Rust fanboy and he'll tell you Rust can make you a sandwich too.

    Rust's checks are NOT only at compile time, because any C/C++ compiler worth its salt today will WARN you about out-of-bounds accesses that it can prove at compile time are out of bounds. Derp.
    Whatever, that doesn't change the fact that Rust is comparable in speed to C/C++ and sometimes faster. I'm not asking you to take my word for it; just do yourself a favor and google some benchmarks like this one or this one.

  • Weasel
    replied
    Originally posted by carewolf View Post
    What on earth are you blathering about? Best practices with C++ are always zero overhead; boundary checks are enforced via typing at compile time and only checked at runtime for data parsed from user input.
    Not all dynamic input is user input. Some of it the programmer already knows is valid, because another component supplies that information. That's what assertions are for (only in debug builds), and it's why they should NOT be in the release build that gets distributed.

    And no, most best practices of C++ these days are far from zero overhead. Bounds checks are just one of them. Some also tell you to use signed int everywhere, which generates pathetic code when dividing by constants compared to unsigned (even when you divide by a power of 2!), unless the compiler can prove that the number is strictly positive. Tell me more about "zero overhead" dreamland.
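    The signed/unsigned codegen difference claimed above comes from C++'s rounding rules, sketched here (this assumes the common two's-complement arithmetic right shift for negative values, which C++20 finally guarantees; earlier standards leave it implementation-defined):

    ```cpp
    #include <cstdio>

    int main() {
        int s = -7;
        unsigned u = 7;

        // Unsigned division by a power of two is a single logical shift:
        // 7 / 2 == 7 >> 1 == 3.
        std::printf("unsigned: %u\n", u / 2);

        // Signed division truncates toward zero (-7 / 2 == -3), but an
        // arithmetic shift rounds toward negative infinity (-7 >> 1 == -4),
        // so the compiler must emit an extra bias/adjust sequence unless it
        // can prove s is non-negative.
        std::printf("signed div:   %d\n", s / 2);
        std::printf("signed shift: %d\n", s >> 1);
        return 0;
    }
    ```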

  • Weasel
    replied
    Originally posted by msotirov View Post
    That adds almost no overhead. Most of Rust's checks are at compile time, not runtime. Rust also does some crazy inlining optimizations at compile time which you can only dream of in C/C++.
    Yeah because "inlining" has so much to do with the language (front-end) rather than the back-end of a compiler. Probably Rust can also optimize instruction scheduling better, cause that makes sense. Ask any Rust fanboy and he'll tell you Rust can make you a sandwich too.

    Rust's checks are NOT only at compile time, because any C/C++ compiler worth its salt today will WARN you about out-of-bounds accesses that it can prove at compile time are out of bounds. Derp.

  • V1tol
    replied
    I think this benchmark is incorrect. Firstly (as already mentioned), SVGs and some effects are not accelerated. Secondly, I see this on the third screenshot of the article:
    WEBRENDER_QUALIFIED - blocked by env: No qualified hardware
    So I am assuming that WebRender was not really enabled, and that's why we got the same results as with the old renderer.
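    For anyone re-running the benchmark, a config fragment showing the override that Firefox builds of this era accepted to bypass the hardware qualification check (the exact variable name is an assumption based on WebRender's early rollout; confirm the active compositor under about:support afterwards):

    ```shell
    # Force-enable WebRender regardless of the qualified-hardware check.
    # about:support > Graphics > Compositing should then read "WebRender".
    MOZ_WEBRENDER=1 firefox
    ```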

  • shmerl
    replied
    Originally posted by johnp117 View Post
    That page isn't very reassuring:

    On the contrary, it means they discovered bugs in Vulkan drivers that others haven't yet. So they can be fixed now.

    However, I don't see any Mesa bugs for radv or anv in regard to Skia, which means they either didn't find any or didn't test it.

  • carewolf
    replied
    Originally posted by Weasel View Post
    C++ best practices is not real C++. Real C++ is just C with extra features (even the name says it all). You code with C mindset but with extra features. That's real C++.

    "Best practise C++" is what I called incompetent++, only for incompetent morons who need "runtime safety checks" because they're too incompetent so they make everyone's CPUs (users) process more crap and waste more energy globally.
    What on earth are you blathering about? Best practices with C++ are always zero overhead; boundary checks are enforced via typing at compile time and only checked at runtime for data parsed from user input.

  • Guest
    Guest replied
    Originally posted by Weasel View Post
    Maybe badly written C/C++. WTF are you smoking, thinking that adding EXTRA code (bounds checks) would EVER be faster? It's just logic.
    That adds almost no overhead. Most of Rust's checks are at compile time, not runtime. Rust also does some crazy inlining optimizations at compile time which you can only dream of in C/C++.
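    For concreteness, the runtime check being argued about is the difference between the two standard C++ vector accessors below; Rust's `[]` indexing behaves like the checked form (panicking instead of throwing), with `get_unchecked` as the unsafe, unchecked escape hatch:

    ```cpp
    #include <cstdio>
    #include <stdexcept>
    #include <vector>

    int main() {
        std::vector<int> v = {1, 2, 3};

        // operator[]: no bounds check; an out-of-range index is
        // undefined behavior, but in-range access costs nothing extra.
        std::printf("%d\n", v[2]);

        // at(): bounds-checked; throws std::out_of_range on a bad index
        // instead of silently reading past the buffer.
        try {
            std::printf("%d\n", v.at(3));
        } catch (const std::out_of_range&) {
            std::printf("caught out_of_range\n");
        }
        return 0;
    }
    ```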
