A Quick Benchmark Of Mozilla Firefox With WebRender Beta vs. Chrome

  • ZenoArrow
    replied
    Originally posted by Weasel View Post
    No. Why not? Because that's not an inherent part of the language, it's a CPU thing. Most people are clueless about low-level stuff, but that doesn't make something difficult or hard. Requiring skill doesn't mean something is "hard". "Hard" means that it's difficult even with proper programming skills (as most real things are).

    There's no need to add a language feature for something that should be part of APIs anyway. If you want portability / cross-platform support, just use a portable multi-threading library. Putting it in the language is stupid because then all OSes will have to support it, and OSes can be different. It might shock you, but POSIX is not God, despite what some fanboys think. (Yes, I *am* aware C added it to the language; that doesn't mean I'm for it though.)

    As for design: you just need to understand how shared resources are accessed at the low level. It's part of the design process, really, nothing to do with language. Designing, e.g. thread-safe APIs has nothing to do with language either.

    The thing is, most people can't even fathom releasing resources manually; that's exactly the low-skilled trash that almost all languages promote. They find it "difficult" because they are UNSKILLED due to using crappy languages that sheltered them from learning proper skills (manual resource management). Manual resource management teaches you to think about it during the design phase as well. You think about when resources are acquired and when they are locked, so you can design a proper interface.

    Note that memory, despite being the most popular resource among newbies, is just one type of resource and not that special (in fact it's far simpler than other resources), but garbage collectors spoiled newbies and they'll forever remain newbies because of it.
    Three points:

    1. "Because that's not an inherent part of the language". The way Rust handles memory ownership is a core part of the language. All memory allocations have a single owner, and access by other functions requires either borrowing, cloning or transferring ownership. This design decision can make it easier to reason about the design of applications that span multiple CPU cores, as you have to be explicit about when data can be read and updated, and potential clashes are caught at compile time. Can you do the same with C++? Sure, but the compiler doesn't necessarily assist you in debugging this type of code. You can achieve similar results with static analysis tools though. Note that my argument was only about the "ease" of programming for multiple cores, and it's being able to have the compiler give you additional feedback about your design that makes the difference here.

    If you're interested in getting a better sense of Rust's approach to memory management, I can recommend this podcast:

    https://corecursive.com/016-moves-an...th-jim-blandy/

    2. Multicore and multithreaded are not quite the same thing. As I'm sure you're already aware, multithreaded applications can run on a single core, and unless you design your code to run on multiple cores it won't necessarily make use of multiple cores (for example, you would have to set the CPU affinity when deciding which core to run a thread on). In general, multicore increases the potential complexity of multithreaded applications (if you want your applications to make fuller use of the available CPU resources).

    3. Memory management is not hard. I've programmed in 6502 assembly before, and I found it to be one of the easiest languages to get up and running in, though in my opinion its greatest strength is also its greatest weakness (being explicit about everything you do makes it easy to understand, but also tedious to write out). If you think that programmers who are used to higher-level languages are somehow corrupted and incapable of learning lower-level memory management techniques, I'd suggest you're mistaken; for the most part they're just focused on a different set of problems in their day-to-day work.
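    To make points 1 and 2 a bit more concrete, here's a minimal, hypothetical sketch (the names and numbers are mine, not from any real project): one worker is spawned per core reported by the standard library, and the shared totals have to be wrapped in Arc<Mutex<...>> before the compiler will allow them to be touched from more than one thread. A bare Vec moved into one thread simply cannot also be mutated from another; that gets rejected at compile time. Actual core placement is still up to the OS scheduler unless you pin threads through a platform API or a third-party crate.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // One worker per core the runtime reports; placement is still the
        // OS scheduler's call unless threads are explicitly pinned.
        let workers = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);

        // Sharing mutable state across threads must be spelled out:
        // Arc for shared ownership, Mutex for synchronised access.
        let totals = Arc::new(Mutex::new(vec![0u64; workers]));

        let handles: Vec<_> = (0..workers)
            .map(|i| {
                let totals = Arc::clone(&totals); // explicit decision to share
                thread::spawn(move || {
                    totals.lock().unwrap()[i] += 1;
                })
            })
            .collect();

        for handle in handles {
            handle.join().unwrap();
        }

        println!("{:?}", totals.lock().unwrap());
    }

    Strip out the Arc/Mutex wrapper and borrow the Vec from two threads directly, and the program doesn't become subtly racy -- it just doesn't compile, which is exactly the kind of compiler feedback point 1 is about.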

    Leave a comment:


  • Weasel
    replied
    Originally posted by ZenoArrow View Post
    Weasel, do you at least accept that Rust makes it easier to design software that makes better use of multicore CPUs? If not, why not?
    No. Why not? Because that's not an inherent part of the language, it's a CPU thing. Most people are clueless about low-level stuff, but that doesn't make something difficult or hard. Requiring skill doesn't mean something is "hard". "Hard" means that it's difficult even with proper programming skills (as most real things are).

    There's no need to add a language feature for something that should be part of APIs anyway. If you want portability / cross-platform support, just use a portable multi-threading library. Putting it in the language is stupid because then all OSes will have to support it, and OSes can be different. It might shock you, but POSIX is not God, despite what some fanboys think. (Yes, I *am* aware C added it to the language; that doesn't mean I'm for it though.)

    As for design: you just need to understand how shared resources are accessed at the low level. It's part of the design process, really, nothing to do with language. Designing, e.g. thread-safe APIs has nothing to do with language either.

    The thing is, most people can't even fathom releasing resources manually; that's exactly the low-skilled trash that almost all languages promote. They find it "difficult" because they are UNSKILLED due to using crappy languages that sheltered them from learning proper skills (manual resource management). Manual resource management teaches you to think about it during the design phase as well. You think about when resources are acquired and when they are locked, so you can design a proper interface.

    Note that memory, despite being the most popular resource among newbies, is just one type of resource and not that special (in fact it's far simpler than other resources), but garbage collectors spoiled newbies and they'll forever remain newbies because of it.
    Last edited by Weasel; 11 November 2018, 08:39 AM.

    Leave a comment:


  • name99
    replied
    Originally posted by eydee View Post
    In a real life scenario, the difference is a page loading in 0.1 or 0.2 seconds. Not really something a human being can perceive. It's the same crap as operating systems competing on boot time. Your OS booting in 6 or 7 seconds changes absolutely nothing in your life.
    Boy are you reading the wrong web site...

    Leave a comment:


  • ZenoArrow
    replied
    Weasel, do you at least accept that Rust makes it easier to design software that makes better use of multicore CPUs? If not, why not?

    Leave a comment:


  • Weasel
    replied
    Originally posted by Michael_S View Post
    And if you keep reading about Rust, you'll see that the preferred/recommended way to do array iteration in Rust is using a for construct, https://doc.rust-lang.org/std/primitive.array.html
    let mut array: [i32; 3] = [0; 3];
    for x in &array { print!("{} ", x); }
    and in that particular case, the bounds check is eliminated. ...which, you know, might explain why Rust is matching or beating C++ in a mess of benchmarks.
    Yeah, but I wasn't talking about iterating through an array only. And since Rust doesn't have pointers without unsafe blocks...

    Originally posted by Michael_S View Post
    Edit: I apologize, I have been unkind. But I don't understand your belief that Rust is inherently slow. The people that built the language set as their goal matching C++ runtime performance and memory efficiency while guarding against certain classes of errors. Yes, if you use a C-family style for (int i = 0; i < ... ; i++) {} loop then Rust by default puts in a bounds check and it will slow you down. Their solution was to recommend iterator-style iteration. All available evidence is that they have succeeded on their performance goals.
    The point is: any language that adds checks behind your back is stupid. I'm sure Mozilla made Rust to match C++, but the problem is that they were matching their own "idiomatic" C++, which is FULL of bounds checking, so they assume all C++ code is like that -- but that's simply not true.

    C++ (the language, not libraries) does not add checks behind your back for basic stuff -- yeah, you do get those for standard library containers but those are not part of the language proper: they're just classes that you can also implement yourself and avoid the runtime checks if you want to, without "damaging" the code clarity, since it will look the same. Not so much for Rust (except using unsafe blocks) since they are baked into the language itself.

    This is why I have a dislike for any language (not container) that does runtime bounds checks, or has an integrated garbage collector in the LANGUAGE instead of having it as a library, and so on.
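    For what it's worth, the opt-out mentioned above looks something like this in practice (a hypothetical sketch, not anyone's production code): plain slice indexing carries a runtime bounds check, and skipping it takes an explicit unsafe block with slice::get_unchecked.

    fn sum_checked(data: &[i32]) -> i32 {
        let mut total = 0;
        for i in 0..data.len() {
            total += data[i]; // indexing is bounds-checked by default
        }
        total
    }

    fn sum_unchecked(data: &[i32]) -> i32 {
        let mut total = 0;
        for i in 0..data.len() {
            // Opting out is explicit: the unsafe block makes the caller
            // responsible for keeping the index in range.
            total += unsafe { *data.get_unchecked(i) };
        }
        total
    }

    fn main() {
        let data = [1, 2, 3, 4];
        assert_eq!(sum_checked(&data), sum_unchecked(&data));
    }

    (In a loop this simple the optimizer can usually prove the index is in range and drop the check anyway, so whether the difference is measurable depends entirely on the code.)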

    Leave a comment:


  • Michael_S
    replied
    Originally posted by Weasel View Post
    I think it's funny that you clearly imply you know more about Rust than I do (I admit I'm not interested in it much), and yet you don't know it does automatic bounds checking at runtime without unsafe blocks? Seriously, dude?

    First hit on Google: https://til.hashrocket.com/posts/9f3...ked-at-runtime

    At least learn the stuff you preach so much properly.
    And if you keep reading about Rust, you'll see that the preferred/recommended way to do array iteration in Rust is using a for construct, https://doc.rust-lang.org/std/primitive.array.html
    let mut array: [i32; 3] = [0; 3];
    for x in &array { print!("{} ", x); }
    and in that particular case, the bounds check is eliminated. ...which, you know, might explain why Rust is matching or beating C++ in a mess of benchmarks.

    Edit: I apologize, I have been unkind. But I don't understand your belief that Rust is inherently slow. The people that built the language set as their goal matching C++ runtime performance and memory efficiency while guarding against certain classes of errors. Yes, if you use a C-family style for (int i = 0; i < ... ; i++) {} loop then Rust by default puts in a bounds check and it will slow you down. Their solution was to recommend iterator-style iteration. All available evidence is that they have succeeded on their performance goals.
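    A tiny, hypothetical sketch (not a benchmark submission) of the two styles being contrasted here: the indexed loop goes through the checked indexing path, while the iterator version never produces an index that could be out of range, so there is no per-element check to pay for.

    fn main() {
        let data = [1i32, 2, 3, 4];

        // C-family style: each data[i] uses the checked indexing path
        // (though the optimizer can often prove i < len here and elide it).
        let mut sum_indexed = 0;
        for i in 0..data.len() {
            sum_indexed += data[i];
        }

        // Iterator style: the iterator tracks the length itself, so no
        // out-of-range index can ever be produced.
        let sum_iter: i32 = data.iter().sum();

        assert_eq!(sum_indexed, sum_iter);
    }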
    Last edited by Michael_S; 30 October 2018, 10:43 AM.

    Leave a comment:


  • Weasel
    replied
    Originally posted by Michael_S View Post
    You don't understand Rust. Most of what Rust does is compile-time ownership checking of references, done by the compiler. It's the exact same thing C++ is doing for you with unique_ptr and shared_ptr, except that instead of the developer needing to remember to use unique_ptr and shared_ptr, the compiler does it for you automatically. The result is the exact same zero runtime overhead.

    That's what Rust provides. Not automatic runtime bounds checking. And that's why Rust literally matches C++ for performance, and that's why the people at Mozilla who designed Rust were trying to make a memory-safe C++ replacement with no performance trade-off.

    See also ATS, which does the same kind of thing. It's a compiler-only layer on top of raw C, with nothing added at runtime. It just provides compile-time guarantees against memory leaks, buffer overruns and use-after-free errors.
    I think it's funny that you clearly imply you know more about Rust than I do (I admit I'm not interested in it much), and yet you don't know it does automatic bounds checking at runtime without unsafe blocks? Seriously, dude?

    First hit on Google: https://til.hashrocket.com/posts/9f3...ked-at-runtime

    At least learn the stuff you preach so much properly.
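    The runtime check in question is easy to see in a couple of lines -- a hypothetical sketch (mine), not taken from the linked page: .get() returns an Option for an out-of-range index, while plain indexing panics at runtime instead of reading past the end.

    fn main() {
        let data = vec![10, 20, 30];
        let index = data.len() + 2; // clearly out of range

        // Non-panicking access: returns None rather than reading out of bounds.
        assert_eq!(data.get(index), None);

        // Plain indexing is checked at runtime: this line panics with an
        // "index out of bounds" error instead of reading past the end.
        println!("{}", data[index]);
    }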

    Leave a comment:


  • Monstieur
    replied
    I get the same MotionMark score with WebRender on and off, however the framerate is visibly higher with WebRender on.

    Leave a comment:


  • Michael_S
    replied
    Originally posted by Monstieur View Post
    I forgot this was on Linux. I was talking about Windows, where Firefox and Edge visibly trounce Chrome & Opera in scrolling smoothness, input latency and UI performance (UI animations seem to be capped at 60 fps in both Chrome & Opera). This is especially visible on 144+ Hz monitors. The same holds true for Firefox and Safari on macOS as well compared to Chrome & Opera.
    I will take your word on it. No sarcasm. I use Linux everywhere except a mediocre Windows laptop for work, and on that one I never open Chrome so I don't have a comparison.

    Originally posted by Monstieur View Post
    Java / Android games go out of their way to avoid triggering garbage collection and inducing stutters. However, regular application developers don't do this and the runtime triggers GC as usual, resulting in freezes when performing continuous UI operations. I use Eclipse, Android Studio and some other Java tools on both macOS and Windows, and the UI performance is terrible compared to Xcode and to native editors on Windows (the WPF-based Visual Studio 2008+ is not among them). I mean they are utterly crushed by native applications that run at 240 fps on my 240 Hz monitor. The Java applications slow down to 10 fps under load.
    I don't think IDEs are a great example for judging which programming languages to use for desktop applications, just because they're so inherently complicated. And aside from Minecraft, the Minecraft clone Terasology (which also gets very high fps for me) and Eclipse, I don't use any other Java desktop applications, so I don't really have grounds to compare. I also don't have any displays faster than 60 Hz, so to be fair to you, I'm not in a position to notice a difference.

    Leave a comment:


  • Michael_S
    replied
    Originally posted by Weasel View Post
    Because C++ has zero overhead when used properly? And I mean that literally. Not "branch prediction is free on modern CPUs" -- that's not zero overhead; it adds instructions to the code, so it is some overhead. C++ can literally add zero measurable overhead. So it is literally impossible for Rust to be faster.
    You don't understand Rust. Most of what Rust does is compile-time ownership checking of references, done by the compiler. It's the exact same thing C++ is doing for you with unique_ptr and shared_ptr, except that instead of the developer needing to remember to use unique_ptr and shared_ptr, the compiler does it for you automatically. The result is the exact same zero runtime overhead.

    That's what Rust provides. Not automatic runtime bounds checking. And that's why Rust literally matches C++ for performance, and that's why the people at Mozilla who designed Rust were trying to make a memory-safe C++ replacement with no performance trade-off.

    See also ATS, which does the same kind of thing. It's a compiler-only layer on top of raw C, with nothing added at runtime. It just provides compile-time guarantees against memory leaks, buffer overruns and use-after-free errors.
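    A minimal, hypothetical sketch (mine) of that compile-time ownership checking -- roughly what unique_ptr expresses in C++, except the compiler enforces it for every value rather than only for the values a developer remembered to wrap:

    fn consume(values: Vec<i32>) -> usize {
        values.len()
        // `values` is dropped here; its single owner has gone out of scope.
    }

    fn main() {
        let data = vec![1, 2, 3];
        let count = consume(data); // ownership moves into `consume`

        // println!("{:?}", data); // rejected at compile time: use after move

        println!("{count} elements were consumed");
    }

    None of this shows up in the generated code; the check exists only at compile time, which is the "zero runtime overhead" part.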

    Originally posted by Weasel View Post
    Rust does add shitty checks behind your back tho, unless you use unsafe blocks. So, that's a measurable overhead. By measurable, I mean even 1 single extra byte of instruction.

    Obviously, one can write C++ with a million shitty standard library abstractions or Boost or w/e and then QQ that C++ is slower when in fact it's just his shitty use of C++ that's slower and handicapped.
    Then submit code to those benchmarks and take the speed crown back from Rust in the three benchmarks where the best Rust submission beats the best C++ submission. You don't have to start from nothing; you can modify one of the existing solutions.
    Last edited by Michael_S; 29 October 2018, 08:52 PM.

    Leave a comment:
