Rust-Written Redox OS 0.8 Released With i686 Support, Audio & Multi-Display Working

  • #71
    There are a lot of false statements in here.

    Originally posted by cb88
    The only reason rust uses more memory sometimes is to guarantee safety... you can do unsafe things though and get identical memory usage.
    Memory safety is purely a compile-time concept. It prevents you from creating multiple aliases of a mutable reference (aliasing XOR mutability). Thread safety is also guaranteed through trait markers which are a compile-time concept that simply generates a compilation error if a !Send type is moved across thread boundaries or a !Sync type is being shared across threads.
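A minimal sketch of this point (function name is my own): both the aliasing-XOR-mutability rule and the Send marker are checked entirely at compile time, so the compiled binary contains no extra bookkeeping for them.

```rust
use std::thread;

// Vec<i32> is Send, so moving it into a thread compiles. Putting a
// non-Send type (e.g. Rc<i32>) in its place would be rejected at
// compile time — there is no runtime check to pay for.
fn sum_in_thread(data: Vec<i32>) -> i32 {
    thread::spawn(move || data.iter().sum()).join().unwrap()
}

fn main() {
    let mut data = vec![1, 2, 3];
    {
        let r = &mut data; // only one exclusive (&mut) alias at a time
        r.push(4);
    } // exclusive borrow ends here, so `data` is usable again
    assert_eq!(sum_in_thread(data), 10);
}
```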

    Originally posted by Anux View Post
    Yes, while you can be super efficient with (re)using pointers in C, in Rust it almost always results in a copy of the data. The C version probably has some memory/concurrency bugs while the Rust one doesn't.
    Pointers are just integers. It's the buffers that some integers point to which matter, and it is trivial to reuse a buffered type's memory. All buffered types have clear methods which allow you to reuse a buffer without reallocation. You can also take advantage of mem::swap to swap values, which is very useful to get temporary ownership of a mutable reference's value. And there are a number of approaches that you can use to create a memory pool or arena for reusing short-lived buffers that can be returned to the pool when no longer in use. Even when speaking about a boxed type, you can easily reset the state the same as any other variable on the stack.
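As an illustrative sketch of the reuse techniques mentioned above (function names are my own): `clear()` empties a buffer while keeping its allocation, and `mem::take` grants temporary ownership through a `&mut` without reallocating.

```rust
use std::mem;

// Reuse one String allocation across iterations: clear() drops the
// contents but keeps the heap buffer, so no per-iteration allocation.
fn reuse_buffer(words: &[&str]) -> (usize, usize) {
    let mut buf = String::with_capacity(64);
    let mut total = 0;
    for word in words {
        buf.clear(); // no deallocation, no reallocation
        buf.push_str(word);
        total += buf.len();
    }
    (total, buf.capacity()) // capacity is still at least the original 64
}

// mem::take moves the value out through a &mut, leaving an empty
// (allocation-free) Vec behind in the slot, ready to be refilled.
fn take_from_slot(slot: &mut Vec<u8>) -> Vec<u8> {
    mem::take(slot)
}

fn main() {
    let (total, cap) = reuse_buffer(&["alpha", "beta", "gamma"]);
    assert_eq!(total, 14);
    assert!(cap >= 64);

    let mut slot = vec![1, 2, 3];
    let taken = take_from_slot(&mut slot);
    assert_eq!(taken, vec![1, 2, 3]);
    assert!(slot.is_empty()); // slot can be reused without reallocation
}
```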

    Originally posted by xfcemint
    I have no idea what is the exact definition of "0 cost" in Rust, and I'm not interested in finding out.
    To be fair, if you are not interested in what zero-cost means to Rust then it's not fair to criticize it. When official documentation declares something as being zero-cost, what it means is that the syntax and abstractions used to achieve a desired result are as efficient as if you were to manually write the code from scratch without those abstractions.

    One such example is zero-sized types, which are types that only exist at compile-time and therefore consume no memory. Similar to this is how a newtype purely exists only as a language concept and is no different from the type it wrapped after compilation. Another example is how an Option<T> where T supports null-pointer optimization will result in an Option<T> that consumes the same amount of memory as T. Likewise an enum will typically consume the same amount of memory as its largest possible variant (though this at one time was not implemented). Or how use of the iterator trait and chaining iterator adapters together in a functional way produces a very complex type with many layers of type abstractions, which is reduced at compile time effectively into machine code that's no different from finely-optimized assembly written in traditional imperative fashion by hand.
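These size claims are directly checkable. A small sketch (type names are my own invention) confirming each one with `size_of`:

```rust
use std::mem::size_of;

struct Empty;       // zero-sized type: exists only at compile time
struct Meters(f64); // newtype: identical layout to the wrapped f64

fn main() {
    assert_eq!(size_of::<Empty>(), 0);
    assert_eq!(size_of::<Meters>(), size_of::<f64>());

    // Niche (null-pointer) optimization: Box is never null, so None is
    // encoded as the null pointer and Option<Box<T>> costs nothing extra.
    assert_eq!(size_of::<Option<Box<u64>>>(), size_of::<Box<u64>>());

    // An iterator adapter chain builds a deeply nested type, but it
    // compiles down to a plain loop over the range.
    let sum: i32 = (1..=10).filter(|n| n % 2 == 0).map(|n| n * n).sum();
    assert_eq!(sum, 220); // 4 + 16 + 36 + 64 + 100
}
```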
    Last edited by mmstick; 25 November 2022, 10:12 AM.



    • #72
      Originally posted by xfcemint View Post
      (I'm not looking at the benchmark because I'm not currently interested in Rust)
      Then you should not be commenting about Rust here, and you definitely should not spread misinformation about something you don't even understand.

      If the benchmark allows different algorithms (i.e. algorithms have not been intentionally made the same), then it's an apples to oranges comparison.
      If anyone here was making an assumption of a direct comparison, they are incorrect. Without taking into account differences in algorithms, any comparisons are irrelevant.
      You aren't qualified to criticize the quality of the language benchmarks that you didn't even bother to look at.

      High-level issues are interesting to me, not Rust in particular.
      So go ahead and go do something other than comment about Rust here.
      No point wasting the time of those who are actually knowledgeable about Rust.
      Last edited by mmstick; 25 November 2022, 10:13 AM.



      • #73
        Originally posted by oleid View Post

        That's correct. But to put it into perspective: it's almost always less memory than C++. So it's good enough, I presume.
        Well, benchmarks are a deep hole imho. They usually have code that is optimized for the tests themselves, which rarely if ever is present in real code. I remember Google doing research that concluded C++ is the fastest language, but the source code tweaks they did to achieve this are pretty much never present in real C++ code.
        I didn't check for Rust, but since there are many code versions of Rust for each/some tests, I imagine that's exactly what's going on. And since C++ is basically a superset of C, one can "optimize" the C++ and easily win.



        • #74
          Originally posted by mmstick View Post

          https://benchmarksgame-team.pages.de...test/rust.html

          That is not necessarily true. In many of the benchmarks Rust is equivalent to C, and sometimes even uses less memory and CPU. Keep in mind that memory consumption has more to do with the algorithms used than the language itself. Rust's memory safety does not play any role in the amount of memory used.

          Also, the kind of code submitted to a competitive benchmark competition is very different from the kind of code in a practical real world project. There's a lot of real world scenarios where it's easy to write an efficient algorithm in Rust that would otherwise be too dangerous to attempt in C while still having some resemblance of reliability. It also really helps out a lot that Rust's de facto string type is UTF-8 rather than null-terminated, which makes iterating and splitting possible without reallocations.
          I didn't say that's the case every time, I said almost always, and your link proves exactly that (I looked at the first occurrences of C and Rust, not at every single C and Rust entry).
          Also, the fact that some of these tests have like 8 versions of Rust and 1 of C tells me that the benchmark is heavily biased towards Rust (like the k-nucleotide test). But regardless, a benchmark should have 1 version per language, the one that is most representative of real-world source code, and since that's (probably) impossible to determine ... it is what it is.
          Last edited by cl333r; 25 November 2022, 11:09 AM.



          • #75
            Originally posted by cl333r View Post

            Well, benchmarks are a deep hole imho. They usually have code that is optimized for the tests themselves, which rarely if ever is present in real code. I remember Google doing research that concluded C++ is the fastest language, but the source code tweaks they did to achieve this are pretty much never present in real C++ code.
            I didn't check for Rust, but since there are many code versions of Rust for each/some tests, I imagine that's exactly what's going on. And since C++ is basically a superset of C, one can "optimize" the C++ and easily win.
            There are two things that are actually easy to benchmark: lossless compression (zip and co.) and video playback. You always do the same thing over and over again, looking for the same pattern using the same code, and most of the time the instructions involved are so few and so frequently executed that they stay in the CPU cache.



            • #76
              Originally posted by mmstick View Post
              Pointers are just integers. It's the buffers that some integers point to which matter, and it is trivial to reuse a buffered type's memory.
              I didn't explain well enough what I meant:
              • If I have a function that adds values I could clone my variable, let 2 threads add values and add the returned values.
              • In C I would simply give the same pointer to both functions and wait till they are done.
              In Rust my value's memory would be doubled. And for more complex scenarios I would probably run into concurrency problems with my C approach.
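For what it's worth, the doubled-memory outcome isn't forced. A sketch of my own (requires Rust 1.63+ for std scoped threads): two threads mutate disjoint halves of the same buffer, C-style, with no clone — `split_at_mut` proves to the compiler that the halves cannot alias.

```rust
use std::thread;

// Two threads each add 1 to their half of one shared buffer.
// No copy of the data is made; the borrow checker accepts this
// because split_at_mut yields provably non-overlapping slices.
fn add_one_in_parallel(data: &mut [i64]) {
    let mid = data.len() / 2;
    let (left, right) = data.split_at_mut(mid);
    thread::scope(|s| {
        s.spawn(move || left.iter_mut().for_each(|x| *x += 1));
        s.spawn(move || right.iter_mut().for_each(|x| *x += 1));
    }); // scope joins both threads before returning
}

fn main() {
    let mut data = vec![0i64; 6];
    add_one_in_parallel(&mut data);
    assert_eq!(data, vec![1i64; 6]);
}
```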

              Originally posted by cl333r View Post
              Also the fact that some of these tests have like 8 versions of Rust and 1 of C tells me that the benchmark is heavily biased towards Rust (like the k-nucleotide test).
              That benchmark game existed before Rust. Anyone can submit code and shill for whatever language.

              Originally posted by xfcemint View Post
              Technically, in C / C++, you can probably efficiently implement almost any algorithm. The only question, as you have pointed out, is: how hard will it be. But then the original question simply changes from: "which language is faster" into "which language is easier to use".
              Ease of use is always a factor if you want to replicate real-world usage. One also shouldn't put too much value in any benchmarks (even on Phoronix). If you know all the conditions you can get some clues from benchmarks, but they almost never give you true answers for your own use case.



              • #77
                Originally posted by mmstick View Post
                There are a lot of false statements in here.
                Memory safety is purely a compile-time concept. It prevents you from creating multiple aliases of a mutable reference (aliasing XOR mutability). Thread safety is also guaranteed through trait markers which are a compile-time concept that simply generates a compilation error if a !Send type is moved across thread boundaries or a !Sync type is being shared across threads.
                Obviously... but a side effect is that it forces slightly higher memory use, because some "unsafe" methods cannot be used due to those checks. The fact remains that "purely compile time" isn't realistic... it DOES affect how the runtime works... after all, that's the whole damn point anyway.



                • #78
                  Originally posted by cb88 View Post

                  Obviously... but a side effect is that it forces slightly higher memory use, because some "unsafe" methods cannot be used due to those checks. The fact remains that "purely compile time" isn't realistic... it DOES affect how the runtime works... after all, that's the whole damn point anyway.
                  Rust has unsafe fns you can call inside unsafe blocks, which allow for unchecked operations (e.g. constructing a String from a buffer of bytes without verifying that it's valid UTF-8).

                  They're used to build the safe abstractions, they're "You can't verify this, but I've audited it myself" messages to the compiler, and they're part of Rust's "wrap the tiny bits that need to be unsafe in abstractions which uphold the safety invariants so the rest of the codebase can be safe" philosophy.
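A minimal sketch of that philosophy (the function name is my own invention): the `unsafe` block is a few lines wide, its invariant is checked immediately beforehand, and everything outside the function sees only a safe API.

```rust
// Safe wrapper around an unchecked conversion: callers get a safe API,
// and the unsafe block is justified by an invariant verified right here.
fn ascii_upper(bytes: &[u8]) -> Option<String> {
    if !bytes.is_ascii() {
        return None; // reject anything that isn't ASCII
    }
    let upper: Vec<u8> = bytes.iter().map(|b| b.to_ascii_uppercase()).collect();
    // SAFETY: the input was verified to be ASCII, and uppercasing ASCII
    // yields ASCII, which is always valid UTF-8 — so we can skip the
    // re-validation that the checked String::from_utf8 would perform.
    Some(unsafe { String::from_utf8_unchecked(upper) })
}

fn main() {
    assert_eq!(ascii_upper(b"redox os"), Some("REDOX OS".to_string()));
    assert_eq!(ascii_upper(&[0xFF]), None); // non-ASCII is rejected safely
}
```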



                  • #79
                    Originally posted by Rallos Zek View Post

                    Meh... None of the above were better.




                    IA-64 was garbage, from using a VLIW ISA in a general-purpose processor to being slower and hotter than anything else on the market. IA-64 was called a failure by many before it was even released.
                    Your revisionist history is showing.



                    • #80
                      Originally posted by xfcemint View Post

                      I'm not an expert on that subject, but in my humble opinion GPL would be a better choice. (compare: Linux vs. BSD).

                      It's never too late to change the course...
                      GPL hinders driver support from manufacturers: just look at NVIDIA and ARM.
