There are a lot of false statements in here.
Memory safety in Rust is enforced primarily at compile time. The borrow checker prevents you from creating multiple aliases of a mutable reference (aliasing XOR mutability). Thread safety is likewise guaranteed through the Send and Sync marker traits, which are a compile-time concept: the compiler simply emits an error if a !Send type is moved across a thread boundary or a !Sync type is shared between threads.
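A minimal sketch of both compile-time checks, using only the standard library; the commented-out lines are the ones the compiler rejects:

```rust
use std::rc::Rc;
use std::thread;

fn main() {
    let mut v = vec![1, 2, 3];

    // Aliasing XOR mutability: any number of shared borrows, or exactly
    // one mutable borrow, but never both live at the same time.
    let r1 = &v;
    let r2 = &v;
    assert_eq!(r1[0] + r2[0], 2); // shared aliases are fine

    let m = &mut v; // OK: the shared borrows above are no longer live
    m.push(4);
    // assert_eq!(r1[0], 1); // uncommenting this is a compile error:
    //                       // cannot borrow `v` as mutable while `r1` is in use
    assert_eq!(v.len(), 4);

    // Send is a marker trait checked purely at compile time. Rc is !Send,
    // so moving one into another thread is rejected before the program runs:
    let rc = Rc::new(5);
    // thread::spawn(move || assert_eq!(*rc, 5)); // compile error: Rc<i32> is !Send
    assert_eq!(*rc, 5);
    thread::spawn(|| ()).join().unwrap(); // a Send closure is accepted
}
```

Note that both rejections happen before any code runs, which is the sense in which these guarantees are "compile-time concepts".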
Pointers are just integers; what matters is the buffers that some of those integers point to, and it is trivial to reuse a buffer-backed type's memory. Types like Vec and String have clear() methods that let you reuse the buffer without reallocation. You can also take advantage of mem::swap to swap values, which is very useful for getting temporary ownership of the value behind a mutable reference. And there are a number of approaches for creating a memory pool or arena that reuses short-lived buffers, returning them to the pool when no longer in use. Even with a boxed type, you can easily reset the state the same as any other variable on the stack.
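A short sketch of those reuse patterns with Vec, String, mem::swap, and mem::take:

```rust
use std::mem;

fn main() {
    // Reusing a buffer: clear() drops the elements but keeps the
    // allocation, so subsequent pushes need no new heap allocation.
    let mut buf = Vec::with_capacity(16);
    buf.extend_from_slice(&[1, 2, 3]);
    let cap_before = buf.capacity();
    buf.clear();
    assert_eq!(buf.len(), 0);
    assert_eq!(buf.capacity(), cap_before); // allocation retained

    // mem::swap and mem::take give temporary ownership of the value
    // behind a mutable reference without cloning or unsafe code.
    let mut a = String::from("hello");
    let mut b = String::from("world");
    mem::swap(&mut a, &mut b);
    assert_eq!(a, "world");

    let taken = mem::take(&mut b); // leaves b as the empty String
    assert_eq!(taken, "hello");
    assert!(b.is_empty());
}
```

The same swap/take trick is the usual building block for a simple pool: check a buffer out, clear() it, and swap it back in when done.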
To be fair, if you are not interested in what zero-cost means to Rust, then it is not fair to criticize it either. When the official documentation declares something to be zero-cost, it means that the syntax and abstractions used to achieve a desired result are as efficient as if you had written the equivalent code from scratch without those abstractions.
One such example is zero-sized types, which exist only at compile time and therefore consume no memory. Similarly, a newtype exists purely as a language concept and is no different from the type it wraps after compilation. Another example is how an Option<T>, where T supports the null-pointer (niche) optimization, consumes the same amount of memory as T itself. Likewise, an enum will typically consume only as much memory as its largest variant plus a discriminant where one is needed (though this was at one time not well optimized). Or consider how chaining iterator adapters through the Iterator trait in a functional style produces a deeply nested type with many layers of abstraction, which the compiler reduces into machine code no different from finely-optimized assembly written by hand in traditional imperative fashion.
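These layout claims can be checked directly with std::mem::size_of; the Meters newtype and Shape enum below are made-up illustrations:

```rust
use std::mem::size_of;
use std::num::NonZeroU32;

// A newtype compiles away: it has exactly the layout of the wrapped type.
struct Meters(f64);

fn main() {
    assert_eq!(size_of::<Meters>(), size_of::<f64>());

    // Zero-sized types occupy no memory at all.
    struct Marker;
    assert_eq!(size_of::<Marker>(), 0);

    // Niche (null-pointer) optimization: Option<T> costs nothing extra
    // when T has a forbidden bit pattern the None case can reuse.
    assert_eq!(size_of::<Option<Box<u8>>>(), size_of::<Box<u8>>());
    assert_eq!(size_of::<Option<NonZeroU32>>(), size_of::<u32>());

    // An enum is roughly the size of its largest variant
    // plus space for a discriminant.
    enum Shape {
        Circle(f64),
        Rect(f64, f64),
    }
    let _ = Shape::Circle(1.0);
    assert!(size_of::<Shape>() >= size_of::<(f64, f64)>());

    // An iterator chain computes the same result as the hand-written loop;
    // after optimization the two compile to equivalent machine code.
    let sum: i32 = (1..=10).filter(|n| n % 2 == 0).map(|n| n * n).sum();
    assert_eq!(sum, 4 + 16 + 36 + 64 + 100);
}
```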