
The Latest Progress On Rust For The Linux Kernel


  • Originally posted by ultimA View Post

    I said nothing about RefCells...



    I wasn't talking about memory management only, you both are still stuck in your discussion with cl333r. I meant in Rust in general. Allocation failures are a good example as oleid said a little bit earlier, but there are also bounds checking and arithmetic (under-/overflow, downcasting) checks. I know these can be disabled, but then they also don't provide their advantages anymore.
    As I mentioned, bounds checks are often elided: not just by turning them off, but because the compiler can prove they are unnecessary. In idiomatic Rust code, runtime bounds checks are actually rare, and they mostly happen in the same situations as manual checks in C. Notably, they are much rarer than in C++'s std::vector etc. Interestingly, if you actually take care that an index is not out of bounds (as you would in C or C++), the compiler knows that in the code path that executes the base+offset addressing the index is already valid, and it doesn't need to perform a runtime check.
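
    A hypothetical sketch of that last point (the function is mine, not from any project): clamping the loop bound to the slice length up front lets the optimiser prove every index is in bounds, so no per-iteration check remains.

```rust
// Sketch: the explicit clamp below lets the compiler prove that
// `data[i]` is always in bounds, so the per-iteration runtime
// bounds check can be elided.
fn sum_first(data: &[i32], n: usize) -> i32 {
    let n = n.min(data.len()); // manual check, as you would write in C
    let mut total = 0;
    for i in 0..n {
        total += data[i]; // bounds check provably redundant here
    }
    total
}
```

    The same effect is why idiomatic iterator-based loops carry no bounds checks at all.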

    Memory allocation can fail, but that's a killer in pretty much any language. OK, say in Java the program doesn't panic directly, but you get an OutOfMemoryError... and then what? It crashes, because that's basically the only thing it can do. Same in C: malloc() would return NULL, and then you either abort(), or you pretend to handle it gracefully but can't really, because with no more RAM available to the process, it can't do much anyway.
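
    For completeness, Rust does expose the C-like failure mode if you drop below Box/Vec: std::alloc::alloc really does return a null pointer on failure rather than aborting. A sketch (the helper names are made up for illustration):

```rust
use std::alloc::{alloc, dealloc, Layout};

// Mirrors C's `malloc` + NULL check: the raw allocator reports
// failure with a null pointer instead of aborting the process.
fn try_alloc_bytes(n: usize) -> Option<(*mut u8, Layout)> {
    if n == 0 {
        return None; // zero-sized raw allocations are not allowed
    }
    let layout = Layout::from_size_align(n, 1).ok()?;
    let ptr = unsafe { alloc(layout) };
    if ptr.is_null() {
        None // the moral equivalent of malloc returning NULL
    } else {
        Some((ptr, layout))
    }
}

// Counterpart to C's free(); the layout must match the allocation.
fn free_bytes(ptr: *mut u8, layout: Layout) {
    unsafe { dealloc(ptr, layout) }
}
```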

    Downcasting can also fail, but again, it's not more frequent than in other languages. It would fail in pretty much the same cases as dynamic_cast<> would fail in C++. Casting from void* to some type in C obviously never fails, but that's not a good thing at all, except when it's used as a poor man's substitute for generics. In Rust you would obviously never use the Any trait for that.

    Comment


    • Originally posted by ultimA View Post
      It will only become a non-issue when it finally gets addressed and a "fix" is available, not when people admit it is an issue.
      Also, you've got the definition of "concern trolling" completely wrong.
      Well, since you stated yourself that you agree on the borrow checker part, that's why I thought you might be concern trolling, but I can see you were being genuine.

      Originally posted by ultimA View Post
      Because this is where the discussion originally started. It's not my fault you went off-topic for 10+ pages discussing with someone else whether the borrow-checker has runtime checks or not. I don't feel any need to continue that discussion. Do you? Especially since you and I agree on that topic.
      Agreed, it's not worth discussing since we are on the same page. I take back my statement about the trolling bit.

      Originally posted by jacob View Post

      Memory allocation can fail, but that's a killer in pretty much any language. OK, say in Java the program doesn't panic directly, but you get an OutOfMemoryError... and then what? It crashes, because that's basically the only thing it can do. Same in C: malloc() would return NULL, and then you either abort(), or you pretend to handle it gracefully but can't really, because with no more RAM available to the process, it can't do much anyway.
      As someone who works as a full-time Java/Scala dev, I concur with this. For standard userspace applications/libraries, when you get into such a state the sanest thing to do is to panic/crash (if possible, you try to log something just before the crash happens). This is why you should almost never catch and suppress OOM in Java/the JVM: once you get to that point, due to how the GC works, nothing is really deterministic anymore, i.e. you are in no man's land.

      Of course the kernel has different priorities, and at least if we are talking about rustc and the borrow checker, there is nothing really fundamental enforcing this behavior (unlike languages like Java, which run on a JVM with a GC and hence are designed completely around having one).

      Ultimately the point being made is that Rust, just like C, is a low-level language; it just has different defaults (which are IMHO the correct ones), and in some minor cases some adjustments need to be made.
      mdedetrich
      Senior Member
      Last edited by mdedetrich; 13 September 2021, 06:31 AM.

      Comment


      • Wow... that was quite a thread. I'm gone for a day and I come back to something so... something that I'm not sure how to describe it.

        Comment


        • Originally posted by ssokolow View Post
          Wow... that was quite a thread. I'm gone for a day and I come back to something so... something that I'm not sure how to describe it.
          Welcome to the internet, enjoy your stay

          Comment


          • Originally posted by jacob View Post
             Memory allocation can fail, but that's a killer in pretty much any language. OK, say in Java the program doesn't panic directly, but you get an OutOfMemoryError... and then what? It crashes, because that's basically the only thing it can do. Same in C: malloc() would return NULL, and then you either abort(), or you pretend to handle it gracefully but can't really, because with no more RAM available to the process, it can't do much anyway.
             I'm not saying in other languages there is some magic piece of code you can put in to get more memory and continue with the same operation when an allocation fails. But there is a huge difference in *how* you fail! On the desktop, if you have multiple documents open and cannot open a new one, you'd prefer to simply give an error message to the user instead of crashing the whole process and maybe even losing unsaved work. Or in another scenario, the user could continue with another operation that is less memory-hungry, or at least get a chance to save. Or maybe the user can manually retry even the same operation without losing state, because he closed some external processes after seeing the friendly out-of-memory message. And lastly, in any case, even if the user cannot continue at all, the user experience alone is still a lot nicer if the application handles the failure itself rather than just blindly crashing. This is all possible because failing to allocate 100 KB of memory doesn't mean you won't be able to allocate another KB or so. And then there is a large class of safety-critical applications where even when you think you probably cannot continue, you must still try anyway.

             All I'm saying is that the notion of "wanting to crash by default" is actually a very poor choice for any application (user-land or kernel, embedded or desktop) that wants to handle failures itself, and there are plenty of reasons to do so. And I'm also saying this as a full-time developer. Wanting to handle failures yourself is good coding practice, not blindly crashing. If it turns out you don't have enough memory to even handle the failure gracefully, you'll just crash anyway, so you're no worse off than with Rust's default, but in all likelihood you'll succeed by being graceful. Not being graceful and simply crashing might be the better choice if the only alternative is not having any error checks done by the programmer at all, but that only applies to poorly written programs, and I'm sure that's not Rust's targeted use case.

             Rust recognized these problems and has committed to introducing fallible allocations, but they are still not available after some four years, and when they finally become stabilized, it will still take a long time until they trickle down to most crates, if at all. Until then, apps have to rely on custom versions of crates and standard library functions to solve this in any sane way, which is insane.
            ultimA
            Senior Member
            Last edited by ultimA; 13 September 2021, 07:15 AM.

            Comment


            • Originally posted by ultimA View Post

               I'm not saying in other languages there is some magic piece of code you can put in to get more memory and continue with the same operation when an allocation fails. But there is a huge difference in *how* you fail! On the desktop, if you have multiple documents open and cannot open a new one, you'd prefer to simply give an error message to the user instead of crashing the whole process and maybe even losing unsaved work. Or in another scenario, the user could continue with another operation that is less memory-hungry, or at least get a chance to save. Or maybe the user can manually retry even the same operation without losing state, because he closed some external processes after seeing the friendly out-of-memory message. And lastly, in any case, even if the user cannot continue at all, the user experience alone is still a lot nicer if the application handles the failure itself rather than just blindly crashing. This is all possible because failing to allocate 100 KB of memory doesn't mean you won't be able to allocate another KB or so. And then there is a large class of safety-critical applications where even when you think you probably cannot continue, you must still try anyway.

               All I'm saying is that the notion of "wanting to crash by default" is actually a very poor choice for any application (user-land or kernel, embedded or desktop) that wants to handle failures itself, and there are plenty of reasons to do so. Wanting to handle failures yourself is good coding practice, not blindly crashing. Simply crashing might be the better choice if the only alternative is not having any error checks done by the programmer at all, but that only applies to poorly written programs, and I'm sure that's not Rust's targeted use case.

               Rust recognized these problems and has committed to introducing fallible allocations, but they are still not available after some four years, and when they finally become stabilized, it will still take a long time until they trickle down to most crates, if at all. Until then, apps have to rely on custom versions of crates and standard library functions to solve this in any sane way, which is insane.
               That's the theory. In practice, if you run out of memory in a desktop app, you won't be able to display a warning message, because you won't be able to allocate the (large and numerous) data structures necessary to display a pop-up window with widgets in it. You won't be able to allocate a buffer to serialise data in order to save it either. That's the point: when you think about it, once you can't allocate on the heap, there is very little you can actually do in a normal program. Especially on the desktop, you won't even be able to process UI events any more (because they involve allocation!). C++ is actually even worse in that regard, because by default it will try to throw bad_alloc, but that involves allocating an exception on the heap... and if you haven't taken special precautions to keep some extra memory for that, you can easily imagine what happens.

               Now of course there are scenarios in which an out-of-memory condition can and should be handled, and Rust works there just fine because you can implement your own custom allocator, just like you can in any systems language. But with the default one, aborting with a clear message is the only meaningful out-of-the-box option in any language.
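
               To sketch what that looks like (illustrative, not from any real codebase): a custom global allocator is just a type implementing GlobalAlloc. This one merely wraps the system allocator and counts allocations, but the same hook could instead keep a reserve pool for graceful OOM reporting.

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts every heap allocation made through the global allocator.
struct CountingAlloc;

static ALLOCS: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCS.fetch_add(1, Ordering::Relaxed);
        System.alloc(layout) // delegate to the default system allocator
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

// Every Box, Vec, String, etc. in the program now goes through it.
#[global_allocator]
static GLOBAL: CountingAlloc = CountingAlloc;
```

               Note that this only hooks allocation; it doesn't change the abort-on-failure policy of Box/Vec themselves.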

              Comment


              • Originally posted by jacob View Post

                 That's the theory. In practice, if you run out of memory in a desktop app, you won't be able to display a warning message, because you won't be able to allocate the (large and numerous) data structures necessary to display a pop-up window with widgets in it. You won't be able to allocate a buffer to serialise data in order to save it either. That's the point: when you think about it, once you can't allocate on the heap, there is very little you can actually do in a normal program. Especially on the desktop, you won't even be able to process UI events any more (because they involve allocation!). C++ is actually even worse in that regard, because by default it will try to throw bad_alloc, but that involves allocating an exception on the heap... and if you haven't taken special precautions to keep some extra memory for that, you can easily imagine what happens.

                 Now of course there are scenarios in which an out-of-memory condition can and should be handled, and Rust works there just fine because you can implement your own custom allocator, just like you can in any systems language. But with the default one, aborting with a clear message is the only meaningful out-of-the-box option in any language.
                 Your points only apply if system memory is filled to the brim, which is rare. As I said, "just because you failed to allocate 100 KB of memory doesn't mean you won't be able to allocate another KB or so". In my experience, most of the time even when an allocation fails, you can still make other, smaller allocations. Saving a document to disk doesn't necessarily have to be memory-intensive, since often you can just stream bytes to the disk. Even work associated with saving, such as compression, is usually done in chunks, so you don't need to allocate enough space for a whole copy. Similarly, displaying a UI element about the error is no problem, as it is not usually a memory-hungry operation (especially if you already have some other UI on screen). Same thing with std::bad_alloc: yes, it allocates memory, but it only requires a couple of bytes, which is almost never a problem in practice. BTW, C++ also has a standard non-throwing new, though people rarely use it, exactly because the allocation of a bad_alloc is a non-issue in practice (when people choose new(nothrow), it is usually about a requirement to avoid exceptions altogether rather than about saving memory).
                ultimA
                Senior Member
                Last edited by ultimA; 13 September 2021, 07:58 AM.

                Comment


                • Originally posted by jacob View Post
                   Now of course there are scenarios in which an out-of-memory condition can and should be handled, and Rust works there just fine because you can implement your own custom allocator, just like you can in any systems language.
                   And what do you do about crates that don't use your allocator? Also, unless you absolutely need custom memory management for performance, rolling your own allocator is not what devs are interested in or usually do. With the "roll-your-own" argument you could also roll your own safe arithmetic or array-access functions in C. Sounds silly, right? It is. Just as silly as rolling your own allocator and modifying all your dependencies to use it, just to avoid unnecessary crashes that would otherwise have been easily avoidable. It's not about what you can do, it's about reasonable defaults and ease of coding. And I criticize that Rust's defaults for handling memory allocation are not good.
                  ultimA
                  Senior Member
                  Last edited by ultimA; 13 September 2021, 08:09 AM.

                  Comment


                  • Originally posted by ultimA View Post
                     And what do you do about crates that don't use your allocator? Also, unless you absolutely need custom memory management for performance, rolling your own allocator is not what devs are interested in or usually do. With the "roll-your-own" argument you could also roll your own safe arithmetic or array-access functions in C. Sounds silly, right? It is. Just as silly as rolling your own allocator and modifying all your dependencies to use it, just to avoid unnecessary crashes that would otherwise have been easily avoidable. It's not about what you can do, it's about reasonable defaults and ease of coding. And I criticize that Rust's defaults for handling memory allocation are not good.
                     Currently, you can specify a custom global allocator, and then that one gets used.
                     A few months back, https://github.com/rust-lang/rust/pull/84266 was merged. It adds an (as of today still unstable) feature which forbids global OOM handling. With it enabled, such crates would probably not compile.

                     Here is also an RFC on fallible allocation: https://rust-lang.github.io/rfcs/211...-me-maybe.html

                    Apart from that, you can have a look at the following roadmap:
                    https://github.com/rust-lang/wg-allocators/issues/48
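
                     For a taste of the fallible direction (sketch; the helper is mine, and stabilization details may differ from what the RFC describes), Vec::try_reserve returns a Result instead of aborting:

```rust
use std::collections::TryReserveError;

// Fallible allocation: try_reserve reports failure through a Result
// instead of aborting the process when memory cannot be obtained.
fn make_buffer(len: usize) -> Result<Vec<u8>, TryReserveError> {
    let mut buf = Vec::new();
    buf.try_reserve(len)?; // may fail; the caller decides how to react
    buf.resize(len, 0);    // safe: capacity is already reserved
    Ok(buf)
}
```

                     A caller can then show an error, retry with a smaller size, or save state, instead of crashing.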
                    oleid
                    Senior Member
                    Last edited by oleid; 13 September 2021, 08:26 AM.

                    Comment


                    • Originally posted by ultimA View Post
                       All I'm saying is that the notion of "wanting to crash by default" is actually a very poor choice for any application (user-land or kernel, embedded or desktop) that wants to handle failures itself, and there are plenty of reasons to do so. And I'm also saying this as a full-time developer. Wanting to handle failures yourself is good coding practice, not blindly crashing. If it turns out you don't have enough memory to even handle the failure gracefully, you'll just crash anyway, so you're no worse off than with Rust's default, but in all likelihood you'll succeed by being graceful. Not being graceful and simply crashing might be the better choice if the only alternative is not having any error checks done by the programmer at all, but that only applies to poorly written programs, and I'm sure that's not Rust's targeted use case.
                       On various platforms (Linux included), you don't really have any choice, because the kernel defaults to overcommit semantics to compensate for applications that waste a ton of memory by mapping much more than they actually write to.

                       That means that even if you handle malloc failure perfectly, malloc will report success, but when you try to write to that memory, the kernel may suddenly realize it has overpromised what memory is available and have to kill something, long after all the branches for handling allocation failure in the software have already been told the allocation succeeded.

                      Comment
