Rust Support In The Linux Kernel Undergoing Another Round Of Discussions


  • jacob
    replied
    Originally posted by WorBlux View Post
    I'm doubtful Rust is going to catch protocol-level bugs like the Bluetooth or EMV protocol flaws, such as the PIN-bypass MitM attack that was recently found. But I do see the benefit of having a more organized and safer code base for issuing patches more quickly.
    By itself and automagically, of course not. I meant implementation specs, not protocol or algorithm specs; you can always implement a bad algorithm in any language, including Rust. What it *can* do, though, is avoid various implementation flaws of a protocol or algorithm specification using concepts like typestates. Of course, in theory it is possible to do that in C too. In practice, doing it in C is 1) annoying and troublesome, 2) error-prone and 3) not really reliably enforced by the compiler anyway.
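    To make the typestate idea concrete, here is a minimal, hypothetical sketch (the names `Conn`, `authenticate` and `send` are made up for illustration, not from any real protocol stack): a connection whose `send` method only exists after authentication, so calling it in the wrong order is a compile error rather than a runtime protocol flaw.

```rust
use std::marker::PhantomData;

// Zero-sized marker types encoding the protocol state in the type system.
struct Unauthenticated;
struct Authenticated;

struct Conn<State> {
    _state: PhantomData<State>,
}

impl Conn<Unauthenticated> {
    fn new() -> Self {
        Conn { _state: PhantomData }
    }
    // Authentication consumes the unauthenticated connection and returns an
    // authenticated one; the old value can no longer be used afterwards.
    fn authenticate(self) -> Conn<Authenticated> {
        Conn { _state: PhantomData }
    }
}

impl Conn<Authenticated> {
    // `send` is only defined for authenticated connections.
    fn send(&self, msg: &str) -> usize {
        msg.len() // stand-in for real I/O
    }
}

fn main() {
    let conn = Conn::new().authenticate();
    println!("{}", conn.send("hello"));
    // Conn::new().send("hi"); // compile error: no `send` on Conn<Unauthenticated>
}
```

    The compiler, not a code review, enforces the call order; a C implementation would typically encode the state in a runtime flag and hope every call site checks it.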

    Originally posted by WorBlux View Post
    And Redox is certainly an interesting project. As noted before, I'm cautiously optimistic about it.
    I agree, I think it's interesting in a number of ways, not just because it's written in Rust.



  • WorBlux
    replied
    Originally posted by jacob View Post
    In other words, I'm not aware of anything that can be proven about C code that cannot be proven more easily in Rust. Of course Valgrind etc. can be, and are, used for Rust as well.
    Fair enough.

    Originally posted by jacob View Post
    I'm not hating C here; for decades it was the only language other than assembly that could actually be used for low level systems programming. But we should beware of the Baby Duck syndrome; the fact that something was done in C for lack of other options doesn't mean that C is somehow ideal for that or that it can't be superseded. I can see a role for C well into the future by the way; while unsafe Rust can be used for extremely low level code (say, a context switch), it's just easier and more straightforward to write it in C. When writing C, one can essentially "see through" and visualise the underlying assembly that the compiler will generate, not so in Rust (at least I can't). But on the other hand, the Linux kernel also handles various highly complex data structures and sophisticated algorithms and those would definitely be better written in Rust.
    I think part of the issue is that many ISAs are specifically intended to map well to C code. Future ISAs could well bring features that make it easier for Rust/Go/Haskell to map down to the bare metal and thereby improve performance.

    Originally posted by jacob View Post
    When this happens, it's in fact an excellent outcome because you are served a proof that your spec is flawed and why. It's much preferable to the situation that happens routinely in C where a spec is happily implemented without anyone realising that it's flawed (because the flaw is usually non-obvious) and then you deal with bugs, vulnerabilities, race conditions and a code base that no-one dares to touch because it's a notorious minefield.
    I'm doubtful Rust is going to catch protocol-level bugs like the Bluetooth or EMV protocol flaws, such as the PIN-bypass MitM attack that was recently found. But I do see the benefit of having a more organized and safer code base for issuing patches more quickly.

    Originally posted by jacob View Post
    Provability around memory safety, error handling and data races is certainly a major benefit for a kernel of all things.

    Another benefit can be performance. C is not really the hyper-efficient language that many people believe, and the Rust memory model allows a compiler to perform various optimisations that are difficult or impossible in C, including autovectorisation, struct alignment tricks (or outright destructuring), autoparallelisation, etc. You could say that at the moment Rust hasn't reached its full potential in that area; in most cases its runtime performance is somewhere around C or C++ (depending on the case), but the promise is there.
    And Redox is certainly an interesting project. As noted before, I'm cautiously optimistic about it.





  • 60Hz
    replied
    Originally posted by ssokolow View Post
    Because LLVM's optimizers were developed for C and C++.

    Even when they should support something Rust can do much better, it's tricky.
    Typical Rustbot excuses. This is just the "sufficiently smart compiler" argument.



  • jacob
    replied
    Originally posted by ssokolow View Post

    *nod* Because LLVM's optimizers were developed for C and C++.

    Even when they should support something Rust can do much better, it's tricky. For example, the Rust developers keep having to postpone attempts to turn on marking things noalias in the LLVM IR they emit because they keep revealing bugs in the LLVM optimizers that the much sparser use of restrict in C and __restrict__ in C++ don't trigger.
    That's one issue; another is that certain transformations need to be done at a higher level and are simply not implemented in rustc at this stage. Now that the MIR is in place, though, a number of things become at least possible.
    Last edited by jacob; 17 April 2021, 08:58 PM.



  • ssokolow
    replied
    Originally posted by jacob View Post
    You could say that at the moment Rust hasn't reached its full potential in that area; in most cases its runtime performance is somewhere around C or C++ (depending on the case) but the promise is there.
    *nod* Because LLVM's optimizers were developed for C and C++.

    Even when they should support something Rust can do much better, it's tricky. For example, the Rust developers keep having to postpone attempts to turn on marking things noalias in the LLVM IR they emit because they keep revealing bugs in the LLVM optimizers that the much sparser use of restrict in C and __restrict__ in C++ don't trigger.
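    As a hedged illustration of why Rust can emit `noalias` so pervasively: any two live `&mut` references are statically guaranteed not to alias, a guarantee C only gets from a hand-written, unchecked `restrict`. The function below is a made-up example of the pattern:

```rust
// Because `a` and `b` are both `&mut`, the borrow checker guarantees they
// never point at the same i32, so the compiler may keep `*a` cached in a
// register across the write to `*b`. The equivalent C function would need
// `int *restrict a, int *restrict b`, and nothing would verify the claim.
fn add_twice(a: &mut i32, b: &mut i32) -> i32 {
    *a += 1;
    *b += 1; // cannot invalidate the compiler's knowledge of *a
    *a + *b
}

fn main() {
    let (mut x, mut y) = (1, 2);
    // The borrow checker rejects add_twice(&mut x, &mut x) at compile time,
    // which is exactly the aliasing case `restrict` silently miscompiles.
    println!("{}", add_twice(&mut x, &mut y)); // 5
}
```

    Because every such pair can be marked `noalias`, Rust exercises those LLVM code paths far more heavily than C code ever did, which is how the latent optimizer bugs surface.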



  • jacob
    replied
    Originally posted by WorBlux View Post
    Yes, the invariants make it easier to reason about Rust code, but that's not to say you can't construct proofs around C or even assembly to prove a match to a specification (as the seL4 kernel does). And then there are tools like Valgrind or Infer, which can catch a lot of mistakes.
    That by itself is not a vindication of C. You can also prove some invariants (to an even lesser degree) in pure assembly. If your goal was strictly to do OOP, the fact that you can use things like GObject in C doesn't mean that Python is not an infinitely better OOP language.

    In other words, I'm not aware of anything that can be proven about C code that cannot be proven more easily in Rust. Of course Valgrind etc. can be, and are, used for Rust as well.

    I'm not hating C here; for decades it was the only language other than assembly that could actually be used for low level systems programming. But we should beware of the Baby Duck syndrome; the fact that something was done in C for lack of other options doesn't mean that C is somehow ideal for that or that it can't be superseded. I can see a role for C well into the future by the way; while unsafe Rust can be used for extremely low level code (say, a context switch), it's just easier and more straightforward to write it in C. When writing C, one can essentially "see through" and visualise the underlying assembly that the compiler will generate, not so in Rust (at least I can't). But on the other hand, the Linux kernel also handles various highly complex data structures and sophisticated algorithms and those would definitely be better written in Rust.

    Originally posted by WorBlux View Post
    And you could also end up in a situation where you implement a specification properly in Rust, only to find out that the specification itself is flawed in some way. As you said, there's no silver bullet.
    When this happens, it's in fact an excellent outcome because you are served a proof that your spec is flawed and why. It's much preferable to the situation that happens routinely in C where a spec is happily implemented without anyone realising that it's flawed (because the flaw is usually non-obvious) and then you deal with bugs, vulnerabilities, race conditions and a code base that no-one dares to touch because it's a notorious minefield.

    Originally posted by WorBlux View Post
    I'm not really sure of what the proper role of Rust in the Linux kernel is, but it's certainly intriguing. Time will tell how well it can be integrated.
    Provability around memory safety, error handling and data races is certainly a major benefit for a kernel of all things.

    Another benefit can be performance. C is not really the hyper-efficient language that many people believe, and the Rust memory model allows a compiler to perform various optimisations that are difficult or impossible in C, including autovectorisation, struct alignment tricks (or outright destructuring), autoparallelisation, etc. You could say that at the moment Rust hasn't reached its full potential in that area; in most cases its runtime performance is somewhere around C or C++ (depending on the case), but the promise is there.
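    As a sketch of the kind of code this helps (the `saxpy` function below is a hypothetical example, not kernel code): iterating over slices carries no pointer-overlap ambiguity, so the optimiser is free to autovectorise the loop without the programmer adding anything.

```rust
// y[i] += a * x[i] over two slices. The `&mut [f32]` / `&[f32]` types tell
// the compiler the buffers cannot overlap, so the loop body can be safely
// vectorised; the equivalent C loop over two `float *` parameters must
// assume possible overlap unless `restrict` is added by hand.
fn saxpy(a: f32, xs: &[f32], ys: &mut [f32]) {
    for (y, x) in ys.iter_mut().zip(xs.iter()) {
        *y += a * *x;
    }
}

fn main() {
    let xs = [1.0, 2.0, 3.0, 4.0];
    let mut ys = [0.5f32; 4];
    saxpy(2.0, &xs, &mut ys);
    println!("{:?}", ys); // [2.5, 4.5, 6.5, 8.5]
}
```

    The iterator form also elides bounds checks, so nothing in the source stands between the optimiser and a clean SIMD loop.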



  • WorBlux
    replied
    Originally posted by jacob View Post

    No silver bullet exists or can exist, but in Rust it is possible to statically prove certain invariants (including, but not only, memory related) that cannot be proven in C. Heuristics are better than nothing but essentially in this case they will take you at best somewhere on the level of C++ >= 11.
    Yes, the invariants make it easier to reason about Rust code, but that's not to say you can't construct proofs around C or even assembly to prove a match to a specification (as the seL4 kernel does). And then there are tools like Valgrind or Infer, which can catch a lot of mistakes.

    And you could also end up in a situation where you implement a specification properly in Rust, only to find out that the specification itself is flawed in some way. As you said, there's no silver bullet.

    I'm not really sure of what the proper role of Rust in the Linux kernel is, but it's certainly intriguing. Time will tell how well it can be integrated.



  • programmerjake
    replied
    Originally posted by piotrj3 View Post

    Personally, I just find it a beauty of C: converting C to ASM in your head, and vice versa, is quite easy. Converting Rust or C++ is hard. Still, most of the time you do not need such knowledge; e.g. I find Rust a perfect language for writing something like a parser.
    Well, Rust, C, and C++ all become very non-1:1 with the resulting assembly when using any decent modern optimizing compiler (such as GCC, Clang, ICC, or MSVC).

    Example of C with clang 12:
    https://gcc.godbolt.org/z/6TPo4fEjT
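    The same non-1:1 effect is easy to reproduce in Rust (a small illustrative example of my own, not taken from the godbolt link above): with optimisations enabled, rustc/LLVM typically folds this entire loop into a single constant, so no loop appears in the generated assembly at all.

```rust
// With -O, the optimiser usually replaces the whole function body with
// the constant 5050; the source-level loop simply does not survive into
// the assembly, which is exactly the non-1:1 mapping being discussed.
fn sum_to_100() -> u32 {
    let mut total = 0;
    for i in 1..=100 {
        total += i;
    }
    total
}

fn main() {
    println!("{}", sum_to_100()); // 5050
}
```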
    Last edited by programmerjake; 16 April 2021, 02:19 PM.



  • piotrj3
    replied
    Originally posted by JustRob View Post
    As a C programmer I resist the idea of adding Rust, since I can literally predict the resulting assembly language of my C program, thanks to my safe programming practices and avoidance of ill-formed code.

    But when I try to argue against Rust I find things such as Oso Polar which bolster its inclusion; still I resist.
    Personally, I just find it a beauty of C: converting C to ASM in your head, and vice versa, is quite easy. Converting Rust or C++ is hard. Still, most of the time you do not need such knowledge; e.g. I find Rust a perfect language for writing something like a parser.



  • oleid
    replied
    It will be interesting to see how this changes once gcc-rs enters the ring. Also, the quality of debug builds should improve when the Cranelift backend lands; it is expected to replace LLVM for debug builds eventually.

