The First Rust-Written Network PHY Driver Set To Land In Linux 6.8


  • Originally posted by ssokolow View Post

    Funny. I generally find it pretty easy to write Rust projects where libc is the only C dependency and, because of that, making fully statically linked musl-libc builds requires no toolchain setup beyond rustup target add x86_64-unknown-linux-musl.

    ...and the only reason it needs musl-libc at all is because they decided it wasn't worth it to have a separate Linux syscall backend when they'd need a POSIX libc backend anyway for the BSDs, where libc is developed as "part of the kernel which happens to run in userspace". (To the point where OpenBSD will assume your process has been subject to an ACE exploit and kill your process if you try to syscall without going through libc.)
    Your discussion is way over my head, but the Redox project was rewriting the libc library in Rust here.


    If I remember right, the biggest obstacle to adopting this in general is that Rust doesn't support the same variety of hardware targets as C does, not the language's ability to get the job done.

    Comment


    • Originally posted by rommyappus View Post
      Your discussion is way over my head
      Basically, because the kernel is responsible for loading and executing binaries, it has control over where libc gets mapped into the process's address space and whether that memory is read-only.

      Likewise, because of how system calls work, the kernel can see where in memory the code doing the system call is located.

      OpenBSD makes a note of where in memory each process has libc loaded and, when you make a system call, checks whether the code dispatching that system call lives inside the read-only memory region where libc is mapped.

      It's sort of like how recent Intel and AMD CPUs have a feature named Shadow Stack, where the CPU keeps a backup copy of the function-return entries on the stack in memory the program itself can't touch. The shadow copy is updated on every CALL and cross-checked on every RET, so if an exploit overwrites part of the stack, the return pointers in the two stacks won't match and the CPU can hand off to the kernel's "this program has done something bad and needs to be killed" handler.
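      To make the "syscall without going through libc" part concrete, here's a rough sketch of a raw syscall in Rust (x86_64 Linux only; the syscall number and register use follow the standard Linux convention, and the little demo program itself is made up). On OpenBSD, this exact pattern is what gets a process killed, because the syscall instruction isn't inside the libc mapping:

          use std::arch::asm;

          fn main() {
              let msg = b"hello, bypassing libc's write()\n";
              let ret: isize;
              unsafe {
                  // Raw write(2) on x86_64 Linux: syscall number 1 in rax,
                  // arguments in rdi/rsi/rdx; rcx and r11 are clobbered.
                  asm!(
                      "syscall",
                      inlateout("rax") 1isize => ret, // SYS_write in, return value out
                      in("rdi") 1usize,               // fd 1 = stdout
                      in("rsi") msg.as_ptr(),
                      in("rdx") msg.len(),
                      out("rcx") _,
                      out("r11") _,
                  );
              }
              assert!(ret > 0); // bytes written
          }

      Going through libc instead means the actual syscall instruction lives inside libc's mapping, which is what OpenBSD's origin check relies on.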

      Originally posted by rommyappus View Post
      ...but the Redox project was rewriting the libc library in Rust here


      If I remember right, the biggest obstacle to adopting this in general is that Rust doesn't support the same variety of hardware targets as C does, not the language's ability to get the job done.
      *nod* I never said that Rust couldn't have a special libc-bypassing backend for Linux... just that they don't consider it worth the effort when it's so trivially easy to statically link musl-libc into a pure Rust binary on Linux, and they already need a libc-based backend for the other Unixoid OSes anyway.
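      For anyone who wants to try that, a minimal sketch of the musl route (the hello-world crate here is made up; nothing in the source is musl-specific, only the build target changes):

          // src/main.rs
          //
          // One-time target install, then a fully static release build:
          //   rustup target add x86_64-unknown-linux-musl
          //   cargo build --release --target x86_64-unknown-linux-musl
          fn main() {
              println!("Hello from a fully static musl build!");
          }

      Running ldd on the resulting binary should report that it isn't a dynamic executable, since musl gets linked in statically.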

      Comment


      • Originally posted by rommyappus View Post

        Your discussion is way over my head, but the Redox project was rewriting the libc library in Rust here.
        For Rust, there is Eyra, which uses c-gull for libc.

        Comment


        • Originally posted by stormcrow View Post

           Sorry, but no one ever said that while holding a straight face for long, even back then. The only time I ever heard that was from kids who thought they were 133t writing assembly on their little PC or other toy home microcomputers (yes, they really were toys in comparison). Assembly for those ancient business and scientific systems was extremely specific to each computer it was written for (much like Intel/MASM-syntax assembly doesn't run on non-x86 systems without massive effort for any complex program). Assembly wasn't more performant than Fortran in many cases and was hellaciously more painful to write and debug (substitute COBOL for Fortran in certain businesses). Higher-level languages like C were embraced as a practical way to get software for computer A to run on computer B with relatively minimal effort. Portability was a major CS problem in the day; security was not. These systems had limited numbers of operators, behind locked doors, while networking was limited to unreliable phone lines, modems, and equally unreliable dedicated subscriber lines.

           Likewise, a new shift towards addressing the mistakes and limitations of the past, currently epitomized by C and C++ but also by assembly language and other languages that lack robust integrity guarantees, is now necessary. Give it up. Rust and other safe(r) languages are here to stay because the science of information security dictates that humans cannot be trusted not to make mistakes. The evidence is considerable that both users and programmers making flawed assumptions about users, problem domains, and the code they write are the primary problems in information security.

           There is no such thing as "perfect" code, not even with language-model-assisted review. Code does not and never will document itself fully. It is literally impossible to write any complex program in C, C++, assembly, etc. without making mistakes that will inevitably have security implications. It's also literally impossible to encompass all input in a safe way in any Turing-complete language, so there are always going to be logic flaws and unaccounted-for weird machine states until such time as new PLT paradigms are discovered. Which means, folks, Rust is not going to be the end game of programming languages, either. When someone comes up with something new that addresses Rust's problems and failings, it too will be replaced, and the dumb argument now going on about C et al. will happen with Rust... and it'll be just as stupid then as this one is now.
          The only way is forward. If C cultists want to stay where they are, they can simply be left behind.

          Comment


          • Originally posted by ssokolow View Post

            Basically, because the kernel is responsible for loading and executing binaries, it has control over where libc gets mapped into the process's address space and whether that memory is read-only.

            Likewise, because of how system calls work, the kernel can see where in memory the code doing the system call is located.

            OpenBSD makes a note of where in memory each process has libc loaded and, when you make a system call, checks whether the code dispatching that system call lives inside the read-only memory region where libc is mapped.

            It's sort of like how recent Intel and AMD CPUs have a feature named Shadow Stack, where the CPU keeps a backup copy of the function-return entries on the stack in memory the program itself can't touch. The shadow copy is updated on every CALL and cross-checked on every RET, so if an exploit overwrites part of the stack, the return pointers in the two stacks won't match and the CPU can hand off to the kernel's "this program has done something bad and needs to be killed" handler.
            Thank you for the extra description; that makes the whole thing make sense. Also, my comment on Rust being capable was directed towards kpederson, not you, since he seems to think Rust is incapable of doing libc work as a language. Though with the info you provided, his comment makes some amount of sense in practice.

            Comment
