That Didn't Take Long: KSMBD In-Kernel File Server Already Needs Important Security Fix


  • S.Pam
    replied
    Originally posted by arQon View Post

    If simply USING it is a major part of your job, you don't need to worry about it.

    The issue is that it's apparently massively more complex than SMB3, i.e. it's a nightmare to IMPLEMENT. I don't remember the source, but the number I saw was ~10x the amount of code or worse.
    That's only if you need to set up Samba as a Domain Controller; it's not needed for normal file sharing.
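
    For reference, a minimal sketch of a standalone (non-DC) smb.conf; the share name and path here are made up:

        [global]
            server role = standalone server
            workgroup = WORKGROUP

        [share]
            path = /srv/share
            read only = no

    None of the Domain Controller machinery (Kerberos, LDAP, internal DNS) is involved in a configuration like this.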

  • arQon
    replied
    Originally posted by Moscato View Post
    Can you explain how SMB4 is a nightmare?
    If simply USING it is a major part of your job, you don't need to worry about it.

    The issue is that it's apparently massively more complex than SMB3, i.e. it's a nightmare to IMPLEMENT. I don't remember the source, but the number I saw was ~10x the amount of code or worse.

  • Moscato
    replied
    Originally posted by arQon View Post

    No, you really do. We've DONE this already, getting samba to sustain GbE line rate on incredibly shitty hardware (as in dual-core 600MHz ARM with no RAM levels of "shitty"). Once you have zerocopy, there's almost nothing that actually *needs* to be in kernelspace because you just aren't making enough roundtrips for it to matter. Electing to massively increase the attack surface of the kernel for the sake of wringing the last few trivial drops of +perf out of something like this is simply a bad decision.

    It's not like Linus et al. are idiots, and they're the ones who approved this, but IMO that doesn't make it any less of a poor choice. I can understand being motivated to do it by history (e.g. "NFS is in the kernel already" etc.), despite that being from a time when there were far fewer attacks taking place and the kernel was 1/1000th of its current size; or by wanting to win a benchmark against Windows Server, etc. But it's still not the choice I'd have made: not least because eventually someone is going to argue that SMB4 "needs to" be in the kernel too, since NFS and SMB3 are, and SMB4 is a @#$%ing nightmare...

    Still, at least THIS one was caught early.
    Can you explain how SMB4 is a nightmare?

    I'm deeply curious about this one, as Samba is a major part of my job.
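
    On the zerocopy point in the quote above: a minimal userspace sketch of that kind of transfer using sendfile(2), so file data reaches the socket without passing through a userspace buffer (sock_fd and file_fd are assumed to be already set up):

        #include <sys/sendfile.h>
        #include <sys/stat.h>

        /* Push a whole file out of a socket without copying it through
         * userspace; the kernel moves the data directly. */
        static int send_whole_file(int sock_fd, int file_fd)
        {
            struct stat st;
            off_t off = 0;

            if (fstat(file_fd, &st) < 0)
                return -1;
            while (off < st.st_size) {
                ssize_t n = sendfile(sock_fd, file_fd, &off, st.st_size - off);
                if (n <= 0)
                    return -1; /* error or truncated file */
            }
            return 0;
        }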

  • zxy_thf
    replied
    This looks odd to me, considering openat2 already has RESOLVE_BENEATH.
    Does RESOLVE_BENEATH have no counterpart inside the kernel, i.e. is it implemented purely in userspace?
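
    For reference, this is what RESOLVE_BENEATH looks like from the userspace side via openat2(2); glibc has no wrapper, so this sketch goes through syscall(2), and the share path is made up. The kernel does have an internal counterpart (the VFS lookup flag LOOKUP_BENEATH, added alongside openat2), so presumably the question is why ksmbd's path resolution didn't use it.

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <linux/openat2.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        int main(void)
        {
            struct open_how how;
            int dirfd, fd;

            memset(&how, 0, sizeof(how));
            how.flags = O_RDONLY;
            how.resolve = RESOLVE_BENEATH; /* refuse any path escaping dirfd */

            dirfd = open("/srv/share", O_PATH | O_DIRECTORY);
            /* A ".." escaping the root fails with EXDEV instead of resolving. */
            fd = syscall(SYS_openat2, dirfd, "../etc/passwd", &how, sizeof(how));
            if (fd < 0)
                perror("openat2");
            return 0;
        }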

  • ssokolow
    replied
    Originally posted by mdedetrich View Post
    apparently it doesn't support unloading
    I believe that's because such a plugin system would use dlopen or equivalent and POSIX doesn't require that dlclose be thorough enough to be suitable for unloading and reloading plugins. (I remember reading somewhere that you're out of luck if you want runtime unloading of dlopen-based plugins on macOS, for example.)

    Don't quote me on this, but I don't think stable_abi imposes a specific loading mechanism... I think it's just an abstraction for doing Rust-to-Rust FFI on top of the C ABI. (If that's true, then you could get unloadable modules just by implementing your own dlopen alternative.)

    EDIT: Yeah. Apparently it's enough of a thing that the Objective-C runtime will actually dlopen the library a second time to protect you from yourself.
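
    For illustration, a minimal sketch of the dlopen-based loading under discussion; the plugin path and symbol name are hypothetical, and the comment marks the unload caveat:

        #include <dlfcn.h>
        #include <stdio.h>

        int main(void)
        {
            /* Hypothetical plugin exposing a C-ABI entry point. */
            void *handle = dlopen("./plugin.so", RTLD_NOW | RTLD_LOCAL);
            if (!handle) {
                fprintf(stderr, "%s\n", dlerror());
                return 1;
            }

            void (*plugin_init)(void) = (void (*)(void))dlsym(handle, "plugin_init");
            if (plugin_init)
                plugin_init();

            /* POSIX allows this to be a no-op: the object may never actually
             * be unmapped, which is why reliable unload/reload of plugins
             * can't be built on top of dlclose(). */
            dlclose(handle);
            return 0;
        }
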
    Last edited by ssokolow; 20 September 2021, 10:47 AM.

  • mdedetrich
    replied
    Originally posted by pininety View Post

    While I agree with you, I always have to wonder whether it isn't possible to build both. For example, if we were able to design a protocol that could either be transparently compiled out (i.e. no performance overhead) if you want a monolithic kernel, or, if you want a microkernel, allow different modules to live in userspace and communicate only via this protocol, you could have both. Testing would also become much easier. But I guess such a magic protocol is impossible.
    seL4 did this. By "protocol" you are more or less implying a microkernel with message passing; seL4, however, applies some optimizations that make it the fastest microkernel to date. You can check out its IPC mechanism at https://docs.sel4.systems/Tutorials/ipc.html and read the general FAQ at https://docs.sel4.systems/projects/s...questions.html
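
    For a taste of it, a minimal client-side sketch of the synchronous IPC from that tutorial; the endpoint capability ep is assumed to have been set up elsewhere:

        #include <sel4/sel4.h>

        /* Send one word through an endpoint and block for the reply;
         * seL4_Call() does both in a single kernel entry, which is a
         * large part of why seL4's IPC is fast. */
        void client(seL4_CPtr ep)
        {
            /* label 0, no capabilities transferred, 1 message register */
            seL4_MessageInfo_t info = seL4_MessageInfo_new(0, 0, 0, 1);
            seL4_SetMR(0, 42);           /* request payload in MR0 */
            info = seL4_Call(ep, info);  /* send + wait for reply */
            seL4_Word answer = seL4_GetMR(0);
            (void)answer;
        }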

    I guess another compromise would be to have a monolithic kernel but write it in a language with much stronger guarantees (such as Rust) and implement various components as dynamic libraries. For this to work you would need a stable ABI; Rust has one, but it is more limited than C's: https://docs.rs/abi_stable/0.10.2/abi_stable/ (apparently it doesn't support unloading).

  • pininety
    replied
    Originally posted by intelfx View Post

    The reality is that for Linux, this ship has already sailed. It's a firmly monolithic kernel, for better or worse.
    While I agree with you, I always have to wonder whether it isn't possible to build both. For example, if we were able to design a protocol that could either be transparently compiled out (i.e. no performance overhead) if you want a monolithic kernel, or, if you want a microkernel, allow different modules to live in userspace and communicate only via this protocol, you could have both. Testing would also become much easier. But I guess such a magic protocol is impossible.

  • mdedetrich
    replied
    Originally posted by sdack View Post
    But more seriously, I am surprised that this bug slipped through. There should be an automated test across the whole kernel checking for the use of uncanonicalized paths. There is a lot more code that can have the same issue, and it is indeed a common attack vector. This type of bug should no longer need to be checked by hand for every patch submission.
    Yeah, I don't know precisely how kernel contributions are done, but this kind of code is typically covered by a test, and if no test is given the code is basically refused until tests are provided. It's not only about checking that the code actually works when submitted; it's also about preventing regressions when refactoring or maintenance is done.

    This seems like very low code quality standards to me :/

  • sdack
    replied
    Originally posted by Etherman View Post
    It's not mandatory. Just say <n> if you don't need it.
    This, so much. I do not get all them idiots, control freaks and jerks, whose only idea is to take it away for everyone instead of leaving users a choice. These people should not be allowed to post ... Oh, wait, it would be ironic to deny them the choice of talking shit on the Internet, wouldn't it?

    But more seriously, I am surprised that this bug slipped through. There should be an automated test across the whole kernel checking for the use of uncanonicalized paths. There is a lot more code that can have the same issue, and it is indeed a common attack vector. This type of bug should no longer need to be checked by hand for every patch submission.
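
    As a userspace illustration of the property such a test would assert (this is not the actual ksmbd fix, just the classic check): resolve the untrusted path and verify it stays at or below the share root.

        #include <limits.h>
        #include <stdlib.h>
        #include <string.h>

        /* Return 1 if 'untrusted' resolves to a path at or below 'root';
         * 'root' must already be canonical (e.g. from realpath()). */
        static int path_is_beneath(const char *root, const char *untrusted)
        {
            char resolved[PATH_MAX];
            size_t rlen = strlen(root);

            if (!realpath(untrusted, resolved))
                return 0;
            return strncmp(resolved, root, rlen) == 0 &&
                   (resolved[rlen] == '/' || resolved[rlen] == '\0');
        }

    A regression test would then feed inputs like "../../etc/passwd" through the server's path handling and assert they are rejected.
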
    Last edited by sdack; 20 September 2021, 06:02 AM.

  • sverris
    replied
    Originally posted by ssokolow View Post

    Yes, but code that doesn't need to be fixed is better than code that's fixed quickly.
    I tend to agree...
