
Btrfs In Linux 4.4 Has Many Improvements/Fixes


  • wdomburg
    replied
    Originally posted by liam View Post

    While you've given yourself a nice out with your phrasing, I'd counter with sel4.

There are limits to the scope of their proofs and the assumptions they make. Specifically, there are the 320 lines of assembly that you mentioned, but also the 1,200 lines of boot code and their handling of virtual memory, where they admit that "in this part of the proof, unlike the other parts, there is potential for human error".

And while bugs in hardware and drivers, or deliberate attempts to compromise the system, lie outside the scope of the code, real-world systems can and must take these factors into consideration. This is where the qualifiers "meaningful" and "valuable" come into play. A kernel which assumes bug-free hardware, bug-free drivers, normal operating conditions and no bad actors may be meaningful and valuable from a research standpoint, but it is not operationally meaningful. It's like having an ideal unicorn herder.

    Leave a comment:


  • Lltp
    replied
    Originally posted by liam View Post

They've logically proven certain aspects of their system. Both the spec and implementation (the latter is the much more impressive accomplishment, as they were able to hugely accelerate the process via tooling they developed), but they still make certain assumptions (the most important one is that they assume the correctness of their assembly...
And yet in practice, they are correcting bugs (please, really, have a look at their commit history). Why? Because to prove a program, you need initial assumptions (e.g. "this function takes ASCII strings as input" or "setting this bit to one will activate the clock scaling of the GPU"). As long as these assumptions are correct, everything goes fine. But when they are not, things can get silly.

Please keep in mind that proving a program is, at best, only the first step towards a bug-free program. While all efforts in that direction are definitely worthwhile, the program is correct (i.e. works as intended) only as long as the implicit and explicit assumptions/prerequisites are fulfilled. Again, look at the commit history of the project if you don't believe me: they do correct bugs.
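A hypothetical Python sketch of this point (not taken from seL4's code, names invented for illustration): a function whose "proved" property holds only while its unstated precondition does.

```python
# A function that is provably correct *under an assumption*: its input
# is pure ASCII, so character count equals byte count. The "proof" says
# nothing about inputs that violate that assumption.

def count_bytes_ascii(s: str) -> int:
    """Returns the UTF-8 byte length of s, assuming s is pure ASCII."""
    return len(s)  # holds only because ASCII characters are 1 byte each

# Assumption holds: the proved property is true.
assert count_bytes_ascii("hello") == len("hello".encode("utf-8"))

# Assumption violated: the property silently fails (é is 2 bytes).
s = "héllo"  # non-ASCII input, outside the proof's preconditions
assert count_bytes_ascii(s) != len(s.encode("utf-8"))
```

The proof is not wrong; it just never covered the second call, which is exactly how "proven" code still collects bug fixes in its commit history.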

I hadn't realized that they actually proved part of their programs. Thanks for pointing that out. But I already spent some of my own (precious) time actually looking at the code and their commit history to give you an answer with that in mind. Most here wouldn't make the effort. But please don't ask me to read everything I could about them, because I don't care. They might do a truly wonderful job; I'm not interested (just as, I trust, you are not interested in quantum physics). My point was: even proven programs can contain bugs, and their commit history just demonstrated that.

About hardware failure. What you said is true and actually proves my point. If, each time your computer stopped abruptly, you couldn't reboot it because the filesystem was not built with that in mind, what would you say? "Fair enough, there is no way to deal with HW failure anyway"? Of course I would love for BTRFS to be proven! At least a whole class of bugs wouldn't happen! And at the same time, I really want my FS to be resilient to HW problems. These are fortunately not antagonistic goals: there is no trade-off except in complexity.

    Leave a comment:




  • liam
    replied
    Originally posted by Lltp View Post

    How is that different from test-driven development? I mean, you essentially do the same thing: write specification and a criteria to be sure the result is in accordance with it (basically, a test). And you test it.

Then comes the question of the interaction with faulty software or hardware, imperfect specification/test coverage, and the other things others and I discussed above.

The thing is: what they are doing is great. But the future may prove to be bitter if they really assume that there can't be any problem with their code.

Back to the subject, the most problematic bugs with FSes are the ones appearing in case of hardware failure. Even if you try to think of every possible case, there is always this interaction between that disk chipset, that crappy power supply and that data-bus chip that no one thought about, which causes a "bug": missing files, for instance.
I think you need to read a bit more about it. I'm not trying to be dismissive, but the difference is subtle yet massive. They're using proof checkers (fairly automated interactive ones). They've logically proven certain aspects of their system. Both the spec and implementation (the latter is the much more impressive accomplishment, as they were able to hugely accelerate the process via tooling they developed), but they still make certain assumptions (the most important one is that they assume the correctness of their assembly... BUT we're talking about 320 lines of code, a value small enough that you can actually follow all execution paths). They've really helped push forward the state of the art in software development.
Now, hardware failure is something that can't really be dealt with in software in a general way (in certain simple cases, like with a bad disk controller or cable, you can detect problems).
    https://sel4.systems/Info/FAQ/#verif
    https://sel4.systems/Info/FAQ/proof.pml
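As a hypothetical Python sketch of those "simple cases" where software can at least detect bad hardware (in the same spirit as, but not identical to, what Btrfs does on disk): store a checksum alongside each block and verify it on every read.

```python
# Illustrative only, not Btrfs's real on-disk format: detect a disk
# returning garbage by checksumming each block on write and verifying
# the checksum on read.
import zlib

def write_block(data: bytes) -> tuple[int, bytes]:
    """Store the block together with its CRC32 checksum."""
    return zlib.crc32(data), data

def read_block(stored: tuple[int, bytes]) -> bytes:
    """Verify the checksum; fail loudly instead of returning garbage."""
    checksum, data = stored
    if zlib.crc32(data) != checksum:
        raise IOError("checksum mismatch: block is corrupted")
    return data

block = write_block(b"important data")
assert read_block(block) == b"important data"  # clean read passes

# Simulate the disk flipping a byte: corruption is detected, not hidden.
corrupted = (block[0], b"importanz data")
try:
    read_block(corrupted)
except IOError as err:
    print(err)  # detection only; repair needs a good copy (e.g. RAID 1)
```

Detection is the easy half; actually recovering the data still needs redundancy, which is exactly where the interaction with flaky controllers and power supplies gets messy.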

    Leave a comment:


  • Lltp
    replied
    Originally posted by linner View Post

I think there is a difference between a bug and issues caused by external forces (e.g. active sabotage). Your examples are no different from saying it's impossible to have a bug-free program because the binary can be altered with a hex editor. Well, yeah...
What would you call a BIOS not respecting the ACPI standards? (PhoenixBIOS?! That's mean )

But what you wrote is true. However, if you interact with the outside world, you can't just assume that everything will go fine. What would you think if your web browser segfaulted at every packet loss? Who would you blame?

The same goes for external libs, IPC mechanisms or hardware faults. A bitflip on disk or in RAM localized in the program in question is obviously OK: it's the OS's role to identify that, not the program's, which is, well... not itself anymore. However, if my internet connection drops or if my disk returns garbage, the affected programs shouldn't behave in an unexpected manner, or it is a bug.

But with all that said, if you really limit yourself to unexpected behavior caused by the program itself (not hardware or other software), there are very few large ones which have really received no bug fix for a long time (or they are dead projects).
    Last edited by Lltp; 11 November 2015, 08:14 PM.

    Leave a comment:


  • Lltp
    replied
    Originally posted by liam View Post


    I was referring to this: https://sel4.systems/Info/FAQ/#l4vv
So, as to whether the kernel does what it was intended to do: a specification was created that was proven to "enforce integrity and confidentiality", and the implementation was verified to adhere to the specification. In that sense, it is bug-free.
How is that different from test-driven development? I mean, you essentially do the same thing: write a specification and criteria to be sure the result is in accordance with it (basically, a test). And you test it.
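The gap between testing and proving can be illustrated with a hypothetical Python snippet (names invented for illustration): a unit test samples a few inputs, while checking the whole (tiny) input space, which is closer in spirit to a proof, finds the counterexample.

```python
# A buggy function: it clamps the upper bound but forgets the lower one.
def clamp_to_byte(x: int) -> int:
    return min(x, 255)  # bug: negative values pass through unchanged

# TDD-style unit tests: a few handpicked cases, and they all pass.
assert clamp_to_byte(300) == 255
assert clamp_to_byte(42) == 42

# Exhaustive check over a small domain (a stand-in for a proof over the
# whole input space) immediately finds the violated invariant.
failures = [x for x in range(-10, 300)
            if not (0 <= clamp_to_byte(x) <= 255)]
print(failures[:3])  # → [-10, -9, -8]: negative inputs escape the clamp
```

A test checks the points you thought of; a proof quantifies over every input the specification allows, which is why the two give such different guarantees even though both start from a specification.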

Then comes the question of the interaction with faulty software or hardware, imperfect specification/test coverage, and the other things others and I discussed above.

The thing is: what they are doing is great. But the future may prove to be bitter if they really assume that there can't be any problem with their code.

Back to the subject, the most problematic bugs with FSes are the ones appearing in case of hardware failure. Even if you try to think of every possible case, there is always this interaction between that disk chipset, that crappy power supply and that data-bus chip that no one thought about, which causes a "bug": missing files, for instance.

    Leave a comment:


  • linner
    replied
    Originally posted by Zan Lynx View Post
    And that does not include hardware errors which may be inherent in the design or happen via electromagnetic interference.

    One hacker took down a formally proved smart-card security chip by applying heat until it began to malfunction. And several fancy electronic biometric key locks have been defeated by inducing a signal into the unlock wiring through induction, bypassing the whole authentication system. And of course several attacks against passwords and hashes based on getting data via hyperthreading, cache misses, page faults, context switches and just timing information generally.
I think there is a difference between a bug and issues caused by external forces (e.g. active sabotage). Your examples are no different from saying it's impossible to have a bug-free program because the binary can be altered with a hex editor. Well, yeah...

    Leave a comment:


  • r3pek
    replied
Well, I can only speak for myself. My current volume was created back in 2013 as a RAID 0 array (all my data is in there). I converted it to a RAID 5 array after adding one more disk to the system last year, and it has been running happily ever since. Don't know about tomorrow

    Leave a comment:


  • Zan Lynx
    replied
    Originally posted by liam View Post

    While you've given yourself a nice out with your phrasing, I'd counter with sel4.
If a program is small enough that you can simulate its every possible state, then you can guarantee that it meets the specification. That means the program has to have a well-defined exit state or a continuous loop back to a known start state, or it hits the Halting Problem.
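What "simulate its every possible state" means can be sketched for a toy state machine in hypothetical Python (the states and transitions are invented for illustration):

```python
# A tiny state machine: every reachable state can be enumerated, so an
# invariant can be checked on each one. This is what stops scaling once
# the state space explodes in a real kernel.

TRANSITIONS = {
    "idle":    ["running"],
    "running": ["idle", "halted"],
    "halted":  [],  # well-defined exit state, so the search terminates
}

def reachable(start: str) -> set[str]:
    """Enumerate every state reachable from `start`."""
    seen, frontier = set(), [start]
    while frontier:
        state = frontier.pop()
        if state not in seen:
            seen.add(state)
            frontier.extend(TRANSITIONS[state])
    return seen

# The invariant "every reachable state is a known, specified state"
# holds because all three states can actually be visited and checked.
assert reachable("idle") == {"idle", "running", "halted"}
```

With three states this check is trivial; with the state space of a general-purpose kernel, exhaustive enumeration is hopeless, which is why seL4-style work leans on machine-checked proofs rather than brute-force simulation.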

That also requires that the specification is correct. One university project succeeded in producing programs proven via formal logic, but they had to revise their specification over 400 times, showing that formal proof just moves the problem higher up. People don't always understand how to ask for what they actually want.

    And that does not include hardware errors which may be inherent in the design or happen via electromagnetic interference.

    One hacker took down a formally proved smart-card security chip by applying heat until it began to malfunction. And several fancy electronic biometric key locks have been defeated by inducing a signal into the unlock wiring through induction, bypassing the whole authentication system. And of course several attacks against passwords and hashes based on getting data via hyperthreading, cache misses, page faults, context switches and just timing information generally.

    Leave a comment:


  • SystemCrasher
    replied
    Originally posted by liam View Post
So, as to whether the kernel does what it was intended to do: a specification was created that was proven to "enforce integrity and confidentiality", and the implementation was verified to adhere to the specification. In that sense, it is bug-free.
Sure thing. Feel free to use it instead of Linux, Windows, macOS, etc. There is one small catch though: this kernel is nearly useless on its own. And when you add all the hardware drivers, the code to deal with advanced filesystems and all sorts of goodies people expect from a modern operating system, you will suddenly discover that the system has so many possible states, and the formal specification is so long and complex, that you can't prove anything about it and it ends up full of bugs. The seL4 guys just cheated a bit and avoided all the hard parts, because it is a microkernel.

Sure thing, even a mediocre student can code a more or less bug-free task switcher which isn't far from seL4 in terms of features and bugs. But I wish you luck implementing all the features Linux has today, but without bugs. I've also heard stories that it should be easier to write drivers for microkernel OSes. I've heard that for something like 20+ years or even more. And now... where is Minix? Where is Hurd? And where is Linux? So it seems it wasn't easier when it came down to practice. I bet the same thing happens with bugs as well.

    Leave a comment:
