Microsoft's ReFS File-System: Competitor To Btrfs?


  • #21
    Originally posted by drag View Post
    No, you can't. You are absolutely incorrect with that assessment.

    The ZFS features are useful for increasing the availability and integrity of data. These features are needed more and more as you deal with larger and larger data sets. In a year or two they are going to be the bare minimum requirements for dealing with an average-sized company's data; Ext4 and XFS are not adequate in this respect. But they are not a substitute for backups.

    If you're playing around with your own data, then that is your own risk. If you think you can get away with this attitude in a professional environment, then you are a menace to your employer's data.

    With proper backups, your data is safer on a FAT32 file system than on ZFS with no proper backups.
    If it was not obvious enough: With zfs send receive backups I do of course mean that you send these backups to another (maybe offsite) machine.
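    For anyone unfamiliar with the mechanism: a send/receive backup is just a snapshot streamed to another machine. A minimal sketch (the pool, dataset, and host names here are placeholders, not anything from the thread):

```shell
# Take a point-in-time snapshot of the dataset.
zfs snapshot tank/data@monday

# Full send: stream the whole snapshot to an offsite box over ssh.
zfs send tank/data@monday | ssh backuphost zfs receive backup/data

# Later, send only the changes since the previous snapshot (incremental).
zfs snapshot tank/data@tuesday
zfs send -i tank/data@monday tank/data@tuesday | ssh backuphost zfs receive backup/data
```

    The incremental form is what keeps regular backups cheap: only the blocks that changed between the two snapshots cross the wire.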


    • #22
      Microsoft has been talking about new and awesome file systems for their 'next' Windows since 1996.

      The last time it was for 'Longhorn', now known as Vista.


      • #23
        Originally posted by Goderic View Post
        If it was not obvious enough: With zfs send receive backups I do of course mean that you send these backups to another (maybe offsite) machine.
        Yeah, if you take advantage of that to do your backups, then that is perfectly, 100% acceptable. Much better than the old-fashioned 'dump' command for backing up your file system to tape or whatever.

        Preferably both offsite and onsite, of course.


        • #24
          Originally posted by kobblestown View Post
          I think I'll just bite the bullet and move to Btrfs after Ubuntu 12.04 is out. Btrfs holds great promise; I only wish it were maturing faster.
          Not unless you like data loss and root reinstalls.


          • #25
            Speaking of btrfs (or other filesystems) maturing, I'm thinking perhaps we aren't approaching this sort of thing the best way. Filesystems are supposed to exhibit a certain degree of reliability, but it isn't clear to me how the current development methods ensure, or even assess, that.

            Given the costs and risks associated with filesystem corruption, along with the lengthy process of ironing out bugs, maybe diving right into an in-kernel implementation isn't really useful. Performance is secondary to correctness, especially in the early stages of development, when you can afford to ignore certain aspects of the former.

            What I'm saying is that certain formal verification techniques might be cost-effective in this scenario and could let us actually say something about reliability (unlike the test of time, which is the usual approach). For example, we could start by implementing a high-level specification in a theorem prover and proving it conceptually correct, then progressively refine that into a FUSE-based or even in-kernel implementation. Usually that's too time-consuming (though so is waiting out the bugs) and makes even minor changes too much of a pain for many applications to consider. But I think it could be worthwhile in this case, since the core structures and algorithms can be designed in early and don't need to change as much.

            So at least a certain class of bugs, mainly design errors, could be ruled out with a good degree of certainty. The question is whether we can reasonably extend that to a C implementation, and partially model the things we need from the kernel (e.g. threading, synchronization), without ending up with a proof that's fragile with respect to API changes, since ideally most of this translation should be machine-checked. I'm hoping somebody figures out a compromise, or a sane way to apply this process to an existing, large codebase such as Linux, even if only in certain areas.
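            One lightweight step in that direction, well short of a full theorem prover, is an executable specification: model the filesystem state abstractly and check the invariants after every operation. A toy sketch in Python (the model, the operation mix, and the invariant are all invented here purely for illustration):

```python
import random

class FsModel:
    """Executable spec of a tiny flat filesystem.

    State: a mapping of file name -> size in blocks, plus a fixed capacity.
    Invariant: the blocks in use never exceed capacity.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.files = {}

    def used(self):
        return sum(self.files.values())

    def create(self, name, size):
        # The spec rejects operations that would violate the invariant,
        # leaving the state unchanged.
        if name in self.files or self.used() + size > self.capacity:
            return False
        self.files[name] = size
        return True

    def delete(self, name):
        return self.files.pop(name, None) is not None

    def check_invariant(self):
        assert 0 <= self.used() <= self.capacity

def random_trace(steps=1000, seed=0):
    """Drive the model with random operations, checking the spec-level
    property after every single step."""
    rng = random.Random(seed)
    fs = FsModel(capacity=64)
    for _ in range(steps):
        name = "f%d" % rng.randrange(8)
        if rng.random() < 0.6:
            fs.create(name, rng.randrange(1, 16))
        else:
            fs.delete(name)
        fs.check_invariant()
    return fs

fs = random_trace()
print(fs.used() <= fs.capacity)  # -> True
```

            This is model-based testing rather than proof, but the same abstract model is exactly what you would later refine, step by step, toward a FUSE or in-kernel implementation.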