A working fsck, thanks. One that just reports that there is an error but refuses to do anything about it is not really useful. Losing an entire filesystem of data is not acceptable just because btrfs fucks up and then refuses to access anything unless everything is perfect.
BTRFS is arriving now, and it has the huge advantage of being shipped and maintained directly in the Linux kernel, which means it will likely become the de facto default filesystem on Linux once it has been tested in the wild (as here with SUSE).
BTRFS is still breakable in a number of ways which ZFS simply shrugs off if you attempt the same things on that FS.
2 years later it's still not being shipped as the default on any distribution.
So they need to act now and try to make it as painless as possible to use ZFS on Linux, which I think is what they are trying to do with ZFSOnLinux.
It is painless already. Seriously.
I don't think it will work, though. The advantage of being able to be shipped in the kernel is just too big, so I believe BTRFS will become the standard on Linux and ZFS will remain largely confined to the BSDs, with little to no Linux presence to speak of.
ZFS can never be shipped in the kernel. The CDDL is specifically written to be incompatible with GPL.
HOWEVER, that doesn't stop an installer from downloading ZFS binaries dynamically during the setup process (in the same way that Flash and Microsoft codecs get pulled down dynamically) and installing them - and there are enough advantages in ZFS over BTRFS to make doing this worth considering.
My opinion: If it wasn't for the CDDL, ZFS would already be in widespread use on Linux and BTRFS development would probably be stagnating. The issue isn't technical, it's political.
BTRFS was coming "real soon now" for more than a decade, whilst it took less than 18 months for ZFS to go from alpha-quality releases on Linux to production-ready - and ZFS has the advantage of more than a decade of real-world deployment in large (expensive and critical) production systems behind it.
ZFS works from the design starting point of "disks are unreliable crap; deal with it" - i.e., it expects failures (and data corruption) to be routine, and it not only handles them but attempts to repair them automatically (the online filesystem checking is useful too). Everything else treats drive problems as a "problem" which can't be repaired in a running system, and has only rudimentary handling of data corruption (it detects, but does not attempt to correct).
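To illustrate that design philosophy, here's a minimal Python sketch (not ZFS code - the class and names are invented for illustration) of checksum-driven self-healing on read, the basic mechanism behind ZFS mirrors and scrubs: every copy carries a checksum, a mismatching copy is detected on read, and it is rewritten from a copy that still verifies.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class MirroredBlock:
    """A toy block stored as two mirror copies, each guarded by a checksum.

    On read, every copy is verified against its stored checksum; a bad
    copy is rewritten from a good one (self-healing), analogous to what
    ZFS does on read and during a scrub.
    """
    def __init__(self, data: bytes):
        self.copies = [data, data]            # two mirror copies
        self.checksums = [sha256(data)] * 2   # checksum stored per copy

    def corrupt(self, drive: int, garbage: bytes):
        # Simulate silent on-disk corruption: data changes, checksum doesn't.
        self.copies[drive] = garbage

    def read(self) -> bytes:
        good = next((i for i, c in enumerate(self.copies)
                     if sha256(c) == self.checksums[i]), None)
        if good is None:
            raise IOError("all copies corrupt: unrecoverable")
        for i, c in enumerate(self.copies):
            if sha256(c) != self.checksums[i]:
                self.copies[i] = self.copies[good]   # heal the bad copy
        return self.copies[good]

blk = MirroredBlock(b"important data")
blk.corrupt(0, b"bit rot!!")
assert blk.read() == b"important data"        # corruption detected, read served
assert blk.copies[0] == b"important data"     # and the bad copy was healed
```

A filesystem without per-copy checksums can't even tell which mirror side is the bad one; that's the difference between "detects" and "corrects".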
It would be nice if BTRFS and ZFS devs worked together, especially now that OpenZFS development is standardising the FS across *BSD, Solaris clones and Linux but I believe I'll see ballroom dancing yaks in the lobby of Grand Central Station before that happens.
Yes, but odds are you don't want to use it; it's not EXT4. See comment #3.
I've trashed test BTRFS rigs with simple power failures during writes. Having to restore several TB from backups is a good sign that this is not ready for heavy production use.
Having a fsck which can't actually repair FS damage is a bit like having a bucket with a hole in it.
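To spell out that complaint, here's a toy sketch (all names invented - this is not the real btrfs check code) contrasting a checker that only reports with one that also repairs from a redundant metadata copy, which is exactly the kind of fix a filesystem that duplicates its metadata could make:

```python
import hashlib

class ToyFS:
    """A toy 'filesystem': one metadata record kept in two locations,
    guarded by a checksum (loosely modelled on duplicated metadata)."""
    def __init__(self, metadata: bytes):
        self.primary = metadata
        self.backup = metadata
        self.checksum = hashlib.sha256(metadata).hexdigest()

def check_only(fs: ToyFS) -> list[str]:
    """The bucket-with-a-hole kind of fsck: reports, fixes nothing."""
    errors = []
    if hashlib.sha256(fs.primary).hexdigest() != fs.checksum:
        errors.append("primary metadata checksum mismatch")
    return errors

def check_and_repair(fs: ToyFS) -> list[str]:
    """A working fsck: same detection, but restores from the good copy."""
    errors = check_only(fs)
    if errors and hashlib.sha256(fs.backup).hexdigest() == fs.checksum:
        fs.primary = fs.backup
        errors = [e + " (repaired from backup copy)" for e in errors]
    return errors

fs = ToyFS(b"root inode -> /")
fs.primary = b"garbage"          # simulate damaged primary metadata
print(check_only(fs))            # reports the error, FS is still broken
print(check_and_repair(fs))      # reports the error AND fixes it
```

The detection logic is identical in both functions; only the second one empties the bucket.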
If someone says a filesystem doesn't need fsck because $bad_thing will never happen then alarm bells should be going off(*). If they say it shouldn't need one but here's one anyway you're in a much better position.
(*) DEC's AdvFS developers said that repeatedly. Until V5 shipped, AdvFS earned a reputation as an FS which taught admins the value of regular backups.
On the other hand, I have _never_ lost data from ZFS test systems short of simulating total multi-drive failures in excess of the redundancy level - and even then, if the "missing" drives are plugged back in and ZFS is restarted, the filesystem recovers. It has even coped with buggy SATA controllers which dropped commands.
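For anyone unfamiliar with what "failures in excess of the redundancy level" means concretely, single parity is just XOR arithmetic; here's a toy single-parity (RAID-Z1-style, though heavily simplified - real RAID-Z uses variable stripe widths) sketch showing why one lost drive is recoverable and two are not:

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Three data "drives" plus one parity drive.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = reduce(xor, data)            # parity = A ^ B ^ C

# Lose any ONE drive: XOR the survivors with the parity to rebuild it.
lost = 1
survivors = [d for i, d in enumerate(data) if i != lost]
rebuilt = reduce(xor, survivors + [parity])
assert rebuilt == data[lost]          # drive 1 reconstructed exactly

# Lose TWO drives: one parity equation, two unknowns - the stripe is
# unrecoverable, which is exactly "failure in excess of the redundancy
# level". Double parity (RAID-Z2-style) adds a second, independent
# equation to survive two losses.
```

The XOR of all data drives plus the parity drive is always zero, which is also how a scrub can verify a stripe without reading it back from an application.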