I wonder if they can bundle in the on-disk format changes required to remove the RAID 5/6 write hole?
On-Disk Format Changes Ahead To Improve "Painful" Parts Of Btrfs Design
Originally posted by avem View Post
I still don't understand why people consciously use btrfs and zfs instead of tried and proven LVM + ext4. Yes, snapshots, in my 30 years of computing I've needed them ... 0 times.
I cannot recommend xfs until they implement partition shrinking.
Originally posted by kbios View Post
LVM gives you snapshots too, by the way; I use them to make consistent backups with rsync.
Let's say you add a new storage device to a system. You update fstab. You take a snapshot. You update crypttab to use the new device. Something happens, and you need to restore from the snapshot. Suddenly, your restored crypttab only knows about the old devices, while your fstab expects the new one. Oops.
Absolutely obvious once it is spelled out in an example like the one above; how could anyone be so stupid? But applications open multiple files all the time, and unless you can guarantee that all your transactions are atomic, filesystem snapshots do not guarantee file content consistency.
One of the reasons (SQL) databases are popular is that you can (in principle) ensure all transactions are atomic. You just have to hope the programmers have identified all the transactions correctly (hint: it is hard), so the only way you can be reasonably sure you have captured a good backup is to shut down all the applications cleanly*, then the system, boot into an OS purely for taking backups, take your backup, and restart. Anything else relies on other people doing their jobs properly all the time. Years of experience show that is a fool's game.
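To make the single-file case concrete: the only widely portable way to update one file atomically is write-to-temp, fsync, rename, because rename() is atomic on POSIX filesystems. A minimal sketch of that standard pattern (my illustration, not anything from the article); note there is no equivalent for a transaction spanning several files, which is exactly why a snapshot taken mid-update can capture an inconsistent set.
Code:
# Standard write-temp-then-rename pattern for an atomic single-file
# update on POSIX; a snapshot sees either the old or the new content.
import os

def atomic_write(path: str, data: bytes) -> None:
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # data on stable storage before the rename
    os.rename(tmp, path)       # atomic replace on POSIX filesystems
    # fsync the directory so the rename itself survives a crash
    dfd = os.open(os.path.dirname(path) or ".", os.O_DIRECTORY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)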
Don't get me wrong. Taking a snapshot of a well-defined and well-implemented application can work - you just have to be aware of the possible gotchas. There are one or two.
*Caching strategies on 'non-volatile' storage hardware can also ruin your day. Storage devices can and do lie about whether data has been committed to non-volatile storage or not. Data that you think is on spinning rust can in fact be in an all-too-volatile on-device RAM cache.
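In Linux terms this is visible, and partially controllable, through sysfs: the block layer records whether it believes a device has a volatile write-back cache, and you can tell it to stop trusting one. A small sketch (the device name is a made-up example; none of this helps if the firmware simply lies about FLUSH):
Code:
# Sketch: inspect the kernel's view of a disk's volatile write cache.
from pathlib import Path

def write_cache_mode(dev: str = "sda") -> str:
    # "write back"    = volatile on-device cache in play, flushes matter
    # "write through" = completions are taken to mean data is stable
    return Path(f"/sys/block/{dev}/queue/write_cache").read_text().strip()

print(write_cache_mode("sda"))
# Writing "write through" into the same file (as root) makes the kernel
# treat the device as cache-less - the workaround people reach for with
# a misbehaving SSD.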
Last edited by Old Grouch; 12 November 2021, 08:57 AM.
How would this on-disk format change actually happen in practice? Would it require manual intervention by the user? (I understand that it will need to be explicitly enabled via mount flags at first, but I'm referring to once it passes the experimental phase.) And what happens if the system is running Btrfs on / and it gets booted with a newer kernel that includes this new Btrfs version? Would it 1) automatically and irreversibly convert to the new format and mount or 2) refuse to mount and thus not allow the OS to boot, requiring booting a rescue system to convert the format manually?
Another question: Are these patches just proposed, or have they already been merged?
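Not an authoritative answer, but precedent suggests opt-in: past btrfs format features were enabled explicitly (at mkfs time or via an explicit conversion), and kernels refuse to mount a filesystem carrying incompat bits they don't recognise rather than converting anything automatically. You can already see what a kernel and a mounted filesystem support via sysfs; a minimal sketch, with the UUID being a placeholder:
Code:
# Sketch: list btrfs feature flags exposed via sysfs on recent kernels.
import os

def kernel_btrfs_features() -> list[str]:
    # Features this kernel's btrfs implementation knows about
    return sorted(os.listdir("/sys/fs/btrfs/features"))

def fs_btrfs_features(fs_uuid: str) -> list[str]:
    # Features actually enabled on one mounted filesystem
    return sorted(os.listdir(f"/sys/fs/btrfs/{fs_uuid}/features"))

print(kernel_btrfs_features())
# Placeholder UUID - take the real one from `btrfs filesystem show`:
# print(fs_btrfs_features("01234567-89ab-cdef-0123-456789abcdef"))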
Originally posted by Old Grouch View Post
*Caching strategies on 'non-volatile' storage hardware can also ruin your day. Storage devices can and do lie about whether data has been committed to non-volatile storage or not. Data that you think is on spinning rust can in fact be in an all-too-volatile on-device RAM cache.
Having heard the bad experiences of people losing almost everything after a power failure on btrfs drives/arrays, I am quite sure that if I had picked btrfs, their experience would have been mine. Btrfs puts too much trust in hardware conforming to the specification and lacks resilience against this kind of corruption. Meanwhile, ext4 is more veteran and reliable in this regard. Also, in btrfs, "repairing" can become "erasing". And fanboys will blame the victim if the victim doesn't have spare drives and spare computers to do an offline rescue.
Originally posted by billyswong View Post
This was my experience! Two years ago I upgraded my PC and started using an SSD as my main drive. Once in a while, the filesystem would be found corrupted and require an fsck and a reboot. This year I shared my issue here in this forum and someone suggested I disable the write-back cache. Problem solved.
- BTRFS would have told you exactly which files were compromised
- BTRFS would have warned you very early that your SSD had a problem (not after 1 year)
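For the curious, both points are queryable with the stock tools: a scrub re-verifies every checksum (corrupted file paths land in the kernel log), and per-device error counters persist across reboots. A sketch, with the mount point as a placeholder:
Code:
# Sketch: verify checksums and read persistent per-device error counts.
import subprocess

def scrub_and_report(mountpoint: str = "/mnt/data") -> None:
    # -B: stay in the foreground and print statistics when finished.
    # Checksum failures also appear in dmesg with the affected paths.
    subprocess.run(["btrfs", "scrub", "start", "-B", mountpoint], check=True)
    # Cumulative read/write/corruption error counters per device:
    subprocess.run(["btrfs", "device", "stats", mountpoint], check=True)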
Originally posted by billyswong View Post
Meanwhile, ext4 is more veteran and reliable in this regard.
Originally posted by billyswong View Post
...lacks resilience against this kind of corruption.
Originally posted by billyswong View Post
Btrfs puts too much trust in hardware conforming to the specification
Originally posted by intelfx View Post
This work (well, not specifically this work, but the whole "extent tree v2" effort) is basically destroying the single killer feature of btrfs, that is, the ability to relocate extents at will (i.e. the btrfs balance operation, which leads into RAID reshape/restripe).
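For anyone who hasn't used it: balance relocates existing extents, and its convert filters are what give btrfs in-place RAID reshaping, e.g. after adding a device. A hedged illustration of the operation being defended here (the mount point and target profile are made-up examples):
Code:
# Sketch: in-place restripe via `btrfs balance`, the extent-relocation
# operation referenced above. Mount point/profile are placeholders.
import subprocess

def restripe(mountpoint: str, profile: str = "raid1") -> None:
    # -dconvert/-mconvert rewrite data and metadata block groups into
    # the new profile by relocating every extent they contain.
    subprocess.run(
        ["btrfs", "balance", "start",
         f"-dconvert={profile}", f"-mconvert={profile}", mountpoint],
        check=True,
    )

# restripe("/mnt/pool", "raid10")  # e.g. after adding two more drives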