Bcachefs Prepares Last Minute Fixes For Linux 6.7
-
Originally posted by evert_mouw View Post
Seems like there is still a lot to fix, but the speed of fixing also seems high. Give it another year and Btrfs will have real competition. (I don't expect the ZFS user base to switch, but in-kernel support might convince a few.)
-
Originally posted by evert_mouw View Post
Yes, fscrypt support (not purely native but close enough).
Correct me if I'm wrong, but bcachefs encryption is for the entire device: you encrypt everything or nothing.
With encryption coming to Btrfs via fscrypt, you can encrypt individual subvolumes; you can encrypt each user's home.
-
Originally posted by EmanuC View Post
Can you explain better?
Correct me if I'm wrong, but bcachefs encryption is for the entire device: you encrypt everything or nothing.
With encryption coming to Btrfs via fscrypt, you can encrypt individual subvolumes; you can encrypt each user's home.
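To make the granularity difference concrete, here is a hedged sketch contrasting the two models. The bcachefs commands use options documented in bcachefs-tools; the `fscrypt` commands show the Google `fscrypt` userspace tool as it works today on ext4/f2fs (Btrfs support was still being developed at the time of this thread, so the same workflow on Btrfs is an assumption). Device names and the `alice` user are placeholders.

```shell
# bcachefs: encryption is chosen at format time and covers the whole
# filesystem (all-or-nothing).
bcachefs format --encrypted /dev/sdb   # prompts for a passphrase
bcachefs unlock /dev/sdb               # unlock before mounting

# fscrypt: per-directory encryption, so each user's home can have its
# own key (shown here as it works on ext4/f2fs today).
fscrypt setup                             # one-time system setup
fscrypt encrypt /home/alice --user=alice  # encrypt just this directory
```

With the fscrypt model, other directories on the same filesystem stay unencrypted, which is exactly the per-subvolume / per-home granularity discussed above.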
-
I am biased towards btrfs and I would like to challenge the btrfs haters to come up with what's actually wrong with btrfs.
Yes, it is slow for some things, "raid 5/6" should never have been added so early, and it is a pity that per-subvolume data/metadata profiles are not yet possible. But those are really luxury problems.
"Raid" 0/1/c2/c3 and 10 work great; do compare what you get (for free) with any other filesystem and see if you can beat the features/reliability.
I wish bcachefs all the best, but there is no way I am switching. Remember that as bcachefs evolves, so does btrfs. Perhaps there will be a time when bcachefs is worth switching to, but for now, and probably for many years ahead, btrfs is what I trust with my data.
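For readers unfamiliar with the profiles mentioned above, here is a sketch using the standard btrfs tooling (device names are placeholders; raid1c3 needs kernel 5.5 or later). Note the profile applies filesystem-wide, which is the per-subvolume limitation being lamented:

```shell
# Three-device btrfs: raid1c3 metadata (3 copies) and raid1 data (2 copies).
mkfs.btrfs -m raid1c3 -d raid1 /dev/sda /dev/sdb /dev/sdc

# Profiles can be converted later with a balance, e.g. to raid10 data:
btrfs balance start -dconvert=raid10 -mconvert=raid1c3 /mnt
```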
http://www.dirtcellar.net
-
Originally posted by waxhead View Post
I am biased towards btrfs and I would like to challenge the btrfs haters to come up with what's actually wrong with btrfs.
Yes, it is slow for some things, "raid 5/6" should never have been added so early, and it is a pity that per-subvolume data/metadata profiles are not yet possible. But those are really luxury problems.
"Raid" 0/1/c2/c3 and 10 work great; do compare what you get (for free) with any other filesystem and see if you can beat the features/reliability.
I wish bcachefs all the best, but there is no way I am switching. Remember that as bcachefs evolves, so does btrfs. Perhaps there will be a time when bcachefs is worth switching to, but for now, and probably for many years ahead, btrfs is what I trust with my data.

Also, I've been thinking about a delayed raid c2: since one is most likely always using two similar drives from the same batch with the same characteristics, there is a huge chance that both will die quite close together in time. So I think there could be a benefit from a c2 type of raid 1 where the duplication is delayed (and, for SSDs, also batched/deduped), so the drives would experience a large difference in the number of writes.
Last edited by F.Ultra; 02 January 2024, 01:04 PM.
-
For a good while I'll be using BCacheFS for my tiered gaming array, BTRFS for the OS install drive on my computers and laptops, and ZFS for my NAS.
I just don't feel the need to use one FS for everything, but if I had to get rid of one it would be ZFS, due to its lack of inclusion in the kernel. Even then, I might just move my NAS to Debian so that keeping the NAS software in a good place is not a huge issue.
They're all CoW and operate on many similar principles, just with different features, priorities, and years of maturity. All great filesystems in my view.
If one of them is missing a big feature, though, I think BCacheFS's lack of scrub and self-healing would be the biggest. Not a huge issue for what I'm going to use it for, but it's not ideal for some use cases. The ZFS version has saved my bacon plenty of times. I think Kent has scrub and healing on his list to tackle.
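The tiered array mentioned above can be sketched with bcachefs's documented multi-device target options; labels and device names here are placeholders, not the poster's actual setup:

```shell
# Tiered bcachefs: writes land on the SSD (foreground), hot data is
# promoted to it, and data is migrated to the HDDs in the background.
bcachefs format \
    --label=ssd.ssd1 /dev/nvme0n1 \
    --label=hdd.hdd1 /dev/sda \
    --label=hdd.hdd2 /dev/sdb \
    --foreground_target=ssd \
    --promote_target=ssd \
    --background_target=hdd
mount -t bcachefs /dev/nvme0n1:/dev/sda:/dev/sdb /mnt/games
```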
-
Originally posted by F.Ultra View Post
since one is most likely always using two similar drives from the same batch with the same characteristics, there is a huge chance that both will die quite close together in time