Btrfs RAID 5/6 Sub-Page Support Readied For Linux 5.19
-
Originally posted by commodore256 View Post
Does this mean Raid5/6 won't melt your Btr?
Considering RAID 5/6 has been just as broken for a decade now, I've stopped hoping that it'll ever get fixed. I don't even think this is so much a "fix" for RAID as the porting of a separate feature (sub-page support) to it, while leaving it just as broken as it was before.
-
Originally posted by Developer12 View Post
At this point I think it's very much the other way around. With ZFS landing the ability to add additional disks to a RAID, I don't see many feature reasons to prefer BTRFS.
-
Originally posted by Developer12 View Post
This exists on ZFS in the form of L2ARC. The ability to expand ZFS RAIDs with additional disks is also nearing completion (final testing and review).
Last edited by mdedetrich; 05 May 2022, 07:26 AM.
-
Originally posted by Chugworth View Post
If Btrfs could ever get reliable RAID5/6 and native encryption, then I wouldn't have much need for ZFS anymore. Still, the Btrfs implementation of send/receive makes me a little nervous, in that it seems to work at the file level, whereas in ZFS it works at the block level. That also makes me think that if Btrfs ever did get encryption, it wouldn't be as slick as the ZFS implementation. But as it stands they're two different tools with two different uses. Btrfs works great as a root filesystem in a single-disk or mirrored scenario, and ZFS works great as a data-storage filesystem.
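For anyone curious, the two replication workflows look superficially similar on the command line; the difference the post describes is in what travels over the pipe (ZFS streams blocks of a snapshot, Btrfs streams file-level operations from a read-only snapshot). A minimal sketch, with hypothetical pool, dataset, host, and mount-point names:

```shell
# ZFS: snapshot a dataset, then send it as a block-level stream
# ("tank/data", "backup/data", and "backuphost" are hypothetical names)
zfs snapshot tank/data@monday
zfs send tank/data@monday | ssh backuphost zfs receive backup/data

# Btrfs: send operates on a read-only snapshot of a subvolume
# (paths are hypothetical)
btrfs subvolume snapshot -r /mnt/data /mnt/data-snap
btrfs send /mnt/data-snap | ssh backuphost btrfs receive /backup
```

Both tools also support incremental sends (`zfs send -i`, `btrfs send -p`) once a common snapshot exists on both sides.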
-
Originally posted by plantroon View Post
I thought the same and took btrfs for a spin in my NAS mirror. It is miles behind ZFS. Even something as simple as a mirror of 2 disks is implemented in such a cumbersome way that I had a hard time grasping what I was doing. ZFS is probably the simplest and most powerful piece of software I ever laid hands on. Even if btrfs catches up, I'll not use it for something where I care about the data and about being able to maintain the pool in the long run. I am not saying that btrfs is bad, but it cannot compete with ZFS at this point, and other than the licensing issues (including the necessity to use a DKMS module) there's never been a reason to choose btrfs over zfs.
-
Originally posted by Chugworth View Post
ZFS has a very simple and organized approach to keeping snapshots, whereas in Btrfs each snapshot is basically treated as its own subvolume and can be moved around like a directory. There are advantages and disadvantages to both ways. The Btrfs approach comes in handy for a root filesystem. If I want to roll back to an earlier root snapshot, then while the system is active I can just:
- Rename the active root subvolume
- Create a new root subvolume from an existing root snapshot
- Set the new root subvolume as the default subvolume
- Copy the correct boot files for that snapshot in case there were changes
- Reboot
- Delete the old root subvolume
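The steps above can be sketched as shell commands. This is only an illustration under assumed names (subvolume "root", snapshot "root-snap", device /dev/sda2, top-level volume mounted at /mnt/toplevel); real systems differ in subvolume layout and boot setup:

```shell
# Mount the top-level Btrfs volume (subvolid=5) so subvolumes appear as directories
mount -o subvolid=5 /dev/sda2 /mnt/toplevel

# 1) Rename the active root subvolume out of the way
mv /mnt/toplevel/root /mnt/toplevel/root-old

# 2) Create a new writable root subvolume from an existing read-only snapshot
btrfs subvolume snapshot /mnt/toplevel/root-snap /mnt/toplevel/root

# 3) Set the new root subvolume as the default subvolume
btrfs subvolume set-default /mnt/toplevel/root

# 4) Copy boot files matching that snapshot if the kernel changed, then reboot
#    (layout-dependent, e.g. kernels stored inside the subvolume)
# cp /mnt/toplevel/root/boot/vmlinuz-* /boot/
# reboot

# 5) After rebooting into the new root, delete the old subvolume
btrfs subvolume delete /mnt/toplevel/root-old
```

These commands need root, and step 4 depends entirely on how the bootloader finds its kernel, so treat this as a sketch rather than a recipe.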
Btw, ZFS also exposes each dataset's snapshots in a special .zfs directory at the root of that dataset.
I hate how mounting works in btrfs when you have a mirror though, that was my biggest gripe with it.
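On the .zfs remark above: snapshot contents are browsable read-only without any explicit mount or clone step. A hypothetical example (dataset mounted at /tank/data, snapshot named "daily"):

```shell
# List the dataset's snapshots through the hidden .zfs directory
ls /tank/data/.zfs/snapshot/

# Restore a single file by copying it back out of a read-only snapshot
cp /tank/data/.zfs/snapshot/daily/report.txt /tank/data/report.txt
```

The .zfs directory is traversable even though it is hidden from directory listings by default; `zfs set snapdir=visible tank/data` makes it show up in `ls`.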
-
Originally posted by plantroon View Post
[...] Even something as simple as a mirror of 2 disks is implemented in such a cumbersome way that I had a hard time grasping what I was doing [...]
1) create a raid1 filesystem from scratch:
Code:
# mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb
2) convert an existing filesystem to raid1:
Code:
# btrfs dev add /dev/sdb /mnt/btrfs-filesystem
# btrfs bal start -dconvert=raid1 -mconvert=raid1 /mnt/btrfs-filesystem
-
Originally posted by ferry View Post
Oh, is that documented somewhere? I seem to find only questions about whether mixed pools have TRIM support. You did get that I meant effectively SSD speeds with HDD storage size?
Originally posted by Developer12
This exists on ZFS in the form of L2ARC.
ZFS does not have a write cache in the traditional sense, so tweaking a bunch of ZIL parameters allowed me to trick ZFS into accepting bursts of writes large enough for my use cases. YMMV.
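The commenter doesn't say which parameters they tuned. On Linux, write-burst behavior is commonly shaped by OpenZFS module parameters such as zfs_dirty_data_max (how much uncommitted data may accumulate in memory) and zfs_txg_timeout (how often transaction groups are committed). A sketch of how such tuning is typically persisted; the values are illustrative, not recommendations:

```shell
# /etc/modprobe.d/zfs.conf -- illustrative values only, not a tuning guide
# Allow up to 4 GiB of dirty (not-yet-committed) data in memory
options zfs zfs_dirty_data_max=4294967296
# Commit a transaction group at least every 10 seconds
options zfs zfs_txg_timeout=10
```

The same parameters can be changed at runtime under /sys/module/zfs/parameters/ for experimentation before making anything permanent.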