Originally posted by kreijack
As stated above, in BTRFS a subvolume doesn't have to be in fstab. I put it in fstab when I found it useful to have a "bind". Anyway, I don't see any big advantage in having the mountpoint as a subvolume property instead of in fstab (nor the opposite). In any case you have to set it somewhere. Whether it is done as
Code:
echo >>fstab [.....]
or
Code:
zfs set mountpoint...
it is the same thing. And as for moving, I don't see any problem with cut&pasting a few lines between different fstab files.
I am not a snapper fan. However, it does a lot more than zfs-auto-snapshot: e.g. it supports xfs/ext4 + LVM; it has some form of integration with YaST; it supports snapshots at login time... It is unfair to compare these two tools.
Embedding metadata into volumes allows applying volume ACLs (not file ACLs), NFS/SMB sharing, or delegations in that metadata without touching any files.
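For example (a rough sketch; the pool/dataset name tank/data and the user alice are placeholders):
Code:
# share a dataset over NFS via a dataset property, no /etc/exports edit needed
zfs set sharenfs=on tank/data
# delegate snapshot and mount rights to an unprivileged user
zfs allow alice snapshot,mount tank/data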
When you export a pool and import it on another host (for example in an active/standby cluster), all mount settings move WITH that pool.
This is why using btrfs in such active/standby cluster setups is so hard.
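The failover flow is roughly this (a sketch; tank is a placeholder pool name):
Code:
# on the node giving up the service
zpool export tank
# on the takeover node: datasets come back with their mountpoint,
# sharenfs, etc. properties intact, nothing to copy into any fstab
zpool import tank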
I know that certain aspects of ZFS are a lot better than btrfs (e.g. storage tiering and raid5/6). In other aspects BTRFS is superior (you can add, replace, and remove devices and reshape the filesystem easily). One limitation which seems strange to me is that ZFS doesn't support reflink, and shrinking the filesystem also has some limits.
IMHO these aspects are more valuable than the btrfs/zfs user commands...
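For reference, the btrfs reshaping mentioned above really is just a couple of commands (a sketch; device names and the mount path are placeholders):
Code:
# grow the filesystem by adding a device, then spread data across it
btrfs device add /dev/sdc /mnt/data
btrfs balance start /mnt/data
# later, shrink it again by removing a device online
btrfs device remove /dev/sdc /mnt/data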
ZFS has integrated caching which is designed 100% for ZFS ONLY. With btrfs you must rely on LVM/MD caching or on bcache. None of those technologies are designed for btrfs, and this is why ZFS beats btrfs in literally ALL tests/benchmarks.
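Attaching ZFS's own cache devices is a one-liner (a sketch; device names are placeholders):
Code:
# add an NVMe device as L2ARC read cache
zpool add tank cache /dev/nvme0n1
# add a mirrored SLOG to accelerate synchronous writes
zpool add tank log mirror /dev/nvme1n1 /dev/nvme2n1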
Just that one difference, that ZFS uses free lists and that each block of data carries a ctime, means that, for example, when deciding whether a block stays allocated on removing a snapshot in the middle of a chain, ZFS needs only TWO compares (one comparing the block's ctime with the next snapshot's ctime, and a second comparing the block's ctime with the older snapshot's).
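A toy reading of that rule (NOT the actual ZFS code; simplified integer "ctimes" and made-up names, just to show the two compares):
Code:
# can_free BLOCK_CTIME PREV_SNAP_CTIME DELETED_SNAP_CTIME
# A block the next snapshot no longer references can be freed when the
# middle snapshot goes away iff it was born after the previous snapshot
# (compare 1) and no later than the snapshot being deleted (compare 2);
# otherwise a neighbouring snapshot still holds it.
can_free() {
    local birth=$1 prev=$2 deleted=$3
    [ "$birth" -gt "$prev" ] && [ "$birth" -le "$deleted" ]
}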
Internally, ZFS uses the same technology that on Solaris, and later on Linux, came into use as the SLAB allocator. This is why adding a new disk to a zpool is like adding a new DIMM of RAM: for such an operation you don't need to reformat or repartition your RAM, because the new RAM is just added to the free list. Try comparing btrfs pool creation time with ZFS pool creation time: in the btrfs case you must create allocation metadata, while in the ZFS case the new disk is just added to the free list.
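An easy way to see it for yourself (a sketch; device names are placeholders and the numbers will vary):
Code:
# time creating a pool / filesystem on comparable disks
time zpool create tank /dev/sdb /dev/sdc
time mkfs.btrfs -f -d single /dev/sdd /dev/sde
# growing the ZFS pool later is the same free-list operation
zpool add tank /dev/sdf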
This is why btrfs snapshot operations get slower and slower as the amount of allocated data in the btrfs pool grows, while on ZFS that overhead is constant whatever the size of the zfs pool, and this is why ZFS snapshot operations are 100% deterministic.
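Again, easy to measure (a sketch; dataset and subvolume paths are placeholders):
Code:
# roughly constant time on ZFS regardless of how much data the dataset holds
time zfs snapshot tank/data@before_upgrade
# compare with btrfs as the subvolume fills up
time btrfs subvolume snapshot /mnt/data /mnt/data-snap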
Don't get me wrong, it is good that Linux has at least one OOTB fs with pooled storage, COW and snapshots; however, the design decisions made during btrfs development will always put that FS waaaay behind ZFS.
ZFS is well integrated with boot, providing OOTB support for boot environments, where all the metadata about the separate cloned rootfs volumes is likewise stored in volume metadata.
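With boot environment tooling (beadm on FreeBSD/illumos; zectl is one Linux equivalent) it looks like this (a sketch; the environment name is a placeholder):
Code:
# clone the running root into a new boot environment before an upgrade
beadm create pre_upgrade
# if the upgrade goes wrong, switch back to the old environment
beadm activate pre_upgrade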
btrfs still needs fsck, and it cannot automatically replace a faulted disk with one of the disks added to the pool as spare disk(s).
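The ZFS side of that (a sketch; pool and device names are placeholders):
Code:
# add a hot spare to the pool
zpool add tank spare /dev/sdg
# allow automatic replacement of a failed device
zpool set autoreplace=on tank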
And consider that everything I've described here is still not even 50% of the functionality that you cannot find in btrfs.
Really, try to use ZFS, because it will be kind of an eye opener.
