You can already give ZFS a partition, or even just a file on another partition. It doesn't care, and I don't think it ever did in the recent past.
ZFS uses its own I/O scheduler, the ZIO pipeline, and when it detects it has a whole disk it sets the kernel's disk scheduler accordingly on its own. When you give it partitions, you have to set the scheduler to none (or noop on older kernels) yourself so the kernel's scheduler doesn't get in ZIO's way. As far as I know, that's about as much as ZFS cares about the disk-versus-partition distinction.
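A minimal sketch of what that scheduler change looks like, assuming the pool's partition lives on a disk called sda (a placeholder; substitute your own device):

```shell
# Check which scheduler the kernel is currently using for the disk
# (the active one is shown in [brackets])
cat /sys/block/sda/queue/scheduler

# Set it to none so ZFS's ZIO pipeline handles I/O ordering itself
# (requires root; use "noop" instead of "none" on older kernels)
echo none > /sys/block/sda/queue/scheduler
```

This setting does not persist across reboots; a udev rule is the usual way to make it stick.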
I tested bcachefs and it works as expected for my use cases. When it reaches a level of reliability and performance similar to btrfs, I will transition to it. While I would prefer ZFS to btrfs or bcachefs, I am not able to dedicate a full disk to ZFS. Give me a partition-based ZFS and I will be first in line to test it.
You can do this right now. When people do ZFS-on-root installs, they partition the disk (e.g. one vfat partition for /boot/efi and another for the pool).
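A rough sketch of that layout, with all device and pool names as placeholders (this is the general shape of a ZFS-on-root partitioning, not a complete install recipe):

```shell
# EFI system partition plus a second partition for the pool
sgdisk -n1:1M:+512M -t1:EF00 /dev/sda   # small vfat partition for /boot/efi
sgdisk -n2:0:0      -t2:BF00 /dev/sda   # rest of the disk for ZFS

mkfs.vfat /dev/sda1

# The pool sits on a partition, not a whole disk
zpool create rpool /dev/sda2
```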
You can already give ZFS a partition, or even just a file on another partition. It doesn't care, and I don't think it ever did in the recent past.
AFAIK ZFS has been happy to accept files in place of partitions/drives since before it even supported real drives; during the prototyping stage it ran using files as fake drives.
I've given ZFS partitions by partition label, so that when I run "zpool status" I get a listing of human-readable names.
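A sketch of that approach, assuming a GPT disk and with all labels and device names as examples:

```shell
# Give partition 2 on the disk a human-readable GPT partition label
sgdisk -c2:tank-data /dev/sda

# Create the pool via the stable by-partlabel path instead of sdXN
zpool create tank /dev/disk/by-partlabel/tank-data

# zpool status now lists "tank-data" rather than a raw device name
zpool status tank
```

Using /dev/disk/by-partlabel also protects against device names shifting between boots.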
Is this cognitive dissonance? I kind of like the point of Linus' message, but it factually falls short, because bcachefs has not eaten anyone's data. The data has just been on involuntary loan until the implementation is complete. This is significantly different from a certain other COW FS that eats your data at random, especially if you happen to run out of disk space.
Eating your data means "the FS is fucked, and there is nothing we can do," whereas with bcachefs it's "the FS is fucked, we need to implement recovery, and then you get everything back."
Perhaps to settle this cognitive dissonance, we can conclude that bcachefs never promised stability, just data safety, and Linus only picked at bcachefs' stability. So both can be true.
Last edited by varikonniemi; 08 April 2024, 03:03 PM.
Is this cognitive dissonance? I kind of like the point of Linus' message, but it factually falls short.
Did you read the same message I did? The filesystem is literally marked EXPERIMENTAL, which means "not stable," in case that wasn't clear. Notably, there was an issue that resulted in a backported fix.
Anyway, the reason fixes are able to make it in so quickly is precisely because it is marked experimental.
You can already give ZFS a partition, or even just a file on another partition. It doesn't care, and I don't think it ever did in the recent past.
Currently I have ZFS datasets in a zpool created on an LVM2 logical volume, whose contents I haven't got round to moving to a zpool on real hardware.
I have used 2 TB files on an NFSv3 volume, mounted from a Dell OneFS appliance, as vdevs to create a zpool. It wasn't the fastest, but it worked. Syncing local snapshots to this remote system was tolerably fast. As the exercise was to prove a point (about the OneFS service), it didn't have to be production quality.
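The file-backed-vdev trick looks roughly like this; paths and the pool name are examples, and the NFS mount is assumed to already exist:

```shell
# Create two sparse 2 TB files on the NFS mount to serve as vdevs
truncate -s 2T /mnt/nfs/vdev0 /mnt/nfs/vdev1

# ZFS accepts plain files in place of disks, which makes this kind
# of experiment possible without dedicating hardware
zpool create testpool /mnt/nfs/vdev0 /mnt/nfs/vdev1
zpool status testpool
```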
You can consider zfs to be just a fancy file system that can reside on just about any reasonable storage.
The simplest use would be one partition, or logical volume, serving as a zpool's sole vdev, with a single ZFS dataset on it. Quotas and reservations can ensure enough space is available for snapshots. Although it's rather like towing a trailer behind a Lamborghini.
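A sketch of that single-vdev setup with snapshot headroom; the pool, dataset, and size values are all placeholders:

```shell
# One partition as the pool's only vdev, one dataset on it
zpool create pool /dev/sda2
zfs create pool/data

# Cap the dataset below the pool's capacity so snapshots
# always have room to grow
zfs set quota=80G pool/data

# Guarantee the dataset a minimum amount of pool space
zfs set reservation=10G pool/data
```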