Ubuntu 19.10 Indeed Working On "Experimental ZFS Option" In Ubiquity Installer
Originally posted by gorgone: Hopefully Ubuntu will not screw up again.
Originally posted by smitty3268: Except they aren't distributing it with Linux. They're letting people install it separately, just like they already do with Nvidia's proprietary drivers. Doubt they'd be doing it that way if they didn't think they had to.

Except that it does ship with Ubuntu's kernel: /lib/modules/4.15.0-29-generic/kernel/zfs/zfs/zfs.ko
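A quick way to confirm this on an Ubuntu install (the kernel version in the path will differ from the one above; the dpkg query is just one way to see which package owns the module):

# list the ZFS module shipped for the running kernel
find /lib/modules/$(uname -r) -name 'zfs.ko*'
# show where modprobe resolves it and which package provides that file
modinfo -n zfs
dpkg -S "$(modinfo -n zfs)"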
Originally posted by starshipeleven: ZFS and btrfs have the same basic approach: they don't "fsck", they "scrub" to repair the filesystem, or repair on the fly when a damaged file is read (provided an undamaged redundant copy exists).
The fsck tool for btrfs, for example, is NOT supposed to be used to repair the filesystem unless a developer tells you to; it exists mostly to fix issues caused by bugs.
ZFS has a similar tool called "zdb", which again is more of a development tool than an fsck equivalent.
Both btrfs and ZFS in their default single-disk format keep fully redundant metadata, so the filesystem can recover itself from metadata corruption.
By default neither can repair your data with a scrub, because the data itself has no redundancy. Since they are CoW filesystems, though, an unclean shutdown (pulling the plug) and the like won't corrupt data in the first place.
If you want full protection from random data corruption (bit rot) as well, you have to configure full data redundancy on the drive (and accept that everything now takes twice the space, since it is written twice).
For btrfs it's "btrfs balance start -dconvert=dup /path/to/mount/point";
for ZFS it's "zfs set copies=2 pool/dataset".
On an SSD this may not be good enough, because the SSD controller can see that you are writing the same data twice and may (or may not; SSDs are black boxes and can do things conventional hard drives don't) map both redundant blocks to the same physical area, in which case whatever corrupts one copy corrupts the other too.
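Roughly, with placeholder mount point and dataset names (and noting that on btrfs the conversion is done with a balance filter, while on ZFS copies=2 only affects data written after the property is set):

# btrfs: convert existing data chunks to the duplicated "dup" profile, then verify and scrub
btrfs balance start -dconvert=dup /path/to/mount/point
btrfs filesystem df /path/to/mount/point
btrfs scrub start /path/to/mount/point

# ZFS: keep two copies of every data block in this dataset, then scrub the pool
zfs set copies=2 pool/dataset
zfs get copies pool/dataset
zpool scrub pool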
Originally posted by starshipeleven: ZFS memory requirements aren't significant unless you are running a RAID, and even then they aren't huge unless you enable caches and deduplication (which do matter for serious arrays).
ECC RAM only protects against bit flips in RAM, which are a very rare event; most bit rot comes from storage-controller or other system errors that have nothing to do with RAM and are far more frequent than RAM bit flips.
So while ECC is indeed recommended in a NAS or storage server, where data storage is the whole point, on a client device it's much less of a requirement.
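For what it's worth, on Linux the ARC size can be inspected and capped through the standard OpenZFS module parameters; the 2 GiB cap below is just an arbitrary example value:

# current ARC size and the configured ceiling (0 means the built-in default)
awk '$1 == "size" {printf "ARC: %.2f GiB\n", $3 / 1073741824}' /proc/spl/kstat/zfs/arcstats
cat /sys/module/zfs/parameters/zfs_arc_max
# persistently cap the ARC at 2 GiB (takes effect when the module is reloaded or on reboot)
echo "options zfs zfs_arc_max=2147483648" | sudo tee /etc/modprobe.d/zfs.conf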
Originally posted by phoenix_rizzen: You can actually compile the ZFS bits directly into the FreeBSD kernel now, no modules required. Any tools developed around booting come with ZFS support (boot environments added automatically to the loader menu, EFI booting support, yadda yadda). So yes, it's definitely more integrated into FreeBSD, contrary to the OP I was responding to.

It certainly would be funny if OpenZFS ended up working better on Windows than Linux. I was actually amazed they got it working, with drive letter support, even. Definitely alpha quality right now, but interesting nonetheless. Actually, it'd be even funnier if OpenZFS-on-Windows worked better for a home file server than Microsoft's home server (with Storage Spaces?) that they abandoned after two releases.
Originally posted by starshipeleven: Ah crap, I forgot that ZFS users on Lunix (tm) use a 128k block size.
L2ARC eats about 400 bytes of RAM per block, so the total consumption depends on the block size.
On FreeNAS the default is 16k or 8k, so for each 100 GB of SSD L2ARC you need roughly 2.5 GB or 5 GB of RAM respectively. That's kind of significant. With a 128k block size it's negligible.
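Reproducing that arithmetic (taking the ~400 bytes of header per cached block quoted in the post at face value):

# RAM needed for L2ARC headers = (cache size / record size) * ~400 bytes
awk 'BEGIN {
    cache = 100 * 2^30;                     # 100 GiB of SSD L2ARC
    split("8192 16384 131072", rs, " ");    # 8k, 16k and 128k record sizes
    for (i = 1; i <= 3; i++)
        printf "recordsize %6d -> ~%.1f GiB of headers in RAM\n", rs[i], cache / rs[i] * 400 / 2^30;
}'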
If anyone is interested, I just found out you can support the maintainer of ZFS on Mac & Windows, Jörgen Lundman, on Patreon: https://www.patreon.com/lundman.