Ubuntu 19.10 Indeed Working On "Experimental ZFS Option" In Ubiquity Installer

  • #31
    Originally posted by Chugworth
    Hopefully Ubuntu's prominent use of ZFS will help to dispel the silly notion that licensing prevents it from being distributed with Linux.
    Except they aren't distributing it with Linux. They're letting people install it separately, just like they already do with nvidia's proprietary drivers. Doubt they'd be doing it that way if they didn't think they had to.



    • #32
      Originally posted by gorgone
      hopefully ubuntu will not screw up again
      To this and other comments: lay off the trash talking of Ubuntu. Anyone who has actually FOLLOWED the mailing list, announcements, etc. would realize the press sensationalized it. Valve only read the sensationalism and further spread "fake news". The only thing Canonical stated from the BEGINNING was that 32-bit packages would be frozen and maintained under the last LTS release. Nothing less, nothing more. No statements really changed until the public outrage.

      Prior to this, if you dug deeper, you'd discover that 32-bit packages weren't really being maintained at all to begin with. In other words, Canonical was just making official something that has been happening for a while now. Wine32 would have continued working (it does NOT come from Canonical), various other apps would have continued working, etc. The only statement officially made was that Ubuntu would not support new 32-bit libraries or applications. I don't blame them for this: even the Raspberry Pi and various other boards support 64-bit, and as no one has bothered to "donate to the cause", as it were, the Ubuntu team has to focus its priorities.

      If you used 19.10, you'd notice there are very few differences from 19.04. Wine still works, in both 32- and 64-bit mode (and it has since 19.10 was a thing), 32-bit apps still work, etc. The issue was one of messaging, not of anything software related. Half the people here trash talk Ubuntu, yet, according to DistroWatch, the majority of users run a distro... based on Ubuntu. I'll avoid the obscenities, but lay off. Half of you guys wouldn't even be running Linux if it weren't for Ubuntu, and DistroWatch proves that much.



      • #33
        Originally posted by smitty3268

        Except they aren't distributing it with Linux. They're letting people install it separately, just like they already do with nvidia's proprietary drivers. Doubt they'd be doing it that way if they didn't think they had to.
        You don't think so, huh? I just extracted the "filesystem.squashfs" file from an Ubuntu 18.04 server installation image, and look at what's sitting in the path below:
        ./lib/modules/4.15.0-29-generic/kernel/zfs/zfs/zfs.ko
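
        If anyone wants to check for themselves, here's a rough sketch; the ISO filename and the squashfs path inside the image are assumptions and vary between releases:

        # Loop-mount the installer ISO read-only (filename is a placeholder)
        sudo mkdir -p /mnt/iso
        sudo mount -o loop,ro ubuntu-18.04-live-server-amd64.iso /mnt/iso
        # List the squashfs contents without extracting and look for the ZFS module
        unsquashfs -l /mnt/iso/install/filesystem.squashfs | grep 'zfs\.ko'
        sudo umount /mnt/iso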



        • #34
          Originally posted by starshipeleven
          ZFS and btrfs have the same basic handling: they don't "fsck", they "scrub" to fix the filesystem, or they repair on the fly when you read a damaged file (provided they have undamaged redundancy for it).

          The fsck tool for btrfs, for example, is NOT supposed to be used to fix the filesystem unless instructed by a developer, as it's there mostly to fix issues caused by bugs.
          ZFS has a similar tool called "zdb", which again is more of a development tool than a fsck equivalent.

          Both btrfs and ZFS in a default single-disk format will have fully redundant metadata, so in case of metadata corruption the filesystem can recover itself.

          By default neither will be able to save the data with a scrub, as there is no data redundancy. Since they are CoW filesystems, though, you won't get this kind of corruption from an unclean shutdown (pulling the plug) and the like.

          If you also want full protection from random data corruption (bit rot), you have to set them to keep full data redundancy on the drive (and accept that everything doubles in size, as it is now written twice).
          For btrfs it's "btrfs balance start -dconvert=dup /path/to/mount/point";
          for ZFS it's "zfs set copies=2 pool/dataset".

          Although on an SSD this may not be good enough: the SSD controller will see that you are writing the same data and may (or may not, SSDs are black boxes and can do things that conventional hard drives don't) map both redundant blocks to the same physical area, in which case any issue that corrupts one block will corrupt the other too.
          zdb is a read-only tool. ZFS tends to be resilient enough that it avoids the situations where a repair tool would be used on other filesystems, and it self-heals / reports damage to files that other filesystems would ignore. If ZFS gets damaged badly enough to need manual repairs, any other filesystem would likely be a complete loss, and the chances of an expert being able to fix it are low (say 50%).
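
          For reference, a minimal sketch of setting and then checking single-disk data redundancy along the lines of the commands quoted above; the mount point and the "tank/data" dataset are placeholders:

          # btrfs: convert existing data to the DUP profile, then confirm it
          sudo btrfs balance start -dconvert=dup /path/to/mount/point
          sudo btrfs filesystem df /path/to/mount/point   # "Data" should now report DUP

          # ZFS: keep two copies of every data block (only affects data written from now on),
          # then scrub to verify checksums and surface any repaired errors
          sudo zfs set copies=2 tank/data
          sudo zpool scrub tank
          sudo zpool status tank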



          • #35
            Originally posted by starshipeleven
            ZFS memory requirements aren't significant unless you are doing RAID, and even then they're not huge unless you enable caches and deduplication (which do matter for serious arrays).

            ECC RAM only protects against bit flips in RAM, which are a very rare event; most bit rot comes from storage controller or other system errors that have nothing to do with RAM and are much more frequent than RAM bit flips.

            So while it's indeed recommended to have it in a NAS or storage server where the whole point is data storage, on a client device it's much less of a requirement.
            Raidz/mirroring makes little difference to memory requirements. The pain point is deduplication: unless you have large amounts of memory for caching, write performance will go south for any significant amount of unique data unless you use an absurdly large record size. I have enabled deduplication with good performance on a system that I built for an elderly friend that had 32GB of storage and 8GB of RAM (which I made ECC mostly because I could). It was a small-form-factor machine that used a SandForce-based thumb drive for storage. The deduplication slightly increased performance because storage was throttled by the USB 2.0 bus. I also configured LZ4 compression, which helped even more.
            Last edited by ryao; 04 July 2019, 03:03 PM.
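
            For the curious, a minimal sketch of that kind of setup and of checking whether it pays off; "tank/home" is a placeholder dataset, and dedup only makes sense when the dedup table fits comfortably in RAM:

            # Enable LZ4 compression and deduplication on a dataset
            sudo zfs set compression=lz4 tank/home
            sudo zfs set dedup=on tank/home

            # Check how well they are working
            sudo zfs get compressratio tank/home   # effective compression ratio
            sudo zpool list tank                   # the DEDUP column shows the dedup ratio
            sudo zdb -D tank                       # dedup table statistics (rough RAM cost)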



            • #36
              Originally posted by phoenix_rizzen
              You can actually compile the ZFS bits directly into the FreeBSD kernel now, no modules required. Any tools developed around booting come with ZFS support (and boot environments added automatically to the loader menu, EFI booting support, yadda yadda). So yes, it's definitely more integrated into FreeBSD, contrary to the OP I was responding to.

              It certainly would be funny if OpenZFS ended up working better on Windows than Linux. I was actually amazed they got it working, with drive letter support, even. Definitely alpha quality right now, but interesting nonetheless. Actually, it'd be even funnier if OpenZFS-on-Windows worked better for a home file server than Microsoft's home server (with Storage Spaces?) that they abandoned after 2 releases.
              If you build from source, you can compile ZoL into the Linux kernel instead of building it as a module too.
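
              Roughly, that route looks like the sketch below, going by the ZoL build scripts; the kernel source path is a placeholder and the details differ between releases:

              # From a ZFS on Linux source checkout, configure for a built-in build
              ./configure --enable-linux-builtin --with-linux=/usr/src/linux
              ./copy-builtin /usr/src/linux     # copies the ZFS sources into the kernel tree

              # Then enable CONFIG_ZFS=y in the kernel config and rebuild as usual
              cd /usr/src/linux
              make menuconfig                   # enable ZFS under File systems
              make -j"$(nproc)" && sudo make modules_install install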



              • #37
                Originally posted by starshipeleven
                Ah crap I forgot that ZFS users on Lunix (tm) use a 128k block size.

                L2ARC eats 400 bytes per block, so the total consumption depends on the block size.

                On FreeNAS the default is 16k or 8k; for each 100GB of SSD L2ARC you need something like 2.5GB or 5GB of RAM respectively. That's kind of significant. With a 128k block size it's negligible.
                For a client device, 100GB sounds like way overkill. Remember L2ARC is not persistent, so it needs to repopulate on each boot. People with servers sometimes run initialization code to bring things into the ARC for this reason. You might be better off running lvmcache underneath ZFS instead.
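
                For reference, the arithmetic behind the figures quoted above, taking the quoted 400 bytes per L2ARC header at face value:
                100GB of L2ARC at 8k blocks ≈ 12.5M headers × 400 bytes ≈ 5GB of RAM
                100GB of L2ARC at 16k blocks ≈ 6.25M headers × 400 bytes ≈ 2.5GB of RAM
                100GB of L2ARC at 128k blocks ≈ 0.78M headers × 400 bytes ≈ 0.3GB of RAM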



                • #38
                  Originally posted by nivedita
                  For a client device, 100GB sounds like way overkill.
                  I said "serious arrays" above when I talked of "caches" (and meaning L2ARC).



                  • #39
                    If anyone is interested, I just found out you can support the maintainer of ZFS on Mac & Windows, Jörgen Lundman, on Patreon: https://www.patreon.com/lundman.



                    • #40
                      Originally posted by starshipeleven
                      Ah crap I forgot that ZFS users on Lunix (tm) use a 128k block size.

                      L2ARC eats 400 bytes per block, so the total consumption depends on the block size.

                      On FreeNAS the default is 16k or 8k; for each 100GB of SSD L2ARC you need something like 2.5GB or 5GB of RAM respectively. That's kind of significant. With a 128k block size it's negligible.
                      L2ARC header size has been 70 bytes for a few years now. Get with the times.
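
                      With 70-byte headers the same arithmetic shrinks accordingly:
                      100GB of L2ARC at 8k blocks ≈ 12.5M headers × 70 bytes ≈ 0.9GB of RAM
                      100GB of L2ARC at 16k blocks ≈ 6.25M headers × 70 bytes ≈ 0.45GB of RAM
                      100GB of L2ARC at 128k blocks ≈ 0.78M headers × 70 bytes ≈ 55MB of RAM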

