ZFS/Zsys Code Seeing Important Performance Fix Ahead Of Ubuntu 20.04 LTS


  • #11
    Originally posted by k1e0x View Post

    Just trying to explain how it is. In enterprise nobody cares what filesystem is on your desktop, they care what the storage array is so ZFS vs BTRFS isn't really a thing.. it's ZFS vs NetApp vs Dell EMC. The kind of discussion where people talk about petabyte density per rack.. BTRFS unfortunately isn't in that discussion.

    However SuSE is a fine distro and it is perfectly good to use for a desktop or workstation. (SLES is the enterprise version)
    Another piece of BS. BTRFS has the same integrity checks as ZFS, that is checksums for all data and metadata.
    ZFS is rock solid on Solaris, very emphatically not on Linux nor FreeBSD.

    Comment


    • #12
      Originally posted by StarterX4 View Post
      Just out of curiosity – has anybody tested how ZOL's ZFS stability and its recovery compare to BTRFS? Especially in cases like power loss.
      There are a lot of factors here that make it difficult to determine how much data is lost when the power goes out. If the drives are caching (and in some cases lying to the OS about it), or writes are cached at some other layer (because the application doesn't play nice, for example), you can still lose data; you must be diligent at all levels here. What doesn't happen is total loss: you usually get back to a consistent state, though perhaps not the most recent one.

      I had one major data loss event with ZFS (degraded pool on a virtual machine), but it also saved my arse at least once (detecting corruption on a disk that didn't report it was broken).
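      To illustrate the "application doesn't play nice" point above: on any filesystem, an application's write is not durable until it has been flushed through every cache down to the device. A minimal Python sketch of doing that explicitly (the file path is just an example):

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data and force it down to stable storage before returning."""
    with open(path, "wb") as f:
        f.write(data)
        f.flush()             # flush Python's userspace buffer to the kernel
        os.fsync(f.fileno())  # ask the kernel to flush the file to the device
    # fsync the containing directory too, so the file's name entry
    # also survives a crash, not just its data blocks.
    dir_fd = os.open(os.path.dirname(os.path.abspath(path)), os.O_RDONLY)
    try:
        os.fsync(dir_fd)
    finally:
        os.close(dir_fd)

durable_write("/tmp/zfs_thread_demo.txt", b"journal entry")
```

      Everything still buffered above the drive is lost territory on power failure; `fsync` is the application's end of the bargain, and the drive honouring the flush command is the hardware's.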

      Comment


      • #13
        Originally posted by k1e0x View Post

        See:
        https://documentation.suse.com/sles/...a-snapper.html



        I noticed it can't roll back /boot either? Odd.. it has a whole section about things excluded from the rollback too. I noticed Ubuntu does restore /boot on rollback, so it can undo kernel changes. SuSE appears not to be able to do that? Maybe it's a layout thing, or they can't boot off it? idk.

        ZFS can rollback everything. You can delete entire datasets (or volumes) and restore them.

        https://www.freebsd.org/cgi/man.cgi?...l&sektion=&n=1
        I suppose that's a limitation from SuSE's subvolume implementation. "snapper rollback" simply rolls back to an earlier snapshot of the root subvolume. If you keep multiple subvolumes for other directories like /home, then you would need to roll them back separately. On my personal computers I keep /home on the root subvolume since I'm the only user and I don't use that for file storage. I do keep separate subvolumes for /var/log and /var/cache. That way on a rollback I retain my logs, and I don't waste lots of space by keeping old cache files in snapshots. The idea of rolling back absolutely everything on the entire volume is not really the most efficient way of handling it, as some things simply don't need to roll back.

        Now as far as /boot goes, on an EFI system that needs to be FAT32, which does present an issue for complete system rollback. I handle that by using a Pacman hook which copies /boot/ to /.bootbackup/ if a change is made in that folder. That way the snapshots would include the contents of the /boot/ folder. If I want to rollback to a snapshot with an earlier kernel version, I just need to make sure I copy the boot files as well.
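        For anyone curious what such a hook looks like: a minimal sketch, assuming rsync is installed (the hook name, path and exact targets here are my guesses, not necessarily the actual setup described above):

```ini
# /etc/pacman.d/hooks/95-bootbackup.hook  (hypothetical name and path)
[Trigger]
Operation = Upgrade
Operation = Install
Operation = Remove
Type = Path
Target = boot/*

[Action]
Description = Backing up /boot to /.bootbackup/...
When = PostTransaction
Exec = /usr/bin/rsync -a --delete /boot/ /.bootbackup/
```

        Running it PostTransaction means the copy reflects the newly installed kernel, so any root-subvolume snapshot taken afterwards carries a usable copy of /boot inside it.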

        Comment


        • #14
          Originally posted by Chugworth View Post
          I suppose that's a limitation from SuSE's subvolume implementation. "snapper rollback" simply rolls back to an earlier snapshot of the root subvolume. If you keep multiple subvolumes for other directories like /home, then you would need to roll them back separately. On my personal computers I keep /home on the root subvolume since I'm the only user and I don't use that for file storage. I do keep separate subvolumes for /var/log and /var/cache. That way on a rollback I retain my logs, and I don't waste lots of space by keeping old cache files in snapshots. The idea of rolling back absolutely everything on the entire volume is not really the most efficient way of handling it, as some things simply don't need to roll back.

          Now as far as /boot goes, on an EFI system that needs to be FAT32, which does present an issue for complete system rollback. I handle that by using a Pacman hook which copies /boot/ to /.bootbackup/ if a change is made in that folder. That way the snapshots would include the contents of the /boot/ folder. If I want to rollback to a snapshot with an earlier kernel version, I just need to make sure I copy the boot files as well.
          From what I've looked at, Ubuntu's layout with Zsys creates a separate /boot pool but does not roll back /home or /var/log. So (by default) it can undo a kernel change while keeping user data.

          Comment


          • #15
            As someone who has been using OpenSolaris since 2009 with ZFS Boot Environments (BEs), and later OpenIndiana, first with GRUB and later with the "illumos loader" (the FreeBSD loader ported to illumos), I can only be happy about OpenZFS development in Linux distributions, in the boot loader (GRUB) and the systems around it, even if I am not sure how portable the work is.

            I have also been using Btrfs with Ubuntu, still now on my boot drive with apt-btrfs-snapshot installed, but I need to delete old snapshots manually and the commands are much more cumbersome to use than with OpenZFS. I also keep all my important data on mirrored OpenZFS.
            I truly miss an automatic way of dealing with snapshots, which is not present on Linux: OpenSolaris, OpenIndiana and OmniOS have a service, part of the "Time Slider" feature, that makes sure older snapshots are automatically deleted.

            I must correct the article in the sense that BEs and datasets are independent file systems, not merely snapshots. There is a distinction between "snapshots" (which are read-only on ZFS) and "datasets" (file systems), which can continue to be used as separate systems.

            I need to underline that everything needed to boot the system lives inside the BE. If one needs separate datasets for system folders, they are usually created under the BE's dataset and mounted where needed.
            Some systems, like TrueOS, do not follow BE management and create separate datasets for system folders, which then can't be managed as part of a BE on the same system.
            If Linux distros followed the BE way of managing systems, Linux, illumos and FreeBSD could all be booted from the same ZFS pool, each from its respective BE on OpenZFS.

            When you upgrade a BE on illumos/OpenZFS (OpenIndiana, OmniOS, DilOS, etc.) and the kernel is upgraded, the upgrade is performed on a clone dataset inside the new BE and becomes available upon reboot, while the current working BE remains intact.
            The important thing is that with BEs one can keep using every option in the boot menu; every BE is a separate file system, so one can have as many descendant BEs with different configurations as one likes, while needing disk space only for changed data, files and settings, because ZFS clones use copy-on-write to share unchanged data.
            All of this has basically been in production since ZFS's introduction in 2006 (together with Solaris Zones, Crossbow virtual networking, and at one point even reboot-less live kernel patching).
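            Stripped of the beadm tooling, that clone-based upgrade can be sketched with raw OpenZFS commands (pool and BE names are made up for illustration; this needs root and an actual pool, so treat it as a sketch of the mechanism, not a recipe):

```shell
# Freeze the running BE, then make a writable copy-on-write clone of it.
zfs snapshot rpool/ROOT/current@pre-upgrade
zfs clone rpool/ROOT/current@pre-upgrade rpool/ROOT/upgraded
# ...run the upgrade inside the clone, mounted at an alternate root...
# Point the boot loader at the new BE; the old one stays intact.
zpool set bootfs=rpool/ROOT/upgraded rpool
# If the new kernel misbehaves, just switch bootfs back to rpool/ROOT/current.
```

            Because the clone shares all unchanged blocks with its origin snapshot, the new BE initially costs almost no space; only blocks the upgrade rewrites get allocated.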

            Also take notice of the distinction between OpenZFS and ZFS.
            OpenZFS is multiplatform and CDDL (copyleft) licensed, with many distinguishing features such as transparent encryption and many other "feature flags", unlike Oracle's proprietary, Solaris-only ZFS.
            The last version supported on both is zpool version 28 (zpool create -o version=28); OpenZFS has since moved past numbered versions in favour of feature flags.

            Don't listen to people who say that "ECC RAM is not important". If you value your data and your business: no matter what file system you use, faulty (or intermittently faulty) RAM can leave you in despair, and (as I tested) no filesystem checksumming, mirroring or replication will help you if your data is corrupted in RAM before it is saved to stable storage. (I tried it by giving bad RAM to ZFS, and sure enough, like any file system, it can't help with faulty RAM.)
            And yes, only OpenZFS and XFS (and some others) do not require a check-disk pass on boot, and ZFS can run a "scrub" in the background (besides being transaction-based and always consistent on disk). It really is a no-brainer why that is a must in production use, where boot time matters.
            Last edited by Markore; 04-15-2020, 06:46 AM.

            Comment


            • #16
              I saw that Ubuntu was shipping with ZFS and had this vision of those guys in Boston screaming and breaking stuff. The plan a few years ago was to get Oracle to 'just' change the license.

              Comment


              • #17
                Originally posted by YamashitaRen View Post
                It is a shame ZFS is only supported in the (Gnome) Ubuntu installer.
                Worse, it requires you to kill your whole disk before making ZFS partitions. There is no ZFS support in advanced partitioning.
                Too bad, I was really interested in trying ZFS on the computer I am currently using. (Hanging from time to time, which requires hard reboot.) Will stay on OpenSUSE for now.
                ZFS itself makes it trivial to do a clean install on, say, a USB key and then mirror that back to where you want it. It's not for everyday users, though, but I guess they would struggle with partitioning as well, as that often involves shrinking existing partitions etc. Try it in a VM. ZFS attach/detach is the solution.
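                Concretely, the attach/detach trick goes something like this (pool and device names are hypothetical; it needs root and real disks, so this is only a sketch):

```shell
# The pool was installed on the USB key (da0p3); mirror it onto the target disk.
zpool attach rpool da0p3 ada0p3
# Watch 'zpool status' until the resilver completes, then drop the USB key:
zpool detach rpool da0p3
# The pool now lives entirely on the internal disk.
```

                The resilver copies everything live, so the system stays usable throughout and no reinstall is needed on the destination disk.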

                Comment


                • #18
                  Originally posted by jacob View Post

                  Another piece of BS. BTRFS has the same integrity checks as ZFS, that is checksums for all data and metadata.
                  ZFS is rock solid on Solaris, very emphatically not on Linux nor FreeBSD.
                  Come on man... It still doesn't even do RAID5 or 6. After more than a decade. https://btrfs.wiki.kernel.org/index.php/Status

                  I wanted to love btrfs. I used it a few years ago (single disk, no RAID) and lost my data to it (due to a btrfs bug), only to be met with the notorious "your fault, it is clearly marked experimental".

                  Comment
