EXT4 Gets A Nice Batch Of Fixes For Linux 5.8

  • #41
    Originally posted by kreijack View Post

    In BTRFS a subvolume/snapshot is not required to be mounted explicitly. It has a natural placement in the filesystem. When I take a snapshot I also choose where the snapshot is placed, and I can change that placement later with a simple "mv" command (in this respect a subvolume is like a directory; you can also remove it with a simple "rm").

    In my setup, I put the subvolumes I use (/ , /boot, /debian ) in the fstab. Their snapshots live in another subvolume which is also in the fstab. I have only a few entries in the fstab.

    When I had to perform a rollback, I did something like:
    Code:
    # mv debian debian-broken
    # mv debian-snapshot-20200606 debian
    This works without touching fstab. (Yes, I know that ZFS can do it with only one command...)
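    For comparison, a minimal sketch of that single ZFS command, assuming a hypothetical dataset tank/debian with a snapshot named 20200606:
    Code:
    # zfs rollback tank/debian@20200606   # discards everything written after that snapshot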
    With ZFS you can create a snapshot with "mkdir </volume>/.zfs/snapshot/<snapshot_name>" and remove it with "rmdir </volume>/.zfs/snapshot/<snapshot_name>". This is useful when the volume is exported over NFS. With btrfs such functionality is not possible, because all the snapshots live in another subvolume.
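    A sketch of that workflow as described above, assuming a hypothetical dataset tank/data mounted at /tank/data with the .zfs control directory made visible:
    Code:
    # zfs set snapdir=visible tank/data
    # mkdir /tank/data/.zfs/snapshot/before-upgrade   # creates the snapshot tank/data@before-upgrade
    # rmdir /tank/data/.zfs/snapshot/before-upgrade   # destroys it again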

    As stated above, in BTRFS a subvolume doesn't have to be in fstab. I put it in fstab when I find it useful to have a "bind". Anyway, I don't see any big advantage in having the mountpoint as a subvolume property instead of in the fstab (nor the opposite). In either case you have to set it somewhere. Whether it is done with
    Code:
    echo >>fstab [.....]
    or
    Code:
    zfs set mountpoint...
    it is the same thing. And as for moving, I don't see any problem with cutting and pasting a few lines between different fstab files.
    Look, with ZFS you don't need to fiddle with fstab at all. On Solaris, /etc/vfstab does not contain even a single entry, because a "zpool import -a" executed at boot from the miniroot (initrd) imports and mounts all volumes using the metadata embedded in each volume.
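    A minimal sketch of that property-driven mounting, assuming a hypothetical pool named tank:
    Code:
    # zpool import -a                          # import every pool found on the system
    # zfs set mountpoint=/debian tank/debian   # the mountpoint travels with the dataset itself
    # zfs mount -a                             # mount all datasets according to their properties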

    I am not a snapper fan. However, it does a lot more than zfs-auto-snapshot: e.g. it supports xfs/ext4 + LVM, it has some form of integration with YaST, it supports snapshots at login time... It is unfair to compare these two tools.
    Because btrfs does not provide a way to add custom settings to pool/volume/snapshot metadata, all the necessary snapshot metadata must be stored in regular files. ZFS also provides automatic inheritance of those settings by child datasets from the parent dataset. This is extremely useful.
    Embedding metadata in the volumes allows volume ACLs (not file ACLs), NFS/SMB sharing, or delegations to be applied through that metadata without using any files.
    When you export a pool and import it on another host (for example in an active/standby cluster), all the mount settings move WITH that pool.
    This is why using btrfs is so hard in such active/standby cluster setups.
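    A sketch of that property mechanism, using a hypothetical pool tank, a hypothetical user alice, and com.example:retention as an arbitrary user-property name:
    Code:
    # zfs set com.example:retention=30 tank/data      # custom metadata stored in the dataset itself
    # zfs get com.example:retention tank/data/child   # child datasets inherit it automatically
    # zfs set sharenfs=on tank/data                   # NFS sharing as a dataset property, no /etc/exports
    # zfs allow alice snapshot,mount tank/data        # delegate snapshot rights without root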

    I know that certain aspects of ZFS are a lot better than btrfs (e.g. storage tiering and raid5/6). In other aspects BTRFS is superior (you can add, replace, and remove devices and reshape the filesystem easily). One limitation which seems strange to me is that ZFS doesn't support reflink (see the sketch below), and shrinking the filesystem also has some limits.

    IMHO these aspects are more valuable than the btrfs/zfs user commands...
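    A quick way to see the reflink difference on a given filesystem (the file names are hypothetical):
    Code:
    $ cp --reflink=always big.img clone.img
    On btrfs this clones instantly and shares extents; on a filesystem without reflink/clone support it fails instead of falling back to a full copy.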
    As long as btrfs is still a classic FS that uses allocation structures, and not free lists like ZFS (whatever is not on the zpool free list is allocated), btrfs will remain only a wannabe ZFS. Only this and nothing more.
    ZFS has integrated caching which is designed 100% for ZFS only. With btrfs you must rely on LVM/MD caching or bcache. None of those technologies are designed for btrfs, and this is why ZFS beats btrfs in literally ALL tests/benchmarks.
    Just that one difference, that ZFS uses free lists and that each block of data carries its creation time, means that, for example, when deciding whether to keep a block allocated while removing a snapshot in the middle of a chain, ZFS needs only TWO compares (one to compare the block's creation time with the next snapshot's creation time, and a second to compare it with the older snapshot's).
    Internally ZFS uses the same technique as the SLAB allocator, first used on Solaris and later on Linux. This is why adding a new disk to a zpool is like adding a new DIMM of RAM: you don't need to reformat or repartition your RAM, because the new RAM is simply added to the free list. Try comparing btrfs pool creation time with ZFS pool creation time. With btrfs you must create the allocation metadata; with ZFS the new disk is simply added to the free list.
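    A sketch of the two expansion paths, with hypothetical device and mount point names:
    Code:
    # zpool add tank /dev/sdc               # the new disk's space goes straight onto the free list
    # btrfs device add /dev/sdc /mnt/pool   # add the device to a mounted btrfs...
    # btrfs balance start /mnt/pool         # ...then rebalance to spread existing chunks onto it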

    This is why snapshot operations on btrfs get slower and slower as the amount of allocated data in the btrfs pool grows, while on ZFS that overhead is constant whatever the size of the zfs pool, and this is why ZFS snapshot operations are 100% deterministic.

    Don't get me wrong. It is good that Linux has at least one OOTB fs with pooled storage, COW and snapshots; however the design decisions made during btrfs development will always put that FS waaaay behind ZFS.

    ZFS is well integrated with boot, providing OOTB support for boot environments, where all the metadata about the separate cloned rootfs volumes is likewise stored in the volumes' metadata.
    btrfs still needs fsck and cannot automatically replace a faulted disk from the disk(s) added to the pool as spares.
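    The ZFS side of that claim looks roughly like this, assuming a hypothetical pool tank and spare /dev/sdd:
    Code:
    # zpool add tank spare /dev/sdd   # dedicate a hot spare; the fault-management daemon can resilver onto it when a disk faults
    # zpool set autoreplace=on tank   # also rebuild automatically onto a new disk inserted in the faulted slot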

    And consider that with everything I've already described here I'm still not at 50% of the functionality which you still cannot find in btrfs.
    Really, try using ZFS, because it will be kind of an eye opener.



    • #42
      Originally posted by DanL View Post
      Call me old school, but I have my own backup methods.
      your backup methods can't work reliably in the absence of atomic snapshots
      Originally posted by DanL View Post
      For my use case, I'll take the performance of ext4 over the fancy features of btrfs and zfs.
      if you have performance issues with btrfs, you are doing it wrong



      • #43
        Originally posted by Old Grouch View Post
        Re: snapshots. I like the NILFS approach. Every COW operation is a potential snapshot. There's an issue with NILFS at the moment that generates kernel crashes on mounting NILFS volumes under certain circumstances, which means that I'm stuck on an old kernel, as it hits me 100% of the time on booting with newer kernels. I have an 'unusual' set up. So while the implementation has a problem right now, I like the approach. The COW journal loops, so I can read-only mount the filesystem to any point in the past covered by the journal: I don't need to take snapshots at regular intervals, although you can. Snapshots are then preserved from overwriting, but if you take too many, you run out of disk space.
        I have no idea what NILFS is, however if that FS still uses allocation structures, and not free lists like ZFS, it will still not be able to beat ZFS on any snapshot operations.



        • #44
          Originally posted by kloczek View Post
          Part of the ZFS code is, for example, shadowfs, which allows a hardware upgrade (moving from an old set of devices to a brand new one) of something like a petabyte pool in a matter of seconds. In other words, almost the same amount of source and machine code provides functionality which you've never seen on Linux.
          you can easily add/remove devices to/from a mounted btrfs on the fly. meanwhile you can't change the size of zfs, which is ridiculous for something calling itself a filesystem
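          for illustration, a sketch of that on-the-fly reshaping, with hypothetical devices and mount point:
          Code:
          # btrfs device add /dev/sdd /mnt      # grow a mounted filesystem on the fly
          # btrfs device remove /dev/sdb /mnt   # shrink it again; data is migrated off the device first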
          Last edited by pal666; 06 June 2020, 12:37 PM.



          • #45
            Originally posted by kloczek View Post
            Using btrfs you have something like this using snapper, but it is crazy complicated
            using btrfs all you need is a setuid helper. using zfs all you need is a different operating system, which is crazy complicated indeed.



            • #46
              Originally posted by kloczek View Post
              Call me the same. I have been using ZFS for more than 15 years.
              if you aren't illiterate, the last 11 of those 15 passed with the knowledge that the design of zfs is obsolete



              • #47
                Originally posted by kloczek View Post
                As long as btrfs is still a classic FS
                as long as zfs doesn't use btrees, it can't be considered a real file system



                • #48
                  Originally posted by kloczek View Post
                  Because btrfs forces you to mount the snapshot somewhere
                  if someone is forcing you, call the police. you don't have to mount btrfs snapshots. zfs zealots look like flat earthers



                  • #49
                    Originally posted by kreijack View Post
                    Code:
                    $(seq 10000)
                    Code:
                    {1..1000}
                    is much better
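                    a quick sketch of the difference, using the same count for both forms:
                    Code:
                    for i in $(seq 10000); do echo "$i"; done   # forks an external seq process
                    for i in {1..10000}; do echo "$i"; done     # expanded by the shell itself, no extra process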



                    • #50
                      Originally posted by Almindor View Post

                      Thanks for taking the time to list all these. What I'm getting in the end is that ZFS is a pretty complicated (overcomplicated IMO) filesystem that seems to try to solve problems that should IMHO be solved on non-FS layers. I don't think a typical desktop setup would benefit that much from running it.

                      Snapshotting is probably the most interesting aspect. Given the Linux kernel drama tho, I'll be staying with Ext4 for the foreseeable future.
                      For a non-esoteric setup BTRFS is quite stable (I have used it since ~2009). The worst thing that can happen is low performance for some workloads, e.g. doing an "apt update" on a BTRFS filesystem stored on an HDD is slow.

                      However, there are some workarounds that can increase the performance, for example the ones sketched below.
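                      A hedged sketch of the usual workarounds (mount options plus disabling copy-on-write for heavy-rewrite data); the device and paths are hypothetical:
                      Code:
                      # mount with noatime and transparent compression to reduce metadata and data writes
                      mount -o noatime,compress=zstd /dev/sdb1 /mnt/data
                      # disable copy-on-write for new files in directories holding databases or VM images
                      chattr +C /mnt/data/vm-images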

