EXT4 Gets A Nice Batch Of Fixes For Linux 5.8

  • pal666
    Senior Member
    • Apr 2013
    • 9177

    #51
    Originally posted by kloczek View Post
    In the case of ZFS you don't need to fiddle with fstab at all.
    that's true of any filesystem. fstab is only consulted by the mount command when it's given partial arguments. more flat earth stories
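    a minimal illustration of what "partial arguments" means (device and mount point are made up):
    Code:
    # full arguments: /etc/fstab is never consulted
    $ sudo mount -t btrfs /dev/sdb1 /mnt
    # partial arguments: mount looks up the missing half in /etc/fstab
    $ sudo mount /mnt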
    Originally posted by kloczek View Post
    Because btrfs does not provide a way to add custom settings to pool/volume/snapshot metadata, all necessary snapshot metadata must be stored in regular files.
    lol, suddenly "everything is a file" is a bad thing?
    Originally posted by kloczek View Post
    Try comparing btrfs pool creation time with ZFS pool creation.
    so did you try it, or are you just substituting vivid imagination for measurements? my btrfs pools are created instantly

    Comment

    • pal666
      Senior Member
      • Apr 2013
      • 9177

      #52
      Originally posted by kloczek View Post
      btrfs command syntax is crazily complicated. In the case of ZFS all pool operations are done using the zpool command and all volume operations using zfs. The syntax is so well designed that you can guess what you need to give as a parameter.
      commands are not the filesystem. they are just applications. you could write your own with syntax of your liking
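      a minimal sketch of that idea: a zpool-flavoured wrapper around the btrfs tools (the "bpool" name and verb set are made up):
      Code:
      #!/bin/sh
      # hypothetical "bpool": zpool-like verbs mapped onto btrfs commands
      case "$1" in
          create) shift; mkfs.btrfs "$@" ;;
          add)    shift; btrfs device add "$@" ;;
          status) shift; btrfs filesystem show "$@" ;;
          *)      echo "usage: bpool create|add|status ..." >&2; exit 1 ;;
      esac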

      Comment

      • pal666
        Senior Member
        • Apr 2013
        • 9177

        #53
        Originally posted by kreijack View Post
        6) ease of growing/shrinking the filesystem (depends on point #4) [with or without adding/removing devices]
        zfs can't pass this test
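        for reference, growing and shrinking a mounted btrfs is a one-liner (paths are illustrative):
        Code:
        # shrink the mounted filesystem by 10GiB, online
        $ sudo btrfs filesystem resize -10g /mnt
        # or remove a whole device; btrfs migrates its data off first
        $ sudo btrfs device remove /dev/sdc /mnt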

        Comment

        • pal666
          Senior Member
          • Apr 2013
          • 9177

          #54
          Originally posted by kloczek View Post
          My first distribution was SLS
          which didn't exist in 1991

          Comment

          • pal666
            Senior Member
            • Apr 2013
            • 9177

            #55
            Originally posted by kloczek View Post
            I said every snapshot in the case of btrfs must be mounted explicitly
            you said bullshit
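            for the record, a btrfs snapshot is just a subvolume: it is visible the moment it is created, with no extra mount step (paths are made up):
            Code:
            # snapshot a subvolume; the result appears as an ordinary directory
            $ sudo btrfs subvolume snapshot /mnt/data /mnt/data-snap
            # readable immediately through the existing mount
            $ ls /mnt/data-snap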

            Comment

            • pal666
              Senior Member
              • Apr 2013
              • 9177

              #56
              Originally posted by Almindor View Post
              Snapshotting is probably the most interesting aspect. Given the Linux kernel drama, though, I'll be staying with Ext4 for the foreseeable future.
              there's no kernel drama for btrfs, no need to punish yourself with snapshotless life

              Comment

              • kreijack
                Senior Member
                • May 2015
                • 203

                #57
                Originally posted by kloczek View Post
                Try comparing btrfs pool creation time with ZFS pool creation. In the case of btrfs you must create allocation metadata. In the case of ZFS a new disk is just added to the free list.

                This is why snapshot operations on btrfs get slower and slower as the amount of allocated data in the btrfs pool grows, while on ZFS that overhead is constant whatever the size of the zfs pool, and this is why ZFS snapshot operations are 100% deterministic.
                What you are writing is not true. The creation of a BTRFS filesystem is fast, very fast. The same goes for adding a disk, and the same for creating a snapshot. All these operations require writing a small amount of data which depends neither on the size of the available disks nor on the size of the filesystem.

                Creation of a 1.7PB btrfs filesystem
                Code:
                $ time sudo mkfs.btrfs -mraid6 -draid6 /dev/loop[0-9]*
                btrfs-progs v5.6
                See http://btrfs.wiki.kernel.org for more information.
                
                Label: (null)
                UUID: c5e81d28-984a-40ea-bd90-6ed7b4ba3f6d
                Node size: 16384
                Sector size: 4096
                Filesystem size: 1.64PiB
                Block group profiles:
                Data: RAID6 9.52GiB
                Metadata: RAID6 1.25GiB
                System: RAID6 40.00MiB
                SSD detected: yes
                Incompat features: extref, raid56, skinny-metadata
                Checksum: crc32c
                Number of devices: 42
                Devices:
                ID SIZE PATH
                1 40.00TiB /dev/loop0
                2 40.00TiB /dev/loop1
                ...
                
                real 0m21.830s
                user 0m0.103s
                sys 0m0.421s

                Adding a 50TB disk
                Code:
                $ time sudo btrfs dev add /dev/loop42 /mnt/other/
                
                real 0m0.562s
                user 0m0.007s
                sys 0m0.025s
                Regarding the data used
                Code:
                $ sudo ./btrfs fi us /mnt/other/
                [sudo] password for ghigo:
                Overall:
                Device size: 1.69PiB
                Device allocated: 11.35GiB
                Device unallocated: 1.69PiB
                Device missing: 0.00B
                Used: 2.95MiB
                Free (estimated): 1.61PiB (min: 1.61PiB)
                Data ratio: 1.05
                Metadata ratio: 1.05
                Global reserve: 3.25MiB (used: 0.00B)
                Multiple profiles: no
                
                Data,RAID6: Size:9.52GiB, Used:2.69MiB (0.03%)
                /dev/loop0 243.75MiB
                [...]
                Metadata,RAID6: Size:1.25GiB, Used:112.00KiB (0.01%)
                /dev/loop0 32.00MiB
                System,RAID6: Size:40.00MiB, Used:16.00KiB (0.04%)
                /dev/loop0 1.00MiB
                Unallocated:
                /dev/loop0 40.00TiB
                To manage 1.6PiB, BTRFS wrote 2.69MiB + 112KiB + 16KiB ≈ 2.8MiB of data

                (the last output was produced by the development version of btrfs-progs)

                Comment

                • kreijack
                  Senior Member
                  • May 2015
                  • 203

                  #58
                  Originally posted by pal666 View Post
                  Code:
                  {1..1000}
                  is much better
                  Thanks !!!
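                  for readers who missed the earlier exchange, brace expansion is presumably how the loop-device backing files above were created, e.g. (file names hypothetical):
                  Code:
                  # create 42 sparse 40TiB backing files in one command
                  $ truncate -s 40T disk{0..41}.img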

                  Comment

                  • kloczek
                    Senior Member
                    • Feb 2020
                    • 162

                    #59
                    Originally posted by pal666 View Post
                    you can easily add/remove devices to/from a mounted btrfs on the fly. meanwhile you can't change the size of zfs, which is ridiculous for something calling itself a filesystem
                    1) I was talking about autoreplace of a failed vdev in the pool when you have a spare disk in the pool.
                    btrfs still does not provide a way to add such spare devices to the pool.

                    2) ZFS can change any vdev in the pool on the fly. If you set autoexpand=on on the zpool and a device's LUN size changes, ZFS will automatically grow the zpool.
                    https://docs.oracle.com/cd/E19253-01...ifk/index.html
                    You can also swap a single vdev from a smaller one to a bigger one, and with zpool autoreplace=on ZFS will automatically recognise that the new device has a changed WWN and will start the resilvering process. All without executing a single command.

                    3) You can change the zpool size by adding or removing a vdev as well. This is now supported both by regular ZFS from Oracle Solaris and by OpenZFS, whose code ZoL uses.

                    4) ZFS is filesystem and storage management in a single layer. Because it is a single layer, the ZFS code can read device zones over SCSI commands and place write logs, translation logs and root blocks on the disk edges to speed up the whole pool. It also allows tricks like this: if you replace one of the disks in a mirrored root pool, ZFS automatically resilvers not only the zpool data but the UEFI/boot block as well. All without touching grub commands.

                    And btrfs, when used with whole disks, is now trying to mimic this ZFS behavior, because it makes sense to integrate volume management with block device layer management.
                    What is still missing in btrfs is the ability to toggle the on/off state of individual devices in the pool. That allows, for example: take one device of a mirror offline -> upgrade the disk firmware -> put the disk back online and sync only the transactions that should have been written to that disk (instead of resilvering the whole disk).
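                    for illustration, the ZFS workflow described above boils down to a handful of commands (pool name "tank" and device names are hypothetical):
                    Code:
                    # let the pool grow and heal itself without further commands
                    $ sudo zpool set autoexpand=on tank
                    $ sudo zpool set autoreplace=on tank
                    # attach a hot spare for autoreplace to draw on
                    $ sudo zpool add tank spare /dev/sdf
                    # firmware-upgrade dance on one mirror member: offline it,
                    # flash it, bring it back; only missed transactions resilver
                    $ sudo zpool offline tank /dev/sdd
                    $ sudo zpool online tank /dev/sdd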

                    Comment

                    • kloczek
                      Senior Member
                      • Feb 2020
                      • 162

                      #60
                      Originally posted by pal666 View Post
                      if you aren't illiterate, the last 11 of those 15 passed with the knowledge that the design of zfs is obsolete
                      Yep .. and this is why Linux today desperately needs ZFS: because NONE of the Linux solutions can provide what ZFS already provided in its first official Solaris version.
                      You made my day dude

                      If you go back to the time when Solaris 10 was released, you will find that before the whole Solaris code base was open sourced under the CDDL license, the first two chunks of Solaris code to be published were DTrace and ZFS. Sun gave everyone the chance to copy/mimic ZFS in their own implementations. *BSD took that code and within about a year had the first implementation integrated into FreeBSD. Linux chose its usual NIH-syndrome way (NIH -> Not Invented Here), and even today bpftrace is still not Linux's DTrace, and btrfs, because it does not use a free list underneath, is still unable to provide even 20% of the functionality of the initial ZFS version. In the meantime there have been almost 50 incremental changes. BTW btrfs still has no internal versioning which would allow something like "btrfs upgrade <pool>". In the case of ZFS, upgrading a pool usually takes only a few seconds, because the whole upgrade only changes the format of some small metadata, and that operation can be done under heavy load.
                      Just look at the wiki page to see what is still missing from every non-ZFS filesystem: https://en.wikipedia.org/wiki/ZFS
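                      for comparison, the pool upgrade mentioned above is a two-command affair on ZFS (pool name "tank" is hypothetical):
                      Code:
                      # list pools whose on-disk format is older than the running code
                      $ zpool upgrade
                      # rewrite the small version metadata; safe to run under load
                      $ sudo zpool upgrade tank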

                      Comment
