
Linux 5.14 SSD Benchmarks With Btrfs vs. EXT4 vs. F2FS vs. XFS


  • #61
    Originally posted by S.Pam View Post
Don't imply that a disk's internal csum is of value here. Most disks have one, but in reality it doesn't matter: there are several failure modes that can corrupt data anyway.

Some of them are:
    • Not honouring barriers
    • Lost cache on bus, disk or host bus resets
• Firmware bugs
    • Powersave bugs
    • Bad controllers
    • Bad USB-SATA bridges
    • ...
These things happen even with enterprise hardware, and even enterprise RAID controllers don't have methods to handle corrupt data from individual disks.
    Most of these bugs can screw BTRFS, too, depending on just how bad they hit you!

And BTRFS RAID is no magic bullet.

    Comment


    • #62
      Originally posted by Keats View Post

      Last I checked, SSD wear is a non-issue on any recent SSDs, unless perhaps you're using the cheapest QLC you could find.
      How come? Isn't it still a consequence of flash technology that it wears on erasure?

      Comment


      • #63
        Originally posted by sinepgib View Post

        How come? Isn't it still a consequence of flash technology that it wears on erasure?
Yes, you are correct. But large drives are better at wear levelling and have a much higher TBW rating.

        Endurance is approximately:

        SLC=100k writes
        MLC=10k writes
        TLC=3k writes
QLC=1k writes
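As a rough illustration of how capacity and P/E cycles combine into a TBW figure (a simplified sketch: the per-cell numbers above are ballpark, and the write-amplification factor below is an assumption, not a datasheet value):

```python
# Rough SSD endurance sketch: TBW ~ capacity * P/E cycles / write amplification.
# All figures here are illustrative assumptions, not measurements.

def estimate_tbw_tb(capacity_gb: float, pe_cycles: int, write_amp: float = 2.0) -> float:
    """Approximate total terabytes writable before flash wear-out."""
    return capacity_gb * pe_cycles / write_amp / 1000.0

# A 1 TB TLC drive (~3k P/E cycles) vs a 1 TB QLC drive (~1k P/E cycles):
print(estimate_tbw_tb(1000, 3000))  # → 1500.0 TB
print(estimate_tbw_tb(1000, 1000))  # → 500.0 TB
```

This also shows why bigger drives of the same cell type last longer: capacity multiplies directly into the TBW estimate.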

        Comment


        • #64
It looks like F2FS is the best solution for SSDs and, I assume, USB drives as well.

          Comment


          • #65
            Response to Eumaios

My comment was not directed at Michael; it was directed at all of the comments about why other people's computers were slow.

            Comment


            • #66
How come F2FS improved so much? Is it a kernel 5.14 thing?

              Comment


              • #67
                Originally posted by sandy8925
So... btrfs on a hard drive for home/personal server use cases should be fine, right?
                Yes.

                If your disk is /dev/sdb1 then use:
                Code:
                # mkfs.btrfs -R free-space-tree -L my-btrfs-disk /dev/sdb1
                To mount:
                Code:
                # mount LABEL=my-btrfs-disk /mnt/my-btrfs-root -o noatime,subvolid=5
                # btrfs subvolume create /mnt/my-btrfs-root/@home
                # mount LABEL=my-btrfs-disk /home -o noatime,subvol=@home
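To make those mounts persistent, the equivalent /etc/fstab entries would look something like this (a sketch reusing the label and subvolume names from the commands above):

```
# /etc/fstab — label and subvolume names taken from the example above
LABEL=my-btrfs-disk  /mnt/my-btrfs-root  btrfs  noatime,subvolid=5    0 0
LABEL=my-btrfs-disk  /home               btrfs  noatime,subvol=@home  0 0
```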
Some other useful commands are:

                Lists all known Btrfs filesystems on the system:
                Code:
                # btrfs filesystem show
                Label: 'btrfs-root' uuid: 446d32cb-a6da-45f0-9246-1483ad3420e0
                Total devices 1 FS bytes used 89.60GiB
                devid 1 size 229.47GiB used 99.06GiB path /dev/sda3
                
                Label: 'boot' uuid: b1ae03e7-e6c2-4efe-90ec-57a54e296e2e
                Total devices 1 FS bytes used 35.64MiB
                devid 1 size 1.00GiB used 342.38MiB path /dev/sda2
                
                Label: 'usb-backup' uuid: df68a30d-d26e-4b9c-9606-a130e66ce63d
                Total devices 1 FS bytes used 685.68GiB
                devid 1 size 927.51GiB used 721.02GiB path /dev/sde1
Show disk space usage and allocation (instead of using the df tool):
Code:
# btrfs filesystem usage -T /mnt/rootvol/
                Overall:
                Device size: 229.47GiB
                Device allocated: 99.06GiB
                Device unallocated: 130.41GiB
                Device missing: 0.00B
                Used: 92.03GiB
                Free (estimated): 134.23GiB (min: 69.03GiB)
                Free (statfs, df): 134.23GiB
                Data ratio: 1.00
                Metadata ratio: 2.00
                Global reserve: 382.42MiB (used: 0.00B)
                Multiple profiles: no
                
                Data,single: Size:91.00GiB, Used:87.18GiB (95.80%)
                /dev/sda3 91.00GiB
                
                Metadata,DUP: Size:4.00GiB, Used:2.42GiB (60.63%)
                /dev/sda3 8.00GiB
                
                System,DUP: Size:32.00MiB, Used:16.00KiB (0.05%)
                /dev/sda3 64.00MiB
                
                Unallocated:
                /dev/sda3 130.41GiB
Show logged error counts:
                Code:
                # btrfs device stats /mnt/rootvol/
                [/dev/sda3].write_io_errs 0
                [/dev/sda3].read_io_errs 0
                [/dev/sda3].flush_io_errs 0
                [/dev/sda3].corruption_errs 0
                [/dev/sda3].generation_errs 0
Scrub your disk regularly (monthly or so is enough):
                Code:
# btrfs scrub start /mnt/rootvol/
                scrub started on /mnt/rootvol/, fsid 446d32cb-a6da-45f0-9246-1483ad3420e0 (pid=8175)
                
                # btrfs scrub status /mnt/rootvol/
                UUID: 446d32cb-a6da-45f0-9246-1483ad3420e0
                Scrub started: Sun Aug 29 13:40:26 2021
                Status: running
                Duration: 0:00:45
                Time left: 0:02:20
                ETA: Sun Aug 29 13:43:32 2021
                Total to scrub: 92.03GiB
                Bytes scrubbed: 22.36GiB (24.30%)
                Rate: 508.87MiB/s
                Error summary: no errors found
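One way to automate that monthly scrub is a small cron script (the path below is an illustrative choice; adjust the mount point to your system and mark the script executable):

```
#!/bin/sh
# /etc/cron.monthly/btrfs-scrub — hypothetical location, adjust to taste.
# -B runs the scrub in the foreground so cron can report a failing exit code.
btrfs scrub start -B /mnt/rootvol/
```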
                Enable compression on some files/folders (only newly written data will be compressed)
                Code:
                # btrfs property set <dir-or-file> compression zstd
                One-time compression of existing files.
                Code:
                # btrfs filesystem defragment -czstd -vr <dir-or-file>
Use the compress mount option:
                Code:
                # mount -o compress=zstd
                # mount -o compress=zstd:5
                # mount -o compress=lzo
                # mount -o compress (this is zlib)
                # mount -o compress-force=zstd
                # mount -o compress-force=zstd:15
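The same compress options work from /etc/fstab for a persistent setup; for example (a sketch reusing the label from above, with zstd level 3, the default zstd level):

```
LABEL=my-btrfs-disk  /mnt/my-btrfs-root  btrfs  noatime,compress=zstd:3  0 0
```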
                Reference: https://wiki.tnonline.net/w/Category:Btrfs
                Last edited by S.Pam; 29 August 2021, 07:50 AM.

                Comment


                • #68
                  Originally posted by pal666 View Post
                  he means database takes care of data integrity even without any filesystem(on raw block device). that's database's job
Only if enabled, though, and AFAIK both MySQL/MariaDB and PostgreSQL have it disabled by default.
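For PostgreSQL specifically, cluster-wide page checksums are fixed at initdb time, and you can check the current state from psql (shown as a sketch, not run here):

```sql
-- Check whether page checksums were enabled when the cluster was initialized
-- (set via `initdb --data-checksums`, or later with `pg_checksums` on a
-- stopped cluster).
SHOW data_checksums;
```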

                  Comment


                  • #69
                    Originally posted by ojab View Post
                    Can I haz the same tests, but with btrfs nodatacow added?
What's the use of btrfs if you disable snapshots and checksumming? Use xfs/ext4 then.
It would be better to test all of these filesystems along with ZFS as well, with noatime and in a RAID 1 config: you always want at least RAID 1 security, and who runs atime on a CoW filesystem?

                    best regards.

                    Comment


                    • #70
                      Originally posted by gadnet View Post

What's the use of btrfs if you disable snapshots and checksumming? Use xfs/ext4 then.
It would be better to test all of these filesystems along with ZFS as well, with noatime and in a RAID 1 config: you always want at least RAID 1 security, and who runs atime on a CoW filesystem?

                      best regards.
You still get snapshots and all the other features with btrfs. Even reflinks work with nodatacow.

                      Comment
