Linux 5.14 SSD Benchmarks With Btrfs vs. EXT4 vs. F2FS vs. XFS

  • #51
    Originally posted by coder View Post
    Yes.

    Those of you using BTRFS, try this:

    sudo lsattr -d /var/lib/*

    For me, I get C on mariadb/, mysql/, and pgsql/. Distro: OpenSUSE. If you don't know what that means, read this:
    Directories that are installed as part of a system package can of course ship with +C set out of the box. But a package can't do that in advance for files an application creates on its own for each user at first startup, such as Firefox's profile databases or a music player's library.
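
    For files you control, a workaround is to create the directory yourself and flag it before the application's first run, since No_COW only affects files created after the attribute is set on the directory. A rough sketch, with the path just as an example:

    mkdir -p ~/.mozilla/firefox      # create the profile directory before first launch
    chattr +C ~/.mozilla/firefox     # new files created inside inherit No_COW
    lsattr -d ~/.mozilla/firefox     # should now show the 'C' flag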



    • #52
      Originally posted by flower View Post

      Many HDDs and SSDs use CRC32 internally to verify data; as Btrfs uses CRC32 too, it's pretty useless.
      And there is still integritysetup - if you use an external drive for the integrity data it doesn't have ANY performance penalty. I have been using this in a RAID10 setup for quite a while (the checksum is sha256).
      Don't imply that the disks' internal checksums are of value here. Most drives have them, but in practice it doesn't matter. There are several failures that can get past them anyway.

      Some of them are:
      • Not honouring barriers
      • Lost cache on bus, disk or host bus resets
      • Firmware bugs
      • Powersave bugs
      • Bad controllers
      • Bad USB-SATA bridges
      • ...
      Even with enterprise hardware these things happen. Even on enterprise RAID controllers these things happen, and those controllers have no way to handle corrupt data from individual disks.
      Last edited by S.Pam; 28 August 2021, 02:09 AM.



      • #53
        Originally posted by pal666 View Post
        He means the database takes care of data integrity even without any filesystem (on a raw block device). That's the database's job.
        As far as I know, most databases do not keep duplicate data that would let them repair themselves. Even if MariaDB/InnoDB uses an internal CRC32, that lets it detect errors, not correct bad data. Repairing a database with DB tools is time-costly and a very different process from having Btrfs self-heal!

        Just to be clear: even with nodatacow/nodatasum, you can use snapshots with Btrfs. You do lose the detection and self-heal features, as well as the guarantee of a correct atomic state with the rest of the underlying filesystem. nodatacow also affects the integrity guarantees of your backups, unless you take specific measures to deal with it.
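
        As a rough illustration (paths are just examples), a subvolume flagged No_COW can still be snapshotted; it just loses the data checksums:

        btrfs subvolume create /mnt/data/db
        chattr +C /mnt/data/db                                        # data written here gets no checksums and no self-heal
        btrfs subvolume snapshot -r /mnt/data/db /mnt/data/db-snap    # snapshots still work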

        So, while it is certainly possible to build applications with internal backup, integrity and healing features, it is usually a lot trickier to manage and to get working with guarantees.

        With VMs, we certainly do not have this possibility on all setups. What if you run guests without support for advanced filesystems?

        IMHO Btrfs gives a sysadmin a really strong, simple way to guarantee integrity, manageability and performance (recovery time), which previously was very hard to achieve across the board.

        Last edited by S.Pam; 28 August 2021, 02:13 AM.



        • #54
          Originally posted by sinepgib View Post
          Is there a way to compare the effects of each filesystem on SSD wear? That would be interesting. Specifically, I'd like to know how much of a negative effect journaling has. It may be really bad or it may be nearly negligible.
          Last I checked, SSD wear is a non-issue on any recent SSDs, unless perhaps you're using the cheapest QLC you could find.
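
          For anyone who still wants to compare filesystems on this front, one rough approach is to read the drive's own write counters before and after an identical workload on each filesystem; a sketch, assuming an NVMe drive (device names are placeholders and SMART attribute names vary by vendor):

          sudo nvme smart-log /dev/nvme0 | grep -Ei 'data_units_written|percentage_used'
          sudo smartctl -A /dev/sda    # SATA SSDs: look for Total_LBAs_Written / Wear_Leveling_Count
          # run the same workload on each filesystem and compare the delta in the written-data counter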



          • #55
            Originally posted by S.Pam View Post

            Don't imply that the disks' internal checksums are of value here. Most drives have them, but in practice it doesn't matter. There are several failures that can get past them anyway.

            Some of them are:
            • Not honouring barriers
            • Lost cache on bus, disk or host bus resets
            • Firmware bugs
            • Powersave bugs
            • Bad controllers
            • Bad USB-SATA bridges
            • ...
            Even with enterprise hardware these things happen. Even on enterprise RAID controllers these things happen, and those controllers have no way to handle corrupt data from individual disks.
            True, but I still see no reason why I should use Btrfs when I can get the same integrity level without a performance impact using integritysetup.
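
            For reference, a minimal sketch of such a setup (device names are placeholders; check integritysetup(8) on your version for the exact options):

            sudo integritysetup format /dev/sdb1 --integrity sha256    # wipes the device, writes integrity metadata
            sudo integritysetup open /dev/sdb1 idata --integrity sha256
            sudo mkfs.ext4 /dev/mapper/idata                           # or use the mapped device as a RAID/LVM member
            # putting the integrity metadata on a separate drive, as described above, should be what --data-device is for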



            • #56
              I'm still curious about the CPU overhead of filesystems. You could compare kernel CPU time for that. Filesystems like Btrfs have features that should eat up CPU (e.g. checksums), and I'm curious how big that cost is. It could also make a significant difference on mobile systems, like notebooks.
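
              A rough way to get at that without special tooling is to compare kernel (system) CPU time for the same workload on each filesystem; a sketch, with the mount point and sizes as arbitrary examples:

              sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
              /usr/bin/time -f 'write: %e s elapsed, %U s user, %S s sys' \
                  dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=4096 conv=fsync
              sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
              /usr/bin/time -f 'read: %e s elapsed, %U s user, %S s sys' \
                  dd if=/mnt/test/bigfile of=/dev/null bs=1M
              # %S is the kernel time charged to dd; work done in background kernel threads
              # (e.g. Btrfs checksum/compression workers) may need a system-wide profiler like perf to capture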



              • #57
                Originally posted by curfew View Post

                No-one in their sane mind would enable COW for databases but of course that's exactly what he will do.
                I do, with Optane and ZFS, but I tune ashift and recordsize accordingly: http://www.linuxsystems.it/2018/05/o...t4-benchmarks/
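
                Roughly what that tuning looks like (pool/dataset names and values are just examples; recordsize=16k matches InnoDB's page size, 8k would match PostgreSQL):

                zpool create -o ashift=12 tank /dev/nvme0n1                                # 4K physical sectors
                zfs create -o recordsize=16k -o atime=off -o primarycache=metadata tank/db # match the DB page size, let the DB do its own caching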


                • #58
                  Originally posted by S.Pam View Post

                  Don't imply that the disks' internal checksums are of value here. Most drives have them, but in practice it doesn't matter. There are several failures that can get past them anyway.

                  Some of them are:
                  • Not honouring barriers
                  • Lost cache on bus, disk or host bus resets
                  • Firmware bugs
                  • Powersave bugs
                  • Bad controllers
                  • Bad USB-SATA bridges
                  • ...
                  Even with enterprise hardware these things happen. Even on enterprise RAID controllers these things happen, and those controllers have no way to handle corrupt data from individual disks.
                  Exactly. I caught lots of these problems thanks to Btrfs; ext4 servers just died silently instead.
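
                  For anyone wondering how those problems actually surface, the per-device error counters and a scrub are usually where they show up (the mount point is just an example):

                  sudo btrfs device stats /mnt/data     # read/write/corruption error counters per device
                  sudo btrfs scrub start -B /mnt/data   # re-reads everything and verifies checksums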


                  • #59
                    Originally posted by S.Pam View Post
                    Just to be clear: even with nodatacow/nodatasum, you can use snapshots with Btrfs.
                    That's a half-truth: if you take snapshots frequently you're basically doing CoW anyway, thus losing most of the performance benefits.
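
                    A quick way to see that, assuming a No_COW file inside a subvolume you snapshot (paths are just examples):

                    filefrag -v /mnt/data/db/table.ibd   # note the physical extent offsets
                    sudo btrfs subvolume snapshot /mnt/data/db /mnt/data/db-snap
                    dd if=/dev/zero of=/mnt/data/db/table.ibd bs=4K count=1 conv=notrunc,fsync
                    filefrag -v /mnt/data/db/table.ibd   # the rewritten block now sits in a new extent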


                    • #60
                      Originally posted by Keats View Post

                      Last I checked, SSD wear is a non-issue on any recent SSDs, unless perhaps you're using the cheapest QLC you could find.
                      SSD wear is a **VERY** big issue, especially on modern SSDs, which are cheaper and can withstand fewer write cycles. Go tell that to Apple M1 users who trashed their SSDs in less than six months just because of swap, while Apple refuses to provide warranty because SSDs are considered "wearable parts". Being Apple, they are obviously soldered, so you have to trash the whole MacBook instead of just the SSD.