Linux 5.14 SSD Benchmarks With Btrfs vs. EXT4 vs. F2FS vs. XFS


  • darkbasic
    replied
    Originally posted by S.Pam View Post

    Don't assume that the disks' internal csums are of value here. Most drives have them, but reality tells us they don't matter. There are several failures that can happen anyway.

    Some of them are :
    • Not honouring barriers
    • Lost cache on bus, disk or host bus resets
    • Firmware bugs
    • Powersave bugs
    • Bad controllers
    • Bad USB-SATA bridges
    • ...
    Even with enterprise hardware these things happen. Even on enterprise raid controllers these things happen and those controllers don't have methods to handle corrupt data from individual disks.
    Exactly, I caught lots of these problems thanks to btrfs; ext4 servers just died silently instead.



  • darkbasic
    replied
    Originally posted by curfew View Post

    No one in their right mind would enable COW for databases, but of course that's exactly what he will do.
    I do, with Optane and ZFS, but I tune ashift and recordsize accordingly (see the sketch below): http://www.linuxsystems.it/2018/05/o...t4-benchmarks/
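    Not from the linked article, but a minimal sketch of the kind of tuning meant here; the pool name, device path and the 16K recordsize (matching InnoDB's default page size) are illustrative assumptions:

    # Create the pool with 4K-aligned sectors (ashift=12), then give the
    # database dataset a recordsize matching the DB page size so CoW
    # writes stay page-sized. Names and device are illustrative.
    zpool create -o ashift=12 tank /dev/nvme0n1
    zfs create -o recordsize=16K -o atime=off tank/mysql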



  • brent
    replied
    I'm still curious about the CPU overhead of filesystems; you could compare kernel CPU time for that. Filesystems like btrfs have features that should eat up CPU (e.g. checksums), and I'm curious how big that overhead is. It could also make a significant difference on mobile systems like notebooks.
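    One rough way to get at that, assuming fio and GNU time are installed; the job parameters and mount point are illustrative:

    # The "System time" figure is the kernel CPU spent on the I/O path;
    # run the identical job against each filesystem's mount point and
    # compare the numbers.
    /usr/bin/time -v fio --name=seqwrite --directory=/mnt/btrfs \
        --rw=write --bs=1M --size=4G --end_fsync=1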



  • flower
    replied
    Originally posted by S.Pam View Post

    Don't assume that the disks' internal csums are of value here. Most drives have them, but reality tells us they don't matter. There are several failures that can happen anyway.

    Some of them are :
    • Not honouring barriers
    • Lost cache on bus, disk or host bus resets
    • Firmware bugs
    • Powersave bugs
    • Bad controllers
    • Bad USB-SATA bridges
    • ...
    Even with enterprise hardware these things happen. Even on enterprise raid controllers these things happen and those controllers don't have methods to handle corrupt data from individual disks.
    True, but I still see no reason why I should use btrfs when I can get the same integrity level, without the performance impact, with integritysetup (see the sketch below).
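    For reference, a minimal sketch of the standalone dm-integrity setup being described; the device path and mapping name are illustrative, and the external-metadata variant is left out:

    # Format the device with sha256 integrity tags, open it as a mapped
    # device, then put any filesystem on top of the mapping.
    integritysetup format /dev/sdb1 --integrity sha256
    integritysetup open /dev/sdb1 verified
    mkfs.ext4 /dev/mapper/verified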



  • Keats
    replied
    Originally posted by sinepgib View Post
    Is there a way to compare the effect of each filesystem on SSD wear? That would be interesting. Specifically, I'd like to know how much of a negative effect journaling has. It may be really bad or it may be barely noticeable.
    Last I checked, SSD wear is a non-issue on any recent SSDs, unless perhaps you're using the cheapest QLC you could find.
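    One rough way to compare it, assuming an NVMe drive and nvme-cli; the device path is illustrative, and the counter includes all host writes, so each workload must run in isolation:

    # "Data Units Written" counts 512,000-byte units of host writes;
    # sample it before and after the identical benchmark on each
    # filesystem and compare the deltas.
    sudo nvme smart-log /dev/nvme0 | grep -i written
    # ... run the same benchmark on the filesystem under test ...
    sudo nvme smart-log /dev/nvme0 | grep -i written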



  • S.Pam
    replied
    Originally posted by pal666 View Post
    He means the database takes care of data integrity even without any filesystem (on a raw block device). That's the database's job.
    As far as I know, most databases do not keep duplicate data to be able to repair themselves. Even if MariaDB/InnoDB uses an internal crc32, that does not mean it can correct bad data, even though it can detect errors. Repairing databases with DB tools is time-costly and a very different process from having Btrfs self-heal!

    Just to be clear: even with nodatacow/nodatasum you can still use snapshots with Btrfs. You do lose the detection and self-heal features, as well as the guarantee of a correct atomic state with the rest of the underlying filesystem. nodatacow also affects the integrity guarantees of your backups, unless you take specific measures to deal with it.

    So, while it is certainly possible to build applications with internal backup, integrity and healing features, it is usually a lot trickier to manage and to get working with guarantees.

    With VMs, we certainly do not have this possibility on all setups. What if you run guests without support for advanced filesystems?

    IMHO Btrfs gives a sysadmin a really strong, simple way to guarantee integrity, manageability and performance (recovery time), which was previously very hard to achieve across the board.

    Last edited by S.Pam; 28 August 2021, 02:13 AM.
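    For context, the detect-and-repair cycle referred to above is normally just a scrub; a minimal sketch, where the mount point is illustrative and automatic repair requires a redundant profile such as raid1:

    # Reads every block, verifies checksums, and on redundant profiles
    # rewrites corrupted copies from the good one; -B stays in the
    # foreground until the scrub finishes.
    sudo btrfs scrub start -B /mnt/data
    sudo btrfs scrub status /mnt/data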



  • S.Pam
    replied
    Originally posted by flower View Post

    Many HDDs and SSDs use crc32 internally to verify data; as btrfs uses crc32 too, it's pretty useless.
    And there is still integritysetup: if you use an external drive for integrity, it doesn't have ANY performance penalty. I have been using this in a RAID10 setup for quite a while (checksum is sha256).
    Don't assume that the disks' internal csums are of value here. Most drives have them, but reality tells us they don't matter. There are several failures that can happen anyway.

    Some of them are :
    • Not honouring barriers
    • Lost cache on bus, disk or host bus resets
    • Firmware bugs
    • Powersave bugs
    • Bad controllers
    • Bad USB-SATA bridges
    • ...
    Even with enterprise hardware these things happen. Even on enterprise raid controllers these things happen and those controllers don't have methods to handle corrupt data from individual disks.
    Last edited by S.Pam; 28 August 2021, 02:09 AM.



  • curfew
    replied
    Originally posted by coder View Post
    Yes.

    Those of you using BTRFS, try this:

    sudo lsattr -d /var/lib/*

    For me, I get C on mariadb/, mysql/, and pgsql/. Distro: openSUSE. If you don't know what that means, read up on the C (no-CoW) file attribute.
    Directories that are installed as part of a system package can of course be set with +C out of the box. But it's impossible for the package to do that for something the application creates on its own for each user at startup, such as Firefox's profile database files or a music player's database (see the sketch below).
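    It can still be done by hand for such directories; a hedged sketch, where the path is illustrative and the directory must be empty, since +C only affects files created after the flag is set:

    # Stop the application first, set No_COW on the (empty) directory,
    # then let the application recreate its database files inside it.
    mkdir -p ~/.config/exampleapp/db
    chattr +C ~/.config/exampleapp/db
    lsattr -d ~/.config/exampleapp/db    # should now show the 'C' attribute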



  • curfew
    replied
    Originally posted by flower View Post

    Many HDDs and SSDs use crc32 internally to verify data; as btrfs uses crc32 too, it's pretty useless.
    And there is still integritysetup: if you use an external drive for integrity, it doesn't have ANY performance penalty. I have been using this in a RAID10 setup for quite a while (checksum is sha256).
    BTRFS supports a few different checksum algorithms; just a few weeks ago I actually "upgraded" to XXHASH. BTRFS supports your precious SHA256 as well (see the sketch below).
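    For reference, a small sketch of how the algorithm is chosen; the device path is illustrative, and the checksum is normally fixed at mkfs time (in-place conversion needs a recent, still-experimental btrfstune):

    # Supported algorithms include crc32c (default), xxhash, sha256 and blake2.
    mkfs.btrfs --csum xxhash /dev/sdX
    # Check which algorithm an existing filesystem uses:
    btrfs inspect-internal dump-super /dev/sdX | grep csum_type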



  • curfew
    replied
    Originally posted by S.Pam View Post
    You mean people who value data integrity?
    If that's a meaningful parameter for you, then you obviously have only one choice of filesystem, and these benchmarks are trash for you. Or else you care about performance with equal feature sets, and these benchmarks are trash for you too.

