Linux 4.4 To 4.7 - EXT4 vs. F2FS vs. Btrfs Benchmarks


  • #21
    Originally posted by leonmaxx View Post
    Btrfs's random write speed is better than its sequential write speed.
    Something is wrong in this life.
    Btrfs never actually does sequential writes, by design.
    It's a CoW (copy-on-write) filesystem: it never overwrites data blocks in place, it always writes a new separate block and updates the pointers to the new block (and eventually garbage-collects the old block).
    All the features (cheap snapshotting, etc.) are built around this.
    But it fragments the filesystem.
    (This happens especially with big, frequently overwritten files: VM images and databases.
    ZFS can autodetect this and disable CoW;
    Btrfs can manually disable CoW with chattr +C.)

    The interesting information here is that F2FS (which is also non-overwriting; it's a log-structured FS) manages to keep its performance up.
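
    For reference, a minimal sketch of the chattr +C route mentioned above (the path is just an example): the attribute only affects files created after it is set, so it's best applied to an empty directory that will hold the VM images or database files:
      mkdir -p /var/lib/libvirt/images    # example path, adjust to taste
      chattr +C /var/lib/libvirt/images   # new files created inside inherit the No_COW attribute
      lsattr -d /var/lib/libvirt/images   # the 'C' flag should now be listed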

    Comment


    • #22
      Originally posted by starshipeleven View Post
      Btrfs's RAID1 is more similar to classic RAID5 than to classic RAID1, even though it is neither.
      To be precise, according to the official docs, btrfs's raid1 always keeps exactly 2 copies of all data, on any 2 different block devices among all those in the pool.

      So 3 devices with mdadm RAID1: 3 copies of the data, one on each drive; you can lose 2 drives. You get 1x the size of a single drive.
      And 3 devices with btrfs raid1: 2 copies of the data scattered across the drives; you can lose 1 drive. You get roughly 1.5x the size of a single drive (more or less, depending on whether all the drives have the same size).

      (With btrfs, you can have 1 GB + 512 MB + 512 MB, and raid1 will work correctly: btrfs is supposed to balance the free space, so it always puts one copy on the 1 GB drive and the second copy on whichever of the two 512 MB drives.)
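
      As a rough sketch (the device names are made up), this is how such a 3-device raid1 pool would be created, and how to check how btrfs spreads the two copies:
        mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd   # data and metadata both raid1 (2 copies)
        mount /dev/sdb /mnt
        btrfs filesystem usage /mnt     # shows raw vs. usable space per profile and per device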

      Comment


      • #23
        Continuing with this:
        Raid0, raid5 and raid6 on mdadm: each stripe is always as wide as the number of drives.
        3-drive raid0: data split in 3, please never try losing a drive (total space is n drives)
        3-drive raid5: data split in 2 + 1 parity, you can lose one drive (total space is n-1 drives)
        3-drive raid6: data split in 1 + 2 parities, so you can lose 2 drives (total space is n-2 drives)

        Raid0, 5 and 6 on btrfs: according to the docs, for now it's not possible to configure the stripe width; it's always going to be "as wide as the number of block devices available at the current moment".
        If you have 3 drives of the exact same size, it works more or less like mdadm.
        If you have 3 drives of different sizes and use raid0:
        - the first files will be split in three until the smallest drive fills up
        - the next files will be split in two until the next drive fills up
        - the last files will be written as-is (stripe width 1) on the last drive with free space

        There are plans to make the RAID stripe width configurable:
        either to ask for more than 2 copies in raid1 when there are more drives,
        or to ask for a stripe width of 2 (files split in halves) in raid0/5/6 even when there are 3+ drives (which would enable balanced filling of mismatched drive sizes, like my 1 GB + 2x512 MB example).
        The docs don't give an ETA.

        Also note that, unlike ZFS's raid-z1/z2 and unlike btrfs raid1, if a checksum fails on btrfs raid5/6 (i.e. data corruption rather than a lost drive), btrfs isn't able to use the parity to work out which drive is corrupted and rebuild a checksum-passing file. It's on their todo list.
        The docs don't give an ETA for this feature either.

        Raid1 does work with checksumming on btrfs: if a checksum fails, btrfs can tell which raid1 copy is genuine and which is corrupted. (The write hole is closed there.)
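
        For what it's worth, a rough sketch of picking those profiles (device names and mount point are made up): the profile is chosen at mkfs time, and an existing filesystem can be converted in place with a balance:
          mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd   # raid5 for data, raid1 for metadata
          btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt  # or convert an existing filesystem
          btrfs device stats /mnt      # per-device read/write/corruption error counters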

        Comment


        • #24
          Hi Michael, can you include BcacheFS in the next benchmark? I'm very interested in the results of that FS, thanks.

          Comment


          • #25
            Originally posted by oleid View Post
            When doing the RAID test, please include btrfs in single disk mode on mdraid. I found this combination more reliable than using btrfs' internal mirroring.
            He should seriously test btrfs's scrub. It is horribly slow; that's where all the fail in btrfs lies.
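
            For anyone who wants to measure that themselves, a quick sketch (the mount point is an example): start a scrub and watch the reported rate and runtime:
              btrfs scrub start /mnt    # runs in the background by default
              btrfs scrub status /mnt   # shows bytes scrubbed so far, the rate, and any errors found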

            Comment


            • #26
              btrfs is always the worst, and there have been a lot of regressions since kernel 4.4

              Comment


              • #27
                Originally posted by ObiWan View Post
                On UEFI with systemd-boot or rEFInd it's as easy as with any FS (since /boot is the FAT32 EFI partition)

                Only with GRUB are there problems with an F2FS /boot, as long as GRUB doesn't support it out of the box
                Neither systemd-boot nor rEFInd supports F2FS directly; they work only because there is a FAT32 EFI partition. But if an additional partition is an option, even GRUB works well!

                Comment


                • #28
                  Originally posted by kreijack View Post
                  Nor "systemd-boot", nor "refind" support directly F2FS; they work only because there is a FAT32 EFI partition. But if an additional partition is an option, even grub works well !
                  Yep, the issue is lack of native efi F2FS driver to use in Grub/refind/whatever. Once someone makes one, it will be shared by all.

                  Comment


                  • #29
                    I would like to see how XFS compares to the rest in this test.

                    Comment


                    • #30
                      Originally posted by DrYak View Post

                      Btrfs can manually disable CoW with chattr +C
                      or at mount time, via the fstab entry.
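
                      For completeness, a minimal sketch of the fstab route (UUID and mount point are placeholders); the btrfs mount option in question is nodatacow, which disables CoW for everything under that mount (and, as a side effect, data checksumming and compression for it too):
                        # example /etc/fstab entry
                        UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /var/lib/mysql  btrfs  nodatacow,noatime  0  0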

                      Comment
