Linux 5.5 SSD RAID 0/1/5/6/10 Benchmarks Of Btrfs / EXT4 / F2FS / XFS


  • #11
    Originally posted by GreenReaper View Post
    It's ironic that it's in RAID56, where btrfs has received so much criticism, that it is competitive. (Fortunately this is also the area that I want to use it in next.)
    The solution to the RAID5 write hole is to just not use RAID5. The alternative fixes only make it even less efficient and less performant than it already is, and RAID5 is fairly shit to begin with. This issue is not unique to Btrfs.

    Originally posted by GreenReaper View Post
    The performance in single mode is disappointing ...
    The performance of these particular tests may be disappointing to some, but testing a CoW filesystem against three much simpler, less-featured filesystems, and then not running a single test of the features afforded by CoW semantics (e.g. reflink copies), seems kind of biased. Of course, if all you care about is braindead throughput benchmarks, you don't really even need a CoW filesystem.
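
    For reference, this is the kind of test that's missing: a reflink copy shares extents with the source instead of duplicating data, so it completes near-instantly regardless of file size (illustrative command, not something the article ran):

        # Requires a filesystem with reflink support (e.g. btrfs).
        cp --reflink=always big-image.raw big-image-clone.raw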

    Originally posted by GreenReaper View Post
    ... given that this is the usage of it that is most-recommended.
    Recommended by whom? Most people who want a CoW filesystem most likely also want some kind of RAID.
    Last edited by xinorom; 27 January 2020, 06:43 PM.

    • #12
      Recommended by the way it's been used by Facebook and Synology - as a checksumming and snapshot layer on top of the block storage (including mdadm RAID in Synology's case). The btrfs project itself does not consider RAID56 mode stable.
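
      A minimal sketch of that layering, with hypothetical device names (md provides the redundancy, btrfs sits on top for checksums and snapshots):

          # Illustrative only: RAID6 via md, btrfs above it.
          mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
          # dup metadata gives btrfs a second copy to repair metadata from;
          # data stays single because md already provides the redundancy.
          mkfs.btrfs -m dup -d single /dev/md0
          mount /dev/md0 /srv/media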

      In our case we expect to use it to store original media files, so the checksumming is important to us (in theory, that should be dealt with at the disk layer, but in practice...). As the data will essentially be written once and then read many times, some of the performance issues in the benchmarks here do not apply.

      We could go RAID10, but the loss of capacity and read performance (due to being further along the HDD performance curve) would be significant.
      Last edited by GreenReaper; 27 January 2020, 06:53 PM.

      • #13
        This is an apples-to-oranges comparison: Btrfs is running with a different scheduler and a different RAID system than all the other filesystems (mq-deadline vs. none for the scheduler, native vs. md for RAID). The different RAID system is understandable (although they're not as equivalent as the names imply, and it would be interesting to compare native vs. md on btrfs), but the different scheduler is perplexing.
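
        For what it's worth, the scheduler is set per block device and is easy to check or change through sysfs (device name illustrative):

            # The bracketed entry is the active scheduler, e.g. "[mq-deadline] none".
            cat /sys/block/sda/queue/scheduler
            # Switching requires root.
            echo none > /sys/block/sda/queue/scheduler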

        • #14
          Originally posted by GreenReaper View Post
          We could go RAID10, but the loss of capacity and read performance (due to being further along the HDD performance curve) would be significant.
          Loss of capacity as compared to what? The usual solution to the RAID5 write hole seems to be adding an extra drive for journaling, which also entails a "loss of capacity".
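
          With md, that journal looks something like this (a sketch with hypothetical devices; the journal device is dedicated entirely to closing the write hole):

              mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e] \
                    --write-journal /dev/nvme0n1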

          Drives are cheap and RAID10 is usually the most bullshit-free approach.
          Last edited by xinorom; 27 January 2020, 07:28 PM.

          • #15
            Originally posted by xinorom View Post

            Loss of capacity as compared to what? The usual solution to the RAID5 write hole seems to be adding an extra drive for journaling, which also entails a "loss of capacity".

            Drives are cheap and RAID10 is usually the most bullshit-free approach.
            As compared to RAID10. I can deal with one or two files dying, just not the whole partition. So I plan to go raid1 for metadata and raid5 for data. I could even use raid1c3/4 now that they've been implemented, but they wouldn't make all that much sense in combination with raid5, just with raid6.
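
            In mkfs.btrfs terms that plan is just the following (illustrative devices; raid1c3/raid1c4 need kernel 5.5+):

                # raid1 metadata, raid5 data across four devices.
                mkfs.btrfs -m raid1 -d raid5 /dev/sd[b-e]
                # With raid6 data, raid1c3 metadata would match its two-disk failure tolerance:
                # mkfs.btrfs -m raid1c3 -d raid6 /dev/sd[b-e]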

            • #16
              Ah, it's using a PERC controller. I relearned an important lesson recently: want RAID? Buy a dedicated card.

              I was attempting to experiment with RAID (on Linux, should be easy, right?) with a consumer X470 board. In the end I gave up. The on-board RAID was terrible (and AMD appear to have removed their Linux drivers), so I tried software RAID... which was OK until every reboot, when the array would fall apart and need to be rebuilt.
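
              In hindsight, the usual cause of md arrays falling apart on reboot is the array never being recorded in mdadm.conf; the standard fix (paths vary by distro):

                  mdadm --detail --scan >> /etc/mdadm/mdadm.conf
                  update-initramfs -u    # Debian/Ubuntu; "dracut -f" on Fedora and friends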

              • #17
                Originally posted by GreenReaper View Post
                Recommended by the way it's been used by Facebook and Synology - as a checksumming and snapshot layer on top of the block storage (including mdadm RAID in Synology's case). The btrfs project itself does not consider RAID56 mode stable.
                Using btrfs on top of traditional RAID is braindead: you lose the integration between RAID and checksumming, which means the combination won't protect you from bitrot.

                It is a workaround for the write hole, yes, but in no way should this be "recommended".
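
                The difference shows up in scrub: with btrfs-native RAID a checksum mismatch gets repaired from a good copy, whereas on top of md, btrfs can only report it (mount point illustrative):

                    # -B runs the scrub in the foreground.
                    btrfs scrub start -B /mnt/data
                    btrfs scrub status /mnt/data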
                Last edited by intelfx; 27 January 2020, 11:38 PM.

                • #18
                  But really, there's something wrong with the application startup time benchmarks.

                  • #19
                    All I know is that I cannot trust btrfs with my data.

                    I can trust XFS, and ZFS, but not btrfs.

                    • #20
                      Originally posted by profoundWHALE View Post
                      All I know is that I cannot trust btrfs with my data.
                      I can trust XFS, and ZFS, but not btrfs.
                      Obvious troll is obvious. You can do better than that...
