Linux 5.5 SSD RAID 0/1/5/6/10 Benchmarks Of Btrfs / EXT4 / F2FS / XFS


  • CochainComplex
    replied
    Originally posted by Paradigm Shifter View Post
    Ah, it's using a PERC controller. I relearned an important lesson recently: want RAID? Buy a dedicated card.

    I was attempting to experiment with RAID (on Linux, should be easy, right?) with a consumer X470 board. In the end I gave up. The "on board" RAID was terrible (and AMD appear to have removed their Linux drivers), so I tried software RAID... which was OK until every reboot, when the array would fall apart and need to be rebuilt.
    Well, it depends on the reliability you want to achieve. As you mentioned, just put slightly differently: consumer-grade HW RAID controllers are not always reliable, and I also don't know whether rebuilding or accessing your data is straightforward once the HW controller itself breaks. In such a case I would always prefer software over HW.

    Concerning btrfs, I would use the filesystem's built-in "raid" profiles (if you want a simple stripe or mirror). That way you stay on the software side, and given btrfs's structure you don't want an additional layer screwing around underneath its config.
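
    For illustration only, here is a minimal sketch (in Python, wrapping the usual command-line tools) of what that built-in mirroring looks like; the device names and mountpoint are hypothetical placeholders, and mkfs will of course wipe whatever is on them.

    Code:
    import subprocess

    # Hypothetical member devices and mountpoint -- adjust before trying anything like this.
    DEVICES = ["/dev/sdb", "/dev/sdc"]
    MOUNTPOINT = "/mnt/storage"

    def run(cmd):
        """Echo and run a command, raising if it fails."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # btrfs's own raid1 profile for both metadata (-m) and data (-d):
    # no md layer or hardware controller underneath, just the filesystem.
    run(["mkfs.btrfs", "-f", "-m", "raid1", "-d", "raid1", *DEVICES])

    # Mounting any one member device brings up the whole multi-device filesystem.
    run(["mount", DEVICES[0], MOUNTPOINT])

    # Show how data/metadata/system chunks are spread across the devices.
    run(["btrfs", "filesystem", "usage", MOUNTPOINT])

    A periodic "btrfs scrub start" on the mountpoint is then what actually verifies both copies against their checksums.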

  • oiaohm
    replied
    Originally posted by xinorom View Post
    XFS isn't a CoW filesystem. A filesystem that lacks certain features (that imply bookkeeping overheads) is necessarily going to be faster, or at the very least easier to optimize. I don't know why so many people fail to understand that comparing the performance of a CoW filesystem to a non-CoW filesystem is not an apples-to-apples comparison.
    Your claim that "XFS isn't a CoW filesystem" is wrong. XFS is a part CoW file system; look at the reflink support in XFS to see that.


    You do have copy-on-write behaviour inside XFS; it just isn't used all the time. Being a part CoW file system means XFS can drop back to direct writes, protected by the journal, whenever an extent is not shared.
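
    As a concrete illustration of that reflink path (my own sketch, not something from the article): on XFS formatted with reflink support, or on btrfs, the clone ioctl shares extents between two files, and only the blocks you later overwrite get copied.

    Code:
    import fcntl
    import os

    # FICLONE ioctl number from <linux/fs.h>: share all of src's extents with dst.
    FICLONE = 0x40049409

    # Hypothetical paths; both files must live on the same reflink-capable
    # filesystem (XFS with reflink=1, or btrfs).
    SRC = "original.img"
    DST = "clone.img"

    # Create a source file with some real data in it.
    with open(SRC, "wb") as f:
        f.write(os.urandom(1 << 20))

    with open(SRC, "rb") as src, open(DST, "wb") as dst:
        # The clone is effectively instant and allocates no new data blocks:
        # both files now reference the same extents on disk.
        fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())

    # Overwriting part of the clone copies only the touched extents (CoW);
    # files with no shared extents keep getting ordinary in-place,
    # journal-protected writes.
    with open(DST, "r+b") as dst:
        dst.write(b"modified block")

    This is the same mechanism cp --reflink=always uses, and it is the "part CoW" behaviour being described: CoW only where extents are actually shared.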

    Ext4 is currently not a CoW file system in any form, but the old ext3cow project suggests that someone could turn ext4 into another part CoW file system.

    Btrfs and ZFS are both full CoW file systems, which means they cannot skip the CoW overhead.

    Basically you have three types of file systems.

    Traditional: no copy-on-write functionality at all.
    Part CoW: able to use copy-on-write selectively when there is a preset reason to do so, e.g. XFS reflinks.
    Full CoW: all operations are done copy-on-write.

    The performance difference between a traditional and a part CoW file system is very small, and the feature-list difference between a part CoW and a full CoW file system can also get very small.

    The big thing a part CoW file system will always be missing is transparent snapshot creation. A part CoW file system has to be directed to create a snapshot, so you have to snapshot before a modification in order to record what that modification was; with full CoW you can snapshot after the modification, depending on how much back history the full CoW file system keeps thanks to that transparent creation. In a lot of cases, trading this feature away for speed is not going to be a problem; snapshots taken mid-modification are more often useless than useful.

    So there is a really hard question: do you really want a full CoW file system, or do you really want a well-designed part CoW file system?
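
    To make that snapshot distinction concrete, here is a small sketch of the "directed" style of snapshotting btrfs exposes; the subvolume paths are hypothetical and this is only my illustration.

    Code:
    import subprocess

    # Hypothetical btrfs subvolume holding the data we care about, plus a
    # destination for read-only snapshots (the .snapshots directory must exist).
    SUBVOLUME = "/mnt/storage/data"
    SNAPSHOT = "/mnt/storage/.snapshots/data-before-upgrade"

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # The snapshot has to be requested explicitly, and it has to happen *before*
    # the change you may want to roll back -- the "directed" snapshotting that
    # a part CoW file system could offer just as well.
    run(["btrfs", "subvolume", "snapshot", "-r", SUBVOLUME, SNAPSHOT])

    # ... risky modification of the data happens here ...

    # Rolling back is just taking a writable snapshot of the read-only one.
    run(["btrfs", "subvolume", "snapshot", SNAPSHOT, SUBVOLUME + "-rolled-back"])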


  • GreenReaper
    replied
    Nah, it's fair. I lost six hours of my users' new content to the btrfs committed-transaction-without-writeback bug in 5.2. The only reason it wasn't more is that the server's memory filled up with new data in that time and it finally froze writes. And the only reason that data wasn't all lost completely is that some of it had been served from memory to our caches - which use ext4 and mdadm - and they had stored it safely.

    Sure, other filesystems have bugs. But this was a doozy, and it happened just a few kernel revisions ago. Then there was that poor combination of btrfs send and delayed allocation, which could lead to it sending no data for inodes it hadn't written out yet, quietly corrupting snapshots. And neither of those is a new feature, nor was the bug itself in new code - it had existed since btrfs send was merged.

    Btrfs can do a lot. Unfortunately this also means it has a lot of bugs, especially when one component interacts unfavourably with another.
    Last edited by GreenReaper; 28 January 2020, 12:36 AM.

  • xinorom
    replied
    Originally posted by profoundWHALE View Post
    All I know is that I cannot trust btrfs with my data.
    I can trust XFS, and ZFS, but not btrfs.
    Obvious troll is obvious. You can do better than that...

  • profoundWHALE
    replied
    All I know is that I cannot trust btrfs with my data.

    I can trust XFS, and ZFS, but not btrfs.

  • intelfx
    replied
    But really, there's something wrong with the application startup time benchmarks.

  • intelfx
    replied
    Originally posted by GreenReaper View Post
    Recommended by the way it's been used by Facebook and Synology: as a checksumming and snapshot layer on top of the block storage (including mdadm RAID in Synology's case). The btrfs project itself does not see RAID56 mode as stable.
    Using btrfs on top of traditional RAID is braindead — you lose integration between RAID and checksumming, which means this combination won't protect you from bitrot.

    It is a workaround for the write hole, yes, but in no way should this be "recommended".
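
    A quick way to see the difference being described (my own sketch; the mountpoints are hypothetical): scrub a btrfs that holds only a single copy of the data, e.g. one sitting on top of an md device, and compare with a btrfs-native raid1.

    Code:
    import subprocess

    def scrub(mountpoint):
        # Run a scrub in the foreground, then print per-device error counters.
        subprocess.run(["btrfs", "scrub", "start", "-B", mountpoint], check=True)
        subprocess.run(["btrfs", "device", "stats", mountpoint], check=True)

    # btrfs on top of mdadm: btrfs sees one logical device, so a checksum
    # mismatch is detected but there is no second copy to repair it from.
    scrub("/mnt/btrfs-on-md")

    # btrfs-native raid1: the same mismatch is detected *and* rewritten from
    # the good mirror -- the self-healing that the layered setup gives up.
    scrub("/mnt/btrfs-raid1")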
    Last edited by intelfx; 27 January 2020, 11:38 PM.

  • Paradigm Shifter
    replied
    Ah, it's using a PERC controller. I relearned an important lesson recently: want RAID? Buy a dedicated card.

    I was attempting to experiment with RAID (on Linux, should be easy, right?) with a consumer X470 board. In the end I gave up. The "on board" RAID was terrible (and AMD appear to have removed their Linux drivers), so I tried software RAID... which was OK until every reboot, when the array would fall apart and need to be rebuilt.
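
    For what it's worth, the usual reason an md array "falls apart" on reboot is that it was never recorded in mdadm.conf and the initramfs, so it gets reassembled under a foreign name or not at all. A hedged sketch of the persistence step, assuming Debian-style paths:

    Code:
    import subprocess

    # Debian/Ubuntu keep the config here; many other distros use /etc/mdadm.conf.
    MDADM_CONF = "/etc/mdadm/mdadm.conf"

    # Ask mdadm to describe every currently assembled array as ARRAY lines.
    scan = subprocess.run(
        ["mdadm", "--detail", "--scan"],
        check=True, capture_output=True, text=True,
    ).stdout

    # Append those ARRAY lines (needs root) so the array comes back up under
    # the same name on every boot instead of degrading or disappearing.
    with open(MDADM_CONF, "a") as conf:
        conf.write(scan)

    # On Debian-style systems the initramfs also needs to know about the array.
    subprocess.run(["update-initramfs", "-u"], check=True)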

  • GreenReaper
    replied
    Originally posted by xinorom View Post

    Loss of capacity as compared to what? The usual solution to the RAID5 write hole seems to be adding an extra drive for journaling, which also entails a "loss of capacity".

    Drives are cheap and RAID10 is usually the most bullshit-free approach.
    As compared to RAID10. I can deal with one or two files dying, just not the whole partition. So I plan to go raid1 for metadata and raid5 for data. I could even use raid1c3/4 now that those are implemented, but it wouldn't make all that much sense in combination with raid5, only with raid6.
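
    For a rough sense of the capacity trade-off being weighed here (my own back-of-the-envelope numbers, ignoring metadata overhead and btrfs chunk-level details):

    Code:
    # Usable capacity for n equal drives of size_tb each, by layout.
    def usable(n_drives, size_tb, layout):
        if layout == "raid10":      # mirrored pairs, striped
            return (n_drives // 2) * size_tb
        if layout == "raid5":       # one drive's worth of parity
            return (n_drives - 1) * size_tb
        if layout == "raid6":       # two drives' worth of parity
            return (n_drives - 2) * size_tb
        raise ValueError(layout)

    for layout in ("raid10", "raid5", "raid6"):
        print(layout, usable(6, 4, layout), "TB usable from 6 x 4 TB drives")
    # raid10 -> 12 TB, raid5 -> 20 TB, raid6 -> 16 TB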

  • xinorom
    replied
    Originally posted by GreenReaper View Post
    We could go RAID10, but the loss of capacity and read performance (due to being further along the HDD performance curve) would be significant.
    Loss of capacity as compared to what? The usual solution to the RAID5 write hole seems to be adding an extra drive for journaling, which also entails a "loss of capacity".

    Drives are cheap and RAID10 is usually the most bullshit-free approach.
    Last edited by xinorom; 27 January 2020, 07:28 PM.
