
Linux 5.5 SSD RAID 0/1/5/6/10 Benchmarks Of Btrfs / EXT4 / F2FS / XFS


  • #61
    Originally posted by DrYak
    Normally a scrub should take a couple of hours at most, and it is something that needs to be performed on a regular basis to guarantee data safety.
    (I tend to run it weekly; monthly is about the minimum recommendation.)
    That depends on the amount of data and how fast the machine can read it.

    With 12-14 TB of data on a drive and a machine that manages about 150 MB/s, you get a total runtime of around 24 hours. With the larger drives it's really important to make sure they are connected to controllers that can handle the full transfer speed the disk supports, and even on SATA 600 (6 Gb/s) most drives are limited to 200-250 MB/s of sustained throughput.

    So in the end, for really large drives it often makes sense to split the scrub into multiple runs using cancel/resume instead of doing one huge scrub every x days.
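The runtime figure above is just capacity divided by sustained read throughput; a minimal sketch of that arithmetic (the capacity and throughput values are the ones from the post, not measurements):

```python
# Back-of-the-envelope scrub runtime: capacity / sustained read speed.
# Uses decimal units as drive vendors do (TB = 10**12 bytes, MB = 10**6 bytes).

def scrub_hours(capacity_tb: float, throughput_mb_s: float) -> float:
    """Estimated scrub duration in hours for a single full read pass."""
    seconds = capacity_tb * 1e12 / (throughput_mb_s * 1e6)
    return seconds / 3600

print(round(scrub_hours(13, 150), 1))  # 13 TB at 150 MB/s -> 24.1 hours
```

At 200-250 MB/s the same drive scrubs in roughly 14-18 hours, which is why the controller bottleneck matters.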



    • #62
      Something is wrong with these tests. F2FS is a log-structured file system: it always appends data to the end of the log and writes sequentially. Since there are no random writes, F2FS on RAID 5 should ALWAYS outperform a single-drive setup. That is not the case here, so I suggest checking the testing procedure for bugs and external factors: cache, CPU throttling, or whatever else is spoiling the results. As a first step, run the test twice and check whether the results are reproducible.
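The reproducibility check suggested above can be as simple as comparing two runs against a relative tolerance; a minimal sketch (the throughput numbers are placeholders, not Phoronix results):

```python
# Flag benchmark results that differ between two runs by more than a
# relative tolerance; large run-to-run variance suggests an external
# factor (cache state, thermal throttling, background I/O) is interfering.

def reproducible(run_a: float, run_b: float, tolerance: float = 0.05) -> bool:
    """True if two throughput measurements agree within `tolerance` (relative)."""
    return abs(run_a - run_b) / max(run_a, run_b) <= tolerance

print(reproducible(512.0, 520.0))  # ~1.5% apart -> True
print(reproducible(512.0, 300.0))  # ~41% apart  -> False
```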



      • #63
        @profoundWHALE Even IronWolfs (both Pro and regular) in the 12-16 TB range shipped with buggy firmware that caused corruption in Reed-Solomon based RAIDs. Fixed firmware has since been released, but remember that these are enterprise-grade drives. You might assume it was some rare first revision of the firmware, but it wasn't: I have 7 drives (a mix of 12 and 14 TB, Pros and regulars, from different batches), and half of them had the buggy firmware version on the sticker. Have you considered that the firmware in the HDDs or the controller might be buggy? That's not rare in enterprise gear and even more likely in consumer hardware. Remember the SATA power management issue that caused Btrfs corruption on many boards?
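One way to audit for this is to read the firmware revision each drive reports (e.g. from the identity section `smartctl -i` prints) and compare it against the vendor's fixed version. A minimal parsing sketch; the sample output, model, and version strings below are illustrative, not Seagate's actual advisory values:

```python
# Extract the firmware revision from smartctl-style identity output.
# SAMPLE_SMARTCTL_OUTPUT is a hypothetical example, not captured from a real drive.

SAMPLE_SMARTCTL_OUTPUT = """\
Device Model:     ST12000VN0008-2YS101
Firmware Version: SC60
User Capacity:    12,000,138,625,024 bytes [12.0 TB]
"""

def firmware_version(smartctl_info: str) -> str:
    """Return the value of the 'Firmware Version:' line, or raise if absent."""
    for line in smartctl_info.splitlines():
        if line.startswith("Firmware Version:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("no firmware version line found")

print(firmware_version(SAMPLE_SMARTCTL_OUTPUT))  # SC60
```

Running this per drive before building a parity RAID catches the "buggy firmware on the sticker" case the commenter describes.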
