GRUB Now Supports Btrfs 3/4-Copy RAID1 Profiles (RAID1C3 / RAID1C4 On Linux 5.5+)

  • #11
    Originally posted by Zan Lynx:
    RAID1 is worse for parallel reads because the chunks are not adjacent. If you are using 64 KB chunks and 4 stripes then disk 1 reads [0-64K], [256K-320K], disk 2 [64K-128K], [320K-384K], etc, etc. You are wasting 1/4th of your drive read bandwidth.
    Got it - for traditional RAID you are right, but in BTRFS's case you have to remember that it is a filesystem and volume manager in one package. This means that in theory nothing (except the offsets on disk) stops it from "stripe reading" from a four-copy RAID1 *if* the disks are idle, or from giving priority to other work such as writes and/or other reads, since BTRFS is a filesystem that knows which file it is reading.
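    To make the idea concrete, here is a rough sketch (not actual BTRFS code; the device count just mirrors the four-copy example) of how a filesystem that also owns the volume layer could split one large sequential read across the copies, handing each device an adjacent range instead of every Nth chunk:

    #include <stdio.h>
    #include <stdint.h>

    #define NUM_COPIES 4  /* RAID1C4: every byte exists on 4 devices */

    /* Split one logical read into NUM_COPIES adjacent sub-reads, one per
     * device. Because every device holds a full copy of the data, each
     * device can be given a contiguous slice of the range instead of
     * having to skip over chunks that live on the other devices. */
    static void plan_parallel_read(uint64_t offset, uint64_t length)
    {
        uint64_t part = length / NUM_COPIES;

        for (int dev = 0; dev < NUM_COPIES; dev++) {
            uint64_t start = offset + (uint64_t)dev * part;
            uint64_t len = (dev == NUM_COPIES - 1)
                               ? length - (uint64_t)dev * part
                               : part;
            printf("device %d: read [%llu, %llu)\n", dev,
                   (unsigned long long)start,
                   (unsigned long long)(start + len));
        }
    }

    int main(void)
    {
        plan_parallel_read(0, 1024 * 1024); /* one 1 MiB sequential read */
        return 0;
    }

    This is the opposite of the striped pattern in the quoted example, where each disk has to skip three out of every four 64 KB chunks.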

    It all depends on the workload. For sequential reads RAID10 usually performs well, but with (BTRFS') RAID1c3/4 the filesystem could basically tune itself to boost either random read/write or sequential reads, since it is free to decide what to use each disk for depending on the workload. E.g. it could read more data from one drive and less from another depending on load or even disk speed.
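    A hypothetical per-request policy along those lines (nothing like this exists in BTRFS today; the queue depths and speed factors below are invented purely for illustration) could simply send each read to whichever copy currently looks cheapest:

    #include <stdio.h>

    #define NUM_COPIES 4

    /* Illustrative per-device state: current queue depth and a rough
     * relative speed factor (e.g. an SSD next to HDDs in the same pool). */
    struct mirror {
        int inflight;   /* requests currently queued on this device */
        double speed;   /* higher = faster device */
    };

    /* Pick the copy with the lowest estimated completion cost. */
    static int pick_mirror(const struct mirror m[NUM_COPIES])
    {
        int best = 0;
        double best_cost = (m[0].inflight + 1) / m[0].speed;

        for (int i = 1; i < NUM_COPIES; i++) {
            double cost = (m[i].inflight + 1) / m[i].speed;
            if (cost < best_cost) {
                best_cost = cost;
                best = i;
            }
        }
        return best;
    }

    int main(void)
    {
        struct mirror m[NUM_COPIES] = {
            { .inflight = 3, .speed = 1.0 }, /* busy HDD */
            { .inflight = 0, .speed = 1.0 }, /* idle HDD */
            { .inflight = 2, .speed = 4.0 }, /* moderately busy SSD */
            { .inflight = 8, .speed = 4.0 }, /* saturated SSD */
        };

        printf("send next read to copy %d\n", pick_mirror(m));
        return 0;
    }

    In this made-up example the moderately busy SSD wins, which matches the "read more from the faster drive" intuition.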

    Of course BTRFS is not that advanced (yet), but because it is not block-layer RAID it has endless possibilities, at least theoretically. Whether it will ever get features like this in practice remains to be seen. The same should (or could possibly) be true for similar filesystems such as bcachefs or ZFS.

    Yes, I understand the problem with adjacent blocks, but again, BTRFS is in a position where it is free to choose an optimization strategy based on what it knows. And if I am not mistaken, most disks use address translation that does not necessarily map logical addresses to physical addresses in order, so the chunks are probably not physically adjacent anyway, which would introduce a potential small seek delay individually for each disk in the pool.



    http://www.dirtcellar.net
