4-Disk Btrfs Native RAID Performance On Linux 4.10


  • Zucca
    replied
    Originally posted by pal666 View Post
    Heavy caching is done by the page cache; ZFS uses much more RAM due to its obsolete design.
    Hm. Obsolete? Which "branch" of ZFS? OpenZFS, Oracle ZFS? (Are there more?)
    I also wonder how much performance gain/loss (if any) there is between those.

    I personally have been using btrfs only, as it seems quite user-friendly AND btrfs does not care a bit about different disk sizes or how many disks you give it.
    For example, I used a 5x SSD setup a while ago. I think there were three differently sized disks. Btrfs managed to utilize around 90% of the space in RAID1 (and briefly in RAID5 and RAID6 too). I've heard that ZFS, on the other hand, is more picky (enterprise users don't really care about that)... but it's still flexible compared to "regular RAID".
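    A setup like that can be reproduced roughly as follows (a sketch with placeholder device names; metadata and data both use the raid1 profile):

        mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc /dev/sdd   # mixed-size members are fine
        mount /dev/sdb /mnt
        btrfs filesystem usage /mnt   # shows how much of the mixed-size capacity is usable with the raid1 profile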



  • stiiixy
    replied
    As far as I am aware, ZFS on Linux was still a FUSE-based implementation, and would therefore not yield proper results compared to a native Solaris or at least a BSD system.

    And why has no one here mentioned the one critical point about RAID1? It's only as fast as the slowest drive. You can have 100 drives, but it still only reads at that slowest drive's speed. You are also limited to the smallest drive's capacity. But you'd have redundancy across 100 drives. You only start to get speed from RAID1 if you pair it with RAID0, and that speed scales with the number of RAID0 stripes, but it also increases the risk of lost data. Let's say we turned the 100 drives into RAID10, with 50 drives in one RAID1 and the next fifty in another RAID1; then we can put them into a RAID0 array for twice the speed. Similarly with four arrays of 25 drives.
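    For reference, that nesting looks roughly like this with mdadm (a sketch with hypothetical device names; md also offers a built-in level 10 that does the same thing in one step):

        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc   # first mirrored pair
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd /dev/sde   # second mirrored pair
        mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1   # stripe the two mirrors -> RAID10
        # or in one step: mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]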

    Going by my testing, the benefits of BTRFS simply weren't there compared to md on spinning rust (no SSDs). It was slow in RAID10, RAID5 was slower and RAID6 was abysmal. Native/md was far superior, with a choice of FS to boot, giving both speed and redundancy. If it takes less time on my drive to do the same thing, then I've won, as there's less wear and tear on the system's resources. Since the BTRFS RAID5/6 storage issue popped up, I've had to stay well clear, as we're using several drive arrays for primary storage and I didn't feel like risking losing any of them and testing all the archived data all the time to be sure things worked.

    And I want BTRFS to be the FS it promised to be. Just not yet for my needs.



  • jacob
    replied
    Originally posted by starshipeleven View Post
    Can you clarify? What do you mean?
    Let's say the FS wants to transfer a number of logically contiguous blocks (an extent, for example). Normally that would occur as a single DMA operation, in burst mode. But if the physical blocks are scattered around, would that affect the transfer speed and/or the maximum number of blocks transferred per request?
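    One rough way to see how scattered the physical blocks behind a logically contiguous file actually are is filefrag (the path is just a placeholder):

        filefrag -v /path/to/file   # lists each extent with its logical offset, physical offset and length

    A file reported as one extent can go out as a single large request, while a heavily fragmented one ends up as many smaller ones, which is roughly the effect the question is about.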



  • starshipeleven
    replied
    Originally posted by jacob View Post
    That makes sense, thanks. Does it affect burst transfer performance though?
    Can you clarify? What do you mean?



  • starshipeleven
    replied
    Originally posted by Zucca View Post
    What I've heard is the opposite, but that was a while ago. At least ZFS uses much more RAM, thus giving the impression of heavy caching... that could yield faster performance in certain situations.

    As there seems to be contradicting information among us, I'd really like to see a ZFS vs. btrfs comparison with memory usage. Also with and without a cache disk (some fast NVMe storage).
    Afaik ZFS tanks hard without a decent amount of RAM cache or an SSD used for cache. With RAIDZ-1 (RAID5 equivalent) or better, of course.
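    For anyone benchmarking that, a RAIDZ-1 pool with an NVMe L2ARC and a capped ARC can be thrown together roughly like this (a sketch with placeholder device names; zfs_arc_max is the ZFS on Linux module parameter for the ARC ceiling):

        zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde   # RAID5-equivalent pool
        zpool add tank cache /dev/nvme0n1                              # fast device used as L2ARC read cache
        echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf   # cap the ARC at 4 GiB (example value)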

    But yeah, I would like to see some fair ZFS vs. btrfs comparisons.



  • jacob
    replied
    Originally posted by starshipeleven View Post
    Nope, SSDs (and even SD cards nowadays) have wear-leveling, so while for the block layer it is the same block, for the actual flash cell it's not.
    That makes sense, thanks. Does it affect burst transfer performance though?



  • jacob
    replied
    Originally posted by pal666 View Post
    lol, how can you tell that without comparisons? ZFS offers some of the features btrfs offers without existing for Linux
    sudo apt install zfs



  • pal666
    replied
    Originally posted by Zucca View Post
    At least ZFS uses much more RAM, thus giving the impression of heavy caching...
    Heavy caching is done by the page cache; ZFS uses much more RAM due to its obsolete design.
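    The difference is easy to see on a running box: btrfs caching shows up as ordinary page cache, while ZFS on Linux accounts its ARC separately (a sketch, with paths as exposed by ZFS on Linux):

        free -h   # the "buff/cache" figure is the page cache used by btrfs and other in-kernel filesystems
        grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats   # current ARC size and its cap, held outside the page cache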



  • pal666
    replied
    Originally posted by Spacefish View Post
    ZFS could be a nice comparison too, as it offers most of the features btrfs offers without being that slow!
    lol, how can you tell that without comparisons? ZFS offers some of the features btrfs offers without existing for Linux



  • pal666
    replied
    Originally posted by AsuMagic View Post
    RAID1 is data redundancy, RAID0 is data interlacing meant for performance.
    I know what they are; my question still stands. You can read from both drives of a RAID1 at the same time.
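    A quick way to check is to start two readers at different offsets of an md RAID1 and watch iostat; the read balancer should spread them across both members (a sketch with a placeholder device name, and the exact behaviour depends on the kernel's balancing heuristics):

        dd if=/dev/md0 of=/dev/null bs=1M count=4096 &             # first sequential reader
        dd if=/dev/md0 of=/dev/null bs=1M count=4096 skip=4096 &   # second reader, starting 4 GiB further in
        iostat -x 1                                                # both member disks should show read traffic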

