Btrfs With Linux 5.10 Brings Some Sizable FSync Performance Improvements


  • F.Ultra
    replied
    Originally posted by waxhead View Post

    Are you using space_cache v2? I was under the impression that you just had to clear the v1 space cache and then enable the v2 cache, but this is not the case. It was a rather confusing and complex discussion on IRC a month or two back, but all I got out of it was that simply switching space caches was not that easy after all.

    Depending on how many storage devices you use and what kind of HBAs you use, I would suggest rebalancing data to raid10. If I remember correctly there was a patch posted a while ago (that I think was merged) that allowed btrfs' raid10 to potentially handle losing more than one drive. If that is true you **may** have a slightly better chance of surviving two dropped devices if you are both unlucky and lucky at once. Of course you would need your metadata to be in raid10 or raid1c3 or raid1c4 to benefit from that.

    And just a quick heads up to everybody - BTRFS RAID terminology is not really RAID in the classical sense. Close enough, yes, but still quite different.
    No, I'm not using space_cache (unless it's on by default); by cold cache I meant the Linux buffer cache. Strangely enough, ls was fast today, 24h later, even though files have been added to the directories, but I guess the Linux VFS simply cached that as well; the machine has 64GB of free RAM, after all. I have 24 SAS drives in that setup with an LSI 9207-8i as the HBA.



  • F.Ultra
    replied
    Originally posted by piorunz View Post

    110TB with RAID1? Wouldn't you be better off with other RAID configuration than RAID1?
    BTRFS Raid1 is disconnected from classical Raid1; in BTRFS it just means that you have a duplicate of each COW block.
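
    To illustrate what that means for capacity: since every chunk gets exactly two copies on two different devices (regardless of how many devices are in the array), usable space on mixed-size drives can be approximated as below. This is a simplified model, not the allocator's exact chunk-by-chunk behavior, and the function name is mine:

    ```python
    # Rough usable-capacity model for btrfs "raid1" (two copies of every
    # chunk, each on a different device). A simplified approximation.

    def btrfs_raid1_usable(drive_sizes_tb):
        total = sum(drive_sizes_tb)
        largest = max(drive_sizes_tb)
        # Every chunk needs a mirror on another device, so at most half
        # the total is usable, and the largest drive can only mirror
        # against the combined capacity of the others.
        return min(total / 2, total - largest)

    print(btrfs_raid1_usable([10, 10]))    # 10.0 - a classic mirror pair
    print(btrfs_raid1_usable([10, 4, 4]))  # 8.0  - big drive limited by the rest
    ```

    Unlike mdadm RAID1, adding a third drive does not add a third copy; it just adds more room for the two-copy chunks to land in.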



  • waxhead
    replied
    Originally posted by F.Ultra View Post
    My only problem with BTRFS at the moment is that directories that contain more than a few thousand files take 10-20s to list from a cold cache (this is on a BTRFS Raid1 system with 110TB, so it could be a case-specific problem).
    Are you using space_cache v2? I was under the impression that you just had to clear the v1 space cache and then enable the v2 cache, but this is not the case. It was a rather confusing and complex discussion on IRC a month or two back, but all I got out of it was that simply switching space caches was not that easy after all.

    Depending on how many storage devices you use and what kind of HBAs you use, I would suggest rebalancing data to raid10. If I remember correctly there was a patch posted a while ago (that I think was merged) that allowed btrfs' raid10 to potentially handle losing more than one drive. If that is true you **may** have a slightly better chance of surviving two dropped devices if you are both unlucky and lucky at once. Of course you would need your metadata to be in raid10 or raid1c3 or raid1c4 to benefit from that.

    And just a quick heads up to everybody - BTRFS RAID terminology is not really RAID in the classical sense. Close enough, yes, but still quite different.
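
    For anyone who wants to try either of the two things above, a rough sketch of the commands, assuming the filesystem lives on /dev/sdb and mounts at /mnt (adjust for your setup, and take a backup first):

    ```shell
    # Switch the free-space cache from v1 to v2: clear the old v1 cache
    # while the filesystem is unmounted, then mount once with
    # space_cache=v2 to build the new free-space tree.
    umount /mnt
    btrfs check --clear-space-cache v1 /dev/sdb
    mount -o space_cache=v2 /dev/sdb /mnt

    # Rebalance data to raid10 and metadata to raid1c3 (raid1c3/raid1c4
    # require kernel 5.5+). This rewrites every chunk, so expect it to
    # take a long time on a large array.
    btrfs balance start -dconvert=raid10 -mconvert=raid1c3 /mnt
    ```

    The balance runs online, but it hammers the disks while it rewrites chunks, so schedule it for a quiet period.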



  • pkese
    replied
    Originally posted by Snaipersky View Post
    Wouldn't raid 6 be better than 5 for I/O performance?
    On the same number of physical disks, RAID6 would perform approximately the same on reads and somewhat worse on writes (i.e. it needs to write parity to one more drive than RAID5).
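
    The write-side difference shows up in the textbook small-random-write penalty model: each small write costs RAID5 four I/Os (read data, read parity, write data, write parity) and RAID6 six (one more read and one more write for the second parity). A back-of-the-envelope sketch, not a benchmark; the numbers and names here are illustrative:

    ```python
    # Classic read-modify-write penalty per small random write:
    #   raid1: write both mirrors                         = 2 I/Os
    #   raid5: read data + read P, write data + write P   = 4 I/Os
    #   raid6: as raid5, plus read Q + write Q            = 6 I/Os
    PENALTY = {"raid1": 2, "raid5": 4, "raid6": 6}

    def random_write_iops(raw_array_iops, level):
        """Achievable small-random-write IOPS for a given raw array IOPS."""
        return raw_array_iops / PENALTY[level]

    for level in ("raid1", "raid5", "raid6"):
        print(level, random_write_iops(24_000, level))
    ```

    Sequential writes fare better on both parity levels, since full-stripe writes skip the read-modify-write cycle entirely.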



  • RussianNeuroMancer
    replied
    Originally posted by Snaipersky View Post
    Wouldn't raid 6 be better than 5 for I/O performance?
    It doesn't matter anymore: https://www.zdnet.com/article/why-ra...rking-in-2019/



  • Snaipersky
    replied
    Wouldn't raid 6 be better than 5 for I/O performance?



  • pkese
    replied
    Originally posted by piorunz View Post

    110TB with RAID1? Wouldn't you be better off with other RAID configuration than RAID1?
    Depends on how much I/O you need from the array. RAID5 will get you only 20-40% of the I/O performance that RAID1 would.



  • piorunz
    replied
    Originally posted by F.Ultra View Post
    My only problem with BTRFS at the moment is that directories that contain more than a few thousand files take 10-20s to list from a cold cache (this is on a BTRFS Raid1 system with 110TB, so it could be a case-specific problem).
    110TB with RAID1? Wouldn't you be better off with other RAID configuration than RAID1?



  • F.Ultra
    replied
    My only problem with BTRFS at the moment is that directories that contain more than a few thousand files take 10-20s to list from a cold cache (this is on a BTRFS Raid1 system with 110TB, so it could be a case-specific problem).



  • piorunz
    replied
    That's fantastic. Over the last few months, I've migrated /home (4TB) and /var (much smaller) to native raid1 on btrfs on my server. I am looking forward to migrating the / (root) partition too.

