Btrfs Has Many Nice Improvements, Better Performance With Linux 5.11

  • #71
    I have my concerns about CoW and SSD/M.2 devices. From what I understand, those devices do their own copy-on-write internally. Every so often (weekly, or more often) one has to run fstrim -A. What I do right now is run fstrim -A before doing a btrfs filesystem defrag, followed by another fstrim -A after the defrag completes.

    I would like to be able to mount /dev/XXX on /mnt and then run btrfs filesystem defrag /mnt. My concern is about ownership of the btrfs blocks. Does anyone know if it is safe to do what I want via /mnt, one partition at a time?
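    The routine described above can be sketched as a shell session. The device and mount point are hypothetical placeholders; `-A` trims every filesystem listed in /etc/fstab, and `-v` just makes fstrim report how much it discarded.

    ```shell
    # Sketch of the trim / defrag / trim routine described above.
    # Paths are placeholders -- adjust to your own setup.
    sudo fstrim -Av                          # trim all fstab filesystems first
    sudo btrfs filesystem defragment -r /    # recursive defrag of the mounted filesystem
    sudo fstrim -Av                          # trim again to discard blocks freed by the defrag
    ```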



    • #72
      Originally posted by duby229:
      Important data, so btrfs.... Just wait until an important file gets corrupted somehow, even if it was natural phenomena or by some program, and then the balance operation spreads the corruption...
      This is apparently your imagination. I've had two hard drives fail in my NAS btrfs array since 2012. One of those failures was something strange in the drive cache: it never reported read errors, but just returned zeros for some blocks, so yay for checksums, because MD RAID would have failed on that one. I've rebalanced the entire array between RAID 1 and RAID 10 and expanded it from two drives to six over time. Each time drives were added or failed drives were replaced, it got rebalanced again. And of course I've done small balance operations to clean up partially used disk chunks.

      It has never lost any data, and balance has never spread corruption.

      Really, the only time I have heard of that happening with any filesystem is when someone has bad RAM or a failing CPU. I've read stories of entire ZFS arrays being destroyed by RAM errors. People claim you don't need ECC, and sure, you usually get lucky with some sort of single-bit error, but it depends on what kind of RAM error you end up with. Some failure types are a lot worse than others.

      And some Google searching did reveal a few btrfs balance bugs that corrupted metadata, back in 2013. They were fixed.



      • #73
        Originally posted by lsatenstein:
        I have my concerns about CoW and SSD/M.2 devices. From what I understand, those devices do their own copy-on-write internally. Every so often (weekly, or more often) one has to run fstrim -A. What I do right now is run fstrim -A before doing a btrfs filesystem defrag, followed by another fstrim -A after the defrag completes.

        I would like to be able to mount /dev/XXX on /mnt and then run btrfs filesystem defrag /mnt. My concern is about ownership of the btrfs blocks. Does anyone know if it is safe to do what I want via /mnt, one partition at a time?
        The CoW behaviour of SSDs is due to the erase-block size: the flash media can't overwrite a small portion of an erase block. The controller has two ways to deal with this: 1) read the entire block into cache, modify the needed bits, and write the whole block back; 2) write the changes to another, empty block.

        Option 2 is much faster, but in order for the SSD to know which blocks are empty you need to run fstrim (a.k.a. discard).
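        On distros using systemd, the periodic trim mentioned above doesn't have to be run by hand; util-linux ships a timer for it. A minimal sketch (the timer fires weekly by default):

        ```shell
        # Enable the weekly fstrim timer shipped with util-linux
        sudo systemctl enable --now fstrim.timer
        # Check when it last ran and when it fires next
        systemctl list-timers fstrim.timer
        ```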

        The CoW in btrfs shouldn't impact this behaviour.

        Btrfs does not have partitions of its own, only subvolumes. They behave more like a special type of directory, or a bind mount (mount --bind).

        If you do want to run a recursive operation across all files in all subvolumes, you can indeed mount the top-level subvolume on /mnt and work from there. In fact, doing so is part of the btrfs design. I use this method to take backups.
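        Mounting the top-level subvolume can be sketched like this. The device name is a hypothetical placeholder; subvolid=5 is the fixed ID of the btrfs top-level subvolume, so this mount works regardless of which subvolume is set as the default.

        ```shell
        # Mount the top-level subvolume (ID 5); device is a placeholder
        sudo mount -o subvolid=5 /dev/sdb1 /mnt
        # Every subvolume now appears as a directory under /mnt,
        # so a recursive operation (backup, defrag, ...) sees them all
        ls /mnt
        sudo umount /mnt
        ```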

        Defragmenting files on an SSD isn't that wise, as it increases wear and shortens the drive's life span.



        • #74
          He could also use some non-Linux OS that doesn't get broken for the hell of it, or because corporate has set unrealistic deadlines that leave less time for testing. Or some Linux LTS distro.
