Btrfs File-System Mount Option Testing From Linux 3.14


  • #16
    Originally posted by curaga View Post
    It depends how the program is written. Most programs cause full overwrites. I'd be very surprised if btrfs (or any other fs, for that matter) expended the cpu cycles to detect changes - think 30GB video files and the lag it would cause.
    Thanks, being software-dependent makes sense. Do you know how the Btrfs compression algorithm works? The linked wiki says compression is skipped on a file if the first compressed portion doesn't yield a smaller size. I couldn't find any info on how existing files are processed. So, if a program changes only part of a file (where, on a non-compressed filesystem, only that part would be modified on disk), will Btrfs recompress and rewrite the entire file? Then again, if most programs rewrite entire files, then my worry is probably moot and just a what-if scenario.

    I ask because, some years back, I read a Microsoft TechNet or blog article about NTFS compression. It works on independent compression blocks. If a program modifies only a part of a file and the change is within a single compression block, then only that block is modified and recompressed. If it spans multiple blocks, then the entire file, or a chunk larger than the modified amount, is recompressed and rewritten to disk. So for the (few) programs that modify only ranges of bytes, there could actually be more write accesses or, possibly, more data written to disk than with NTFS compression disabled.
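    To make the scenario concrete, here's a minimal C sketch of that kind of byte-range update: an existing file is opened and a few bytes are overwritten in place with pwrite(), without rewriting the rest of the file. The path and offset are made up for the example; it only illustrates the access pattern being asked about, not anything about how NTFS or Btrfs compression is implemented internally.

    /* Partial in-place update: overwrite a small byte range of an existing
     * file without touching the rest of it. Hypothetical path and offset. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char patch[] = "PATCHED";
        int fd = open("/tmp/bigfile.bin", O_WRONLY);   /* no O_TRUNC: keep existing data */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        /* Overwrite 7 bytes at offset 1 MiB; on an uncompressed filesystem
         * only the block(s) covering this range are rewritten. */
        if (pwrite(fd, patch, strlen(patch), 1024 * 1024) < 0)
            perror("pwrite");
        close(fd);
        return 0;
    }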



    • #17
      Originally posted by guido12 View Post
      Thanks, being software-dependent makes sense. Do you know how the Btrfs compression algorithm works? The linked wiki says compression is skipped on a file if the first compressed portion doesn't yield a smaller size. I couldn't find any info on how existing files are processed. So, if a program changes only part of a file (where, on a non-compressed filesystem, only that part would be modified on disk), will Btrfs recompress and rewrite the entire file? Then again, if most programs rewrite entire files, then my worry is probably moot and just a what-if scenario.
      Go ask on #btrfs on freenode and then tell us here what you've heard. They're the ones who are experts at it, after all.



      • #18
        Originally posted by curaga View Post
        It depends how the program is written. Most programs cause full overwrites. I'd be very surprised if btrfs (or any other fs, for that matter) expended the cpu cycles to detect changes - think 30GB video files and the lag it would cause.
        I don't know how btrfs handles this, but the statement above is generally wrong!

        It is true that if you open a file in a common editor, modify it, and save it, the whole file is usually rewritten. That is because most editors read the whole file into memory and then write it back out (see the sketch at the end of this post).

        However, this is usually not the case for file-modifying applications that are NOT editors (or for editors designed to handle big files, like video editors), and for things like log writers. Such programs seek within the file, so a modification only affects the block being modified and maybe the following block(s).

        If btrfs compresses blockwise (which would make the most sense IMHO), then it would only need to recompress the affected blocks.

        In conclusion: for performance-relevant scenarios, I would expect that only the modified parts get recompressed, because editing a file in an editor is not an action that happens 100 times per second and thus is not a performance-relevant case.
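        To illustrate the difference, here's a rough C sketch of the editor-style full rewrite described above (the file name is hypothetical): the whole file is read into memory and written back with truncation, so every block gets rewritten, and with compression recompressed, even though only one byte changed.

        /* Editor-style full rewrite: read the entire file, change one byte,
         * write the whole thing back. Every block is rewritten even for a
         * tiny edit. Hypothetical file name; error handling kept minimal. */
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            FILE *f = fopen("/tmp/config.txt", "rb");
            if (!f) { perror("fopen"); return 1; }

            fseek(f, 0, SEEK_END);
            long size = ftell(f);
            rewind(f);

            char *buf = malloc(size > 0 ? size : 1);
            if (!buf) { perror("malloc"); return 1; }
            if (fread(buf, 1, size, f) != (size_t)size) { perror("fread"); return 1; }
            fclose(f);

            if (size > 0)
                buf[0] = '#';    /* the "edit": a single byte changes */

            /* "wb" truncates, so the whole file is written out from scratch. */
            f = fopen("/tmp/config.txt", "wb");
            if (!f) { perror("fopen"); return 1; }
            fwrite(buf, 1, size, f);
            fclose(f);
            free(buf);
            return 0;
        }

        Contrast this with a program that seeks and overwrites only the bytes it needs to change (as in the pwrite() sketch earlier in the thread): there, only the touched block(s) are written.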



        • #19
          Originally posted by endman View Post
          In 2 years' time, no one will hear anything about ZFS.
          Thanks for giving me the laugh! Good one!



          • #20
            Originally posted by renkin View Post
            It would be interesting if we could see some differences between defaults vs 16k leaf size vs skinny metadata. Though I think 16k was already moved to the default.
            You're right, DEFAULT_MKFS_LEAF_SIZE is finally 16k.

            And I agree, nodesize and skinny-metadata are the tunables I actually care about; everything else has sensible defaults or is a no-brainer (compression & noatime).
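            For reference, this is roughly how those tunables get set at mkfs and mount time (device and mount point are placeholders; with recent btrfs-progs a 16k nodesize and skinny-metadata are already the defaults, so the flags mainly matter on older tools):

            # create the filesystem with 16k nodes and skinny metadata extents
            mkfs.btrfs --nodesize 16384 -O skinny-metadata /dev/sdX1

            # mount with lzo compression and without atime updates
            mount -o compress=lzo,noatime /dev/sdX1 /mnt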

