
Approved: Fedora 33 Desktop Variants Defaulting To Btrfs File-System


  • starshipeleven
    replied
    Originally posted by kloczek View Post
    So what is wrong with fragmentation?
    Lower (effective) IO? Do you know how fragmentation works?

    The problem is that a "block", just like an SSD memory cell or an SMR zone, is a monolithic thing: I cannot read just parts of it, and a block is also big enough to contain parts of multiple files.
    If the data I need is spread across 400 blocks, or across 4,000 or 40,000, the performance changes, because now I need to read more blocks to get the same amount of data.
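
    For a concrete illustration (a rough sketch, assuming a Linux box with e2fsprogs installed; the path is just a placeholder), filefrag shows how many extents a file is split into, which is a decent proxy for how many separate requests a sequential read needs:

    # List the extents (contiguous runs of blocks) a file occupies.
    # More extents for the same file size = more seeks/requests to read it back.
    filefrag -v /var/lib/libvirt/images/test.qcow2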

    In my entire career using ZFS, since the Solaris 10 beta (~19 years), I have never needed to defragment anything.
    I'm not saying this is a major issue like it is for trash-grade filesystems like NTFS, just saying that having something to deal with it if it happens, something that does not involve copying terabytes of data to an external drive, would be nice. Because it still can happen, for some workloads.
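
    For what it's worth, on btrfs that "something to deal with it" already exists as both a one-shot command and a mount option; a minimal sketch (the mount point is a placeholder, and note that defragmenting can unshare reflinked/snapshotted extents):

    # One-off, recursive defragmentation of a subtree:
    btrfs filesystem defragment -r -v /mnt/data
    # Or let the filesystem deal with it continuously via the autodefrag mount option:
    mount -o remount,autodefrag /mnt/data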

    Try it, then we can talk..
    Can you lend me a Slowlaris license? Because if the only way to test your claims is to pay Oracle for a business license, I'm not doing that; I'm too stingy.



  • starshipeleven
    replied
    Originally posted by kloczek View Post
    You are talking about some OpenZFS issues.
    No one is using Slowlaris; when everyone talks about ZFS they really mean OpenZFS now, old man.

    Again, as ZFS uses a modified SLAB allocator, defragmentation is not needed.
    I find it hard to believe that a core filesystem feature like that was changed only in the relatively recent past, in the proprietary ZFS fork that no one uses because it's on Slowlaris.



  • kloczek
    replied
    Originally posted by intelfx View Post
    Yeah, no. That's a complete and utter lie.
    Just please at least try to google "zfs slab allocator", then read the first few links.



  • kloczek
    replied
    Originally posted by starshipeleven View Post
    Dunno, it still dramatically shrinks the space used, if the data can be deduped.
    Also, it's ONLINE even if it isn't done transparently. OFFLINE means that the filesystem is unmounted; not even crap like NTFS needs to be unmounted to be deduplicated (on servers where you can actually deduplicate it).

    All CoW filesystems can become fragmented if large files are edited constantly (like VMs and databases). SLAB isn't magic. I'm not talking about NTFS-level instant-fragmentation bullshit, but it will still eventually happen.

    Btrfs has at least autodefrag, so it will automatically deal with it. Not that it matters much for databases or VMs because its performance with such workloads is complete garbage, but it's ok for anything else.
    So what is wrong with fragmentation?
    Are you aware of the very simple fact that most read workloads are random, and only a very small number of them are sequential?
    What happens when the distribution of blocks is random and you are reading randomly? Mostly nothing.. you have roughly the same level of randomness on reads.
    In my entire career using ZFS, since the Solaris 10 beta (~19 years), I have never needed to defragment anything.
    The other thing that reduces the impact of any fragmentation is proper caching of data and metadata. Here ZFS is superior: because that caching is not generic but designed specifically for ZFS, it is some of the most effective caching in the entire history of operating systems.
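
    If you want to see that cache in action, the ARC exposes its statistics; a quick sketch for OpenZFS on Linux (on Solaris/illumos the equivalent data comes from kstat):

    # ARC counters exposed by OpenZFS on Linux:
    grep -E '^(hits|misses|size|c_max) ' /proc/spl/kstat/zfs/arcstats
    # The arcstat utility shipped with OpenZFS prints the same data once per second:
    arcstat 1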
    Really, you would know all that if you had at least used ZFS. As long as you have never used it, this conversation is like talking about some food with someone who has never had the opportunity to taste it.
    Try it, then we can talk..



  • kloczek
    replied
    Originally posted by pal666 View Post
    By lowering available memory (a much more precious resource). It's not hard to implement, it's just not very useful; that's the only reason it's not implemented yet for btrfs. And it's called inband; online means "without unmount".
    You are writing that without having used ZFS even once.
    Shame.
    Intuition completely misguides everyone when it comes to things above some level of complexity.
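
    For anyone who wants to measure the memory cost being argued about here rather than guess, a quick sketch (the pool/dataset names are placeholders): dedup is a per-dataset property, and the pool reports the size of the dedup table (DDT), which is what actually eats RAM.

    # Enable inline deduplication on one dataset only:
    zfs set dedup=on tank/backups
    # Inspect the dedup table statistics once data has been written:
    zpool status -D tank
    # Or estimate the dedup ratio and DDT size without enabling it, via simulation:
    zdb -S tank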



  • kloczek
    replied
    Originally posted by starshipeleven View Post
    bullshit https://forum.proxmox.com/threads/ho...fs-pool.42931/
    Hey guys.. This is somewhat of a question, issue, and "viability of a feature request" all rolled into one. First, ZFS is awesome, and I use it to host VMs. I am looking seriously into ZFS as a so...

    Automation of snapshotting a ZFS pool, sending it to S3, destroying the pool, recreating it, and importing the snapshot in order to fix ZFS fragmentation - salesforce/zfs_defrag


    https://www.kernel.org/doc/html/late...compact-memory
    echo 1 > /proc/sys/vm/compact_memory
    It's not commonly needed as only some specific workloads need it, but yes sometimes it's necessary.
    You are talking about some OpenZFS issues.
    Again, as ZFS uses a modified SLAB allocator, defragmentation is not needed.
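
    For context, the workaround that the linked salesforce/zfs_defrag tool automates is essentially a rewrite of the data via snapshot plus send/receive; a minimal local sketch (pool and dataset names are placeholders; in practice the tool stages the stream in S3):

    # Snapshot the dataset, stream it out, and receive it into a fresh dataset.
    # Receiving rewrites all blocks contiguously, which is what "defragments" them.
    zfs snapshot tank/vmdata@defrag
    zfs send tank/vmdata@defrag | zfs receive backup/vmdata
    # After verifying the copy, the old dataset can be destroyed and the new one renamed.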



  • johnvardas
    replied
    Originally posted by starshipeleven View Post
    Dropbox added btrfs and all the other noteworthy filesystems back after a few months https://help.dropbox.com/installs-in...m-requirements

    A Dropbox folder on a hard drive or partition formatted with one of the following file system types:
    • ext4
    • zfs (on 64-bit systems only)
    • eCryptFS (backed by ext4)
    • xfs (on 64-bit systems only)
    • btrfs

    Oh well, too bad I am using f2fs, they definitely don't support it.
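
    If you're not sure which of those filesystems a given Dropbox folder actually sits on, a quick check (the path is just the usual default, adjust as needed):

    # Print the filesystem type backing the directory:
    df -T ~/Dropbox
    # Or, with GNU stat:
    stat -f -c %T ~/Dropbox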



  • King InuYasha
    replied
    Originally posted by AdamW View Post

    That would be a neat argument if it wasn't in the wrong order. We actually used to have at least one btrfs developer (Josef Bacik, who's still involved in Fedora and is one of the sponsors of the Change; I think there were more, but not 100% sure) on the payroll back when Red Hat *did* have a plan to use it. He/they moved away from RH to companies that were more enthusiastic about btrfs when RH's strategy changed.
    And to Red Hat's credit, the company was involved very early in the development of Btrfs. Fedora had pioneered the concept of boot-to-snapshot with Btrfs a decade ago, several years before SUSE took the idea and built a better implementation with Snapper. Anaconda's support for Btrfs is still pretty good, even though some of the specific handling around managing existing subvolumes on disk has rotted a bit due to lack of proper care and feeding.

    A confluence of events occurred that made it so things stalled out back then, but here we are a decade later. Let's see how it goes. I'm excited.



  • AdamW
    replied
    Originally posted by gnulinux82 View Post

    It's because they have 0 upstream developers on payroll. They couldn't dictate and dominate development like they do with most other upstream projects, so they just took their football and ran home.

    It's good to see Red Hat playing second fiddle for once. They need a lesson in humility.
    That would be a neat argument if it wasn't in the wrong order. We actually used to have at least one btrfs developer (Josef Bacik, who's still involved in Fedora and is one of the sponsors of the Change; I think there were more, but not 100% sure) on the payroll back when Red Hat *did* have a plan to use it. He/they moved away from RH to companies that were more enthusiastic about btrfs when RH's strategy changed.



  • gnulinux82
    replied
    Originally posted by intelfx View Post
    I read somewhere that Red Hat simply can't/won't commit to supporting such a fast-paced project as btrfs for 10 years. IOWs, it's a backporting nightmare.
    It's because they have 0 upstream developers on payroll. They couldn't dictate and dominate development like they do with most other upstream projects, so they just took their football and ran home.

    It's good to see Red Hat playing second fiddle for once. They need a lesson in humility.

