Approved: Fedora 33 Desktop Variants Defaulting To Btrfs File-System


  • Originally posted by kloczek View Post

    Defragmentation is completely not needed.
    ZFS uses a SLAB allocator, which prevents fragmentation.
    Yeah, no. That's a complete and utter lie.

    First, the only thing slab allocators guarantee is a hard 50% lower bound on internal utilization (equivalently, a cap on internal fragmentation). Slab allocators do not guarantee anything else.

    As for external fragmentation, slab allocators typically _worsen_ it. Imagine writing 128K of data in one write(), and then 64K more. With zfs, you are guaranteed to fragment at this boundary, because different slabs _will_ be used for 128K and 64K writes.

    Also you get funny fireworks when your disk is almost full and there are no slabs left for your chosen block size. Read this thread backwards, someone has already mentioned the absolutely wondrous hack zfs had to do to fix this.
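    To make the external-fragmentation point concrete, here is a toy size-class allocator in Python (a deliberately simplified sketch, not the actual ZFS allocator): because each slab serves exactly one block size, the 128K write and the 64K tail from the example above can never land next to each other on disk.

```python
# Toy model of a slab/size-class allocator (NOT the real ZFS code):
# the disk is carved into fixed-size slabs, and each slab serves
# exactly one block size, so allocations of different sizes can
# never be physically contiguous.

SLAB_SIZE = 1024  # KiB per slab in this toy model

class SlabAllocator:
    def __init__(self):
        self.next_slab_offset = 0  # next unused slab start, in KiB
        self.slabs = {}            # block size -> (slab start, KiB used)

    def alloc(self, size_kib):
        """Allocate one block of size_kib; return its disk offset in KiB."""
        start, used = self.slabs.get(size_kib, (None, SLAB_SIZE))
        if used + size_kib > SLAB_SIZE:  # current slab full: carve a new one
            start = self.next_slab_offset
            self.next_slab_offset += SLAB_SIZE
            used = 0
        self.slabs[size_kib] = (start, used + size_kib)
        return start + used

disk = SlabAllocator()
first = disk.alloc(128)   # 128K write: goes into the 128K slab at offset 0
second = disk.alloc(64)   # the 64K tail: forced into a *different* slab
print(first, second)      # logically adjacent data, 1024 KiB apart on disk
```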

    Originally posted by kloczek View Post
    On Linux the SLAB allocator is used in memory management. Have you heard of anyone doing "RAM defragmentation" on Linux?
    Everyone is doing RAM defragmentation on Linux.

    Google "memory compaction", you will be surprised.

    Works for user pages only, of course. Do you know the single most significant source of non-movable pages in Linux (which is also the reason why you generally can't allocate 1G hugepages after boot, and why stuff like CMA with boot-time memory reservation had to be invented for non-scatter-gather DMA in embedded hardware)? You guessed right: slabs.
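    (For the curious: the per-order free counts live in /proc/buddyinfo. Below is a Python sketch of reading them, run here on a hard-coded sample line with hypothetical numbers so it works anywhere. The column for order N counts free blocks of 2^N pages; plenty of free order-0 pages with empty high orders is exactly the fragmentation picture that compaction tries to fix.)

```python
# Parse a /proc/buddyinfo-style line. Hard-coded sample (hypothetical
# numbers) so the sketch runs anywhere; on Linux you would read the
# real file with open("/proc/buddyinfo").readlines() instead.
sample = "Node 0, zone   Normal   212    95    41    12     3     1     0     0     0     0     0"

fields = sample.split()
zone = fields[3]
counts = [int(x) for x in fields[4:]]  # free blocks per order 0..10

PAGE_KIB = 4
free_kib = sum(c * (2 ** order) * PAGE_KIB for order, c in enumerate(counts))
largest_order = max((o for o, c in enumerate(counts) if c > 0), default=None)

print(zone, free_kib, largest_order)
# Plenty of KiB free, but nothing larger than an order-5 (128 KiB) block:
# a high-order allocation fails despite the free memory, and compaction
# can only help if the pages in the way are movable (i.e., not slabs).
```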
    Last edited by intelfx; 16 July 2020, 08:05 PM.

    Comment


    • Whoo boy, this thread sure is...something.

      So just to comment on a few things:

      1. This is very much a Fedora decision. You know how we keep saying that Fedora isn't just a beta for RHEL? Well...consider this as Exhibit A. This isn't happening in Fedora because it's going to happen in RHEL 9 next. It's happening in Fedora because some people very definitely wearing Fedora hats wanted it to happen in Fedora, and convinced FESCo that it should. (Also note that there is an element of experiment about this Change: it's landing on the explicit understanding that we might decide, in a few days or weeks or months, that it was a bad idea and we should change back.)

      It's not coming from the folks who set RHEL's storage strategy (take a look at the names on the Change proposal) and it does not tell you anything about changes to that storage strategy. On this topic - note that this Change is to the Fedora *Workstation* default. Fedora *Server*'s default remains xfs-on-LVM.

      2. As the Change has landed *so far*, we are kinda intentionally not doing too much stuff to take advantage of btrfs' super shiny advanced features, broadly on the "learn to walk before you learn to run" principle. Just changing the default FS is a pretty major first step, and we want to shake that out thoroughly before we start piling more changes up on top of it. But we reserve the right to start pilin' away in future.

      3. Can you folks stop calling each other morons please? Pretty please? Thanks!



      • Originally posted by starshipeleven View Post
        I'm probably just a noob and used the wrong name. I just added the ZFS pool to libvirt (as a raw ZFS pool, so it's using it like it would LVM; it's not just a filesystem with virtual disk files in it), and it created the VM disks on its own, so if it's doing ZVOLs by default, I'm using zvols.
        Plus a couple SSDs for "log" (in mirror) and "cache".


        Yeah, most people think it's some magic thing that will shrink their data, but it's good only if you have A LOT of duplicated data, and that's uncommon. Transparent compression is what most people can benefit from.
        Yeah, it seems like the UI was written to just give it full pool control (with zvols), and in a lot of cases that would be fine, but if you want to nest it in a dataset (say pool/kvm) you need to use the workaround I mentioned.

        Also, an oddity about libvirt: it supports bhyve as a hypervisor, though poorly. Would be cool to see this fixed.



        • Originally posted by intelfx View Post
          I read somewhere that Red Hat simply can't/won't commit to supporting such a fast-paced project as btrfs for 10 years. IOWs, it's a backporting nightmare.
          It's because they have 0 upstream developers on payroll. They couldn't dictate and dominate development like they do with most other upstream projects, so they just took their football and ran home.

          It's good to see Red Hat playing second fiddle for once. They need a lesson in humility.



          • Originally posted by gnulinux82 View Post

            It's because they have 0 upstream developers on payroll. They couldn't dictate and dominate development like they do with most other upstream projects, so they just took their football and ran home.

            It's good to see Red Hat playing second fiddle for once. They need a lesson in humility.
            That would be a neat argument if it wasn't in the wrong order. We actually used to have at least one btrfs developer (Josef Bacik, who's still involved in Fedora and is one of the sponsors of the Change; I think there were more, but not 100% sure) on the payroll back when Red Hat *did* have a plan to use it. He/they moved away from RH to companies that were more enthusiastic about btrfs when RH's strategy changed.



            • Originally posted by AdamW View Post

              That would be a neat argument if it wasn't in the wrong order. We actually used to have at least one btrfs developer (Josef Bacik, who's still involved in Fedora and is one of the sponsors of the Change; I think there were more, but not 100% sure) on the payroll back when Red Hat *did* have a plan to use it. He/they moved away from RH to companies that were more enthusiastic about btrfs when RH's strategy changed.
              And to Red Hat's credit, the company was involved very early in the development of Btrfs. Fedora had pioneered the concept of boot-to-snapshot with Btrfs a decade ago, several years before SUSE took the idea and built a better implementation with Snapper. Anaconda's support for Btrfs is still pretty good, even though some of the specific handling around managing existing subvolumes on disk has rotted a bit due to lack of proper care and feeding.

              A confluence of events occurred that made it so things stalled out back then, but here we are a decade later. Let's see how it goes. I'm excited.



              • Originally posted by starshipeleven View Post
                Dropbox added btrfs and all the other noteworthy filesystems back after a few months https://help.dropbox.com/installs-in...m-requirements

                A Dropbox folder on a hard drive or partition formatted with one of the following file system types:
                • ext4
                • zfs (on 64-bit systems only)
                • eCryptFS (backed by ext4)
                • xfs (on 64-bit systems only)
                • btrfs

                Oh well, too bad I am using f2fs; they definitely don't support it.



                • Originally posted by starshipeleven View Post
                  bullshit https://forum.proxmox.com/threads/ho...fs-pool.42931/

                  Automation of snapshotting a zfs pool and sending it to S3, destroy the pool, recreate, and import the snapshot in order to fix ZFS Fragmentation - salesforce/zfs_defrag


                  https://www.kernel.org/doc/html/late...compact-memory
                  echo 1 > /proc/sys/vm/compact_memory
                  It's not commonly needed as only some specific workloads need it, but yes sometimes it's necessary.
                  You are talking about some OpenZFS issues.
                  Again, as ZFS uses a modified SLAB allocator, defragmentation is not needed.



                  • Originally posted by pal666 View Post
                    By lowering available memory (a much more precious resource). It's not hard to implement, it's just not very useful; that's the only reason it's not implemented yet for btrfs. And it's called inband; online means "without unmounting".
                    You are writing that without having used ZFS even once.
                    Shame.
                    Intuition completely misleads everyone when it comes to things above some level of complexity.



                    • Originally posted by starshipeleven View Post
                      Dunno, it still dramatically shrinks the space used, if the data can be deduped.
                      Also, it's ONLINE even if it isn't done transparently. OFFLINE means that the filesystem is unmounted; not even crap like NTFS needs to be unmounted to be deduplicated (on servers, where you can actually deduplicate it).

                      All CoW filesystems can become fragmented if large files are edited constantly (like VMs and databases). SLAB isn't magic. I'm not talking about NTFS-level instant fragmentation, but it will still eventually happen.

                      Btrfs has at least autodefrag, so it will automatically deal with it. Not that it matters much for databases or VMs because its performance with such workloads is complete garbage, but it's ok for anything else.
                      So what is wrong with fragmentation?
                      Are you aware of the very simple fact that most read workloads are random, and only a very small number of them are sequential?
                      What happens when the distribution of the blocks is random and you are reading randomly? Mostly nothing .. you have ~the same level of randomness on reads.
                      In my entire career using ZFS, since the Solaris 10 beta (~19 years), I have never needed to defragment anything.
                      Another thing which reduces the impact of any fragmentation is proper caching of data and metadata. Here ZFS is superior: as that caching is not generic but designed specifically for ZFS, it provides some of the most effective caching in the entire history of operating systems.
                      Really, you would know all that if you had at least been using ZFS. As long as you have never used it, this conversation is like talking about some food with someone who has never had the opportunity to taste it.
                      Try it, then we can talk ..
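                      For what it's worth, the CoW fragmentation effect being argued about here is easy to model. In the sketch below (a toy model, not real filesystem code), a file starts as one contiguous run of physical blocks; each rewrite relocates one logical block to a fresh physical location, which is what copy-on-write does instead of overwriting in place, so random in-place updates (the VM/database pattern) steadily grow the extent count:

```python
import random

def cow_rewrites(file_blocks, n_rewrites, seed=42):
    """Toy CoW model: count extents after random block rewrites."""
    rng = random.Random(seed)
    mapping = list(range(file_blocks))  # logical -> physical, contiguous
    next_free = file_blocks             # next fresh physical block
    for _ in range(n_rewrites):
        # CoW: the rewritten block moves to a new physical location
        mapping[rng.randrange(file_blocks)] = next_free
        next_free += 1
    # An extent is a run of physically consecutive blocks.
    return 1 + sum(1 for a, b in zip(mapping, mapping[1:]) if b != a + 1)

print(cow_rewrites(1000, 0))     # untouched file: 1 extent
print(cow_rewrites(1000, 500))   # heavy random rewrites: hundreds of extents
```

                      Whether that extent growth actually hurts depends on the workload, which is really the crux of the disagreement above: for purely random reads it mostly doesn't matter, while for sequential scans of a once-sequential file it does.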

