Ubuntu Puts Out A ZFS Reference Guide


  • Ubuntu Puts Out A ZFS Reference Guide

    Phoronix: Ubuntu Puts Out A ZFS Reference Guide

    With Canonical heavily promoting ZFS for Ubuntu 16.04 with the file-system support being added to their default kernel, their latest work is on creating an Ubuntu ZFS guide for those wanting to play with this advanced file-system...
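    Since the guide is aimed at people who just want to play with ZFS, it's worth noting that pools can be tried out on plain files instead of real disks. A minimal sketch, assuming Ubuntu 16.04's `zfsutils-linux` package and root privileges (the pool and file names here are made up for illustration):

```shell
# Install the ZFS userland tools (the kernel support ships with Ubuntu 16.04+)
sudo apt install zfsutils-linux

# Create two 1 GiB sparse files to act as a mirrored pair of "disks"
truncate -s 1G /tmp/zfs-disk0 /tmp/zfs-disk1

# Build a mirrored pool named "playground" on top of them
sudo zpool create playground mirror /tmp/zfs-disk0 /tmp/zfs-disk1

# Create a dataset with lz4 compression and check the pool's health
sudo zfs create -o compression=lz4 playground/data
zpool status playground

# Tear it all down when finished experimenting
sudo zpool destroy playground
```

    File-backed vdevs are slow and unsuitable for real data, but they let you exercise snapshots, send/receive, and scrubs without touching actual hardware.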


  • #2
    Just a question: are there any references to a writeup of why Canonical has chosen to use ZFS instead of BTRFS?

    I am just curious about their evaluation and perhaps the status of BTRFS in Ubuntu. I don't care for flamefests or half-assed personal opinions.

    Thanks.



    • #3
      Originally posted by Veto View Post
      Just a question: are there any references to a writeup of why Canonical has chosen to use ZFS instead of BTRFS?
      Well, I'm not sure there are references recent enough to compare the two fairly, but I can tell you that ZFS works amazingly well on Linux (on Arch Linux we have had it forever via the AUR and pkgbuild.com), and it supports some features not yet present in BTRFS, like deduplication, fast and error-free automatic resilvering, and pool export.

      Additionally, ZFS has a decade of rigorous testing behind it and support from the heavyweights of the storage world, and it is well proven for hot database volumes, whereas BTRFS is still very young and untested in several of these scenarios.

      So from both a technical and a business point of view, ZFS is the far superior choice over BTRFS, at least for a few more years.



      • #4
        Originally posted by Veto View Post
        Just a question: are there any references to a writeup of why Canonical has chosen to use ZFS instead of BTRFS?
        CoreOS switched from BTRFS to EXT4 + OverlayFS some time ago, so apparently BTRFS isn't very stable yet.



        • #5
          Originally posted by Veto View Post
          Just a question: are there any references to a writeup of why Canonical has chosen to use ZFS instead of BTRFS?

          Just my opinion, but I think it should be fairly objective:
          ZFS is stable and well tested. Btrfs is still "the next great thing". Of course some big companies run it already, but those companies hire a bunch of engineers to keep Btrfs from eating their data. Shipping Btrfs to commercial customers still seems riskier than shipping ZFS.

          So Canonical/Ubuntu didn't succumb to NIH; they just chose the more stable option. And still everyone is raging about how Canonical/Ubuntu could dare to do this. This world is really crazy.



          • #6
            It also seems like the Btrfs momentum has slowed down, somewhat towards the end of 2015 and more so in 2016. We haven't seen much recent news about it.

            However, I'm not a big fan of Canonical shipping Ubuntu with their "custom" kernel. They had to do this because of the CDDL licensing restrictions on ZFS, right? So that means Ubuntu's default kernel is tainted.



            • #7
              Originally posted by wodencafe View Post
              It also seems like the Btrfs momentum has slowed down, somewhat towards the end of 2015 and more so in 2016. We haven't seen much recent news about it.
              No news doesn't mean the developers aren't working on it. It's actually good if they are NOT adding more features. What they need to do is fix bugs; it's not heroic work and doesn't sound flashy, but it makes Btrfs usable for more people.



              • #8
                What never convinced me about ZFS is its seemingly carefree, "waste oriented" approach to resources and device space. Just look at its very first design choice: 128-bit addressing (WTF??), or 320 bytes of dedup-table entry for every single cluster (if a SHA256 hash takes just 32 bytes, what on earth does it do with 320?)...

                Yet with 64-bit addressing and 4K clusters you can already address 16 billion 4TB block devices.

                I suspect part of its fast growth in features, testing and adoption comes from extending the on-disk format and logic the easy way, spending space and resources freely, at the expense of a more careful design phase (which takes far more research and time). IMHO one of the first qualities of a file-system is how smartly and tightly it manages resources, free space, indexes and metadata in general; only that approach really produces something that scales dynamically.
                Instead, its implementation scales mostly thanks to hardware, and they add feature after feature non-stop in a way I have never seen in any other project as delicate as a file-system.

                No offense to the people involved, but from reading articles here and there, ZFS seems to be pushing for fast growth/adoption because they need to test something that was never tightly designed/planned (at the cost of leaning happily on resources).
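                The numbers being argued here can be checked with quick back-of-the-envelope arithmetic. A sketch in Python, using the ~320 bytes per dedup-table entry cited above and the commonly cited 128 KiB default recordsize (block sizes are illustrative, not a claim about any particular pool):

```python
# Capacity reachable with 64-bit block addressing and 4 KiB clusters
block_size = 4 * 1024                     # 4 KiB clusters
max_blocks = 2 ** 64                      # 64-bit address space
capacity = max_blocks * block_size        # total addressable bytes (2**76)
devices_4tb = capacity // (4 * 10 ** 12)  # expressed as 4 TB (decimal) devices
print(f"{capacity // 2**70} ZiB, i.e. ~{devices_4tb // 10**9} billion 4 TB devices")

# RAM cost of ZFS dedup: ~320 bytes of dedup-table entry per unique block.
# With 128 KiB records, 1 TiB of unique data needs roughly:
records = 2 ** 40 // (128 * 1024)         # number of 128 KiB blocks in 1 TiB
ddt_bytes = records * 320
print(f"dedup table: ~{ddt_bytes / 2**30:.1f} GiB of RAM per TiB of unique data")
```

                The first figure lands in the same ballpark as the "16 billion" claim above; the second is why dedup is usually recommended only for machines with plenty of RAM.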



                • #9
                  Originally posted by man-walking View Post
                  What never convinced me about ZFS is its seemingly carefree, "waste oriented" approach to resources and device space...

                  Regarding efficiency:

                  Well, why isn't reiser4 the default filesystem in the Linux kernel, then?

                  It has a pluggable design, is very efficient with resources thanks to its dancing trees, tail packing, etc., and it also supports checksums and more.



                  • #10
                    Originally posted by Veto View Post
                    Just a question: are there any references to a writeup of why Canonical has chosen to use ZFS instead of BTRFS?
                    Because customers have been asking for it for years! For many important corporate users, it makes moving from Solaris to Linux easy. There's a tremendous amount of sysadmin knowledge out there that is ZFS-specific. Btrfs is very interesting, but it's not quite there yet in terms of maturity or feature parity.

                    Most home users shouldn't care about this at all -- you're better off with ext4 on a home desktop system. This is for admins and advanced users managing complex disk farms.

