Ubuntu's Ubiquity Installer Begins Adding ZFS Encryption Support


  • #21
    Originally posted by k1e0x View Post
    I see the exact opposite. I've seen btrfs for decades saying "it's stable, it's stable, oh wait.. ok it's stable now".
    The core has been stable for a while, just as you claim for ZFS. What's broken are secondary features that apparently aren't that needed by the corporate overlords.

    ZFS was never released in a non-stable state.
    Btrfs development has been open source from the start; that doesn't mean it was "released" the moment you started seeing it upstream.
    Sun used it internally for their company-wide NFS home directories for a long time before release too. To this day... as far as I know.. nobody has used btrfs like that.
    NAS companies have been using it in commercial products for at least 2-3 years, and SUSE/openSUSE migrated to it long ago as well.

    They had encryption early on.. like, say, pool version 31? It was broken though.. (The rumor is this wasn't a priority for Sun and they didn't put their A team on it.)
    Just like some other btrfs features that are not working even now.
    The RAM usage hasn't changed.. (I mean, it's cache, what would change about it?)
    Record size: only more recently, when ZoL reworked the codebase, was the default record size increased to 128k; in the original ZFS (and the FreeBSD port) it was left at 16k for performance reasons, and the increase raised the RAM footprint significantly.
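    To put a rough number on that point (a back-of-the-envelope sketch only: the cached-record count is hypothetical, and real ARC accounting is far more involved):

        # Upper-bound sketch: RAM held by cached data if the ARC keeps the same
        # number of whole records at two different recordsize settings.
        CACHED_RECORDS = 100_000  # hypothetical working set

        for recordsize in (16 * 1024, 128 * 1024):
            footprint = CACHED_RECORDS * recordsize
            print(f"recordsize={recordsize // 1024:>3}K -> ~{footprint / 2**30:.1f} GiB cached")
        # 16K -> ~1.5 GiB; 128K -> ~12.2 GiB for the same number of cached records.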

    You still can't shrink it because the enterprise didn't envision that ever being a thing. (Only home users really want to shrink.)
    Just as only home users want RAID 5/6, apparently, since what we recently got was RAID1 with 3 or 4 copies instead of a 100% fully stable RAID5/6.

    But yes.. the community did add a ton of features, fixes and improvements. The difference is they were adding them to something mature. The core of ZFS hasn't changed that much from its original release at pool version 28 in Solaris 10.
    You haven't looked at the codebase; are you even able to? There have been at least 3-4 major reworks of the core logic so far, for example to add encryption.

    Features aren't just "added on top"; you can't just "add on top" with a serious filesystem.



    • #22
      Originally posted by starshipeleven View Post
      The core has been stable for a while, just as you claim for ZFS. What's broken are secondary features that apparently aren't that needed by the corporate overlords.

      Btrfs development has been open source from the start; that doesn't mean it was "released" the moment you started seeing it upstream.
      NAS companies have been using it in commercial products for at least 2-3 years, and SUSE/openSUSE migrated to it long ago as well.

      Just like some other btrfs features that are not working even now.
      Record size: only more recently, when ZoL reworked the codebase, was the default record size increased to 128k; in the original ZFS (and the FreeBSD port) it was left at 16k for performance reasons, and the increase raised the RAM footprint significantly.

      Just as only home users want RAID 5/6, apparently, since what we recently got was RAID1 with 3 or 4 copies instead of a 100% fully stable RAID5/6.

      You haven't looked at the codebase; are you even able to? There have been at least 3-4 major reworks of the core logic so far, for example to add encryption.

      Features aren't just "added on top"; you can't just "add on top" with a serious filesystem.
      Show me where SuSE corporate has all their employees on an NFS btrfs pool (and not just a single volume on their local systems). I highly doubt this, and.. SuSE (~1750 employees) is a lot smaller than Sun was (or Oracle, even larger). There is a HUGE difference between a disk in your PC and your enterprise storage shares.

      Sun used that internally for corporate-wide NFS.. that is *really* impressive. Microsoft tried this with some of their prototypes and failed. (Failed twice, according to my sources.) If SuSE actually is doing it, I would find that impressive and would want to know more about it.

      Yes.. just look.. it's not worth the argument.. they have taken a different path. Btrfs has some really cool things about it, and for me I see it as a research filesystem. I find it interesting.. but I use ZFS at work and have done so for a very long time. I don't think there is anything *wrong* with either. They serve different roles and have a different focus.. and that is ok. I don't even really see them as competitors.
      Last edited by k1e0x; 10 June 2020, 03:57 PM.



      • #23
        Originally posted by Alexmitter View Post
        I still don't get why people want a filesystem in Linux that has no chance of ever being mainlined, as it is clearly not license-compatible with the GPL.
        Other question, why should anyone prefer ZFS over BTRFS?

        Could someone please explain that?
        Looking at the specification, I find two big advantages in ZFS:
        - it has an integrated cache (ARC), which can mitigate the performance penalties of CoW filesystems (a toy sketch of the idea follows this list)
        - it supports more robust RAID5/6/7 than BTRFS
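
        To show what I mean by an integrated cache, here is a toy Python sketch of the ARC idea. It is NOT the real ZFS code (the real ARC also keeps "ghost" lists of evicted keys and adaptively resizes the two halves), but it shows the recency/frequency split that makes ARC resist scan pollution better than a plain LRU:

            from collections import OrderedDict

            class MiniARC:
                """Toy ARC-flavoured cache: one LRU list for blocks seen once,
                one for blocks seen repeatedly."""
                def __init__(self, capacity):
                    self.capacity = capacity
                    self.recent = OrderedDict()    # blocks seen once (like T1)
                    self.frequent = OrderedDict()  # blocks seen repeatedly (like T2)

                def get(self, key):
                    if key in self.recent:          # second touch: promote
                        self.frequent[key] = self.recent.pop(key)
                        return self.frequent[key]
                    if key in self.frequent:        # refresh LRU position
                        self.frequent.move_to_end(key)
                        return self.frequent[key]
                    return None

                def put(self, key, value):
                    if key in self.frequent:        # already hot: just update
                        self.frequent[key] = value
                        self.frequent.move_to_end(key)
                        return
                    self.recent[key] = value
                    while len(self.recent) + len(self.frequent) > self.capacity:
                        victim = self.recent if self.recent else self.frequent
                        victim.popitem(last=False)  # evict least recently used

        A one-pass scan only ever churns the recent list, so the frequent half keeps the hot working set.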

        However, even BTRFS has its own advantages:
        - it can be "easily" shrunk (though shrinking is a very costly operation for any filesystem)
        - it supports reflink (a minimal sketch follows below)
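
        For reflink, a minimal sketch of what `cp --reflink=always` does under the hood (Linux-only; the FICLONE ioctl is honoured by Btrfs and XFS, and the constant comes from <linux/fs.h>):

            import fcntl

            FICLONE = 0x40049409  # _IOW(0x94, 9, int) from <linux/fs.h>

            def reflink(src_path, dst_path):
                # Clone src into dst: both files share the same extents until
                # one of them is written to (copy-on-write).
                with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
                    fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())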

        From a reliability point of view, in my experience (on desktops since 2008, with single-disk and RAID1 profiles) BTRFS has been quite solid: I even used it with broken disks (sometimes one stopped working), but I never lost the filesystem.

        In a multi-disk scenario, my feeling is that ZFS is more robust because of Sun's direct support. However, I don't know how true this remains for OpenZFS, which is a derivative work (it is quite hard to make good software, but quite easy to make messy software).
        I am genuinely interested in seeing some performance benchmarks showing the workloads where one performs better than the other.



        • #24
          Originally posted by k1e0x View Post
          Yes.. just look.. it's not worth the argument.. they have taken a different path. Btrfs has some really cool things about it, and for me I see it as a research filesystem. I find it interesting.. but I use ZFS at work and have done so for a very long time. I don't think there is anything *wrong* with either. They serve different roles and have a different focus.. and that is ok. I don't even really see them as competitors.
          That just means you can ignore what happened.

          Originally posted by kreijack View Post
          Looking at the specification, I find two big advantages in ZFS:
          - it has an integrated cache (ARC), which can mitigate the performance penalties of CoW filesystems
          - it supports more robust RAID5/6/7 than BTRFS

          Is that "more robust RAID5/6/7" in fact true? The answer is no, it is not.
          (link to a Phoronix article)

          Btrfs's own built-in RAID5/6/7 has major bugs, to the point that you are told to use the Linux kernel's software RAID instead, but that is not the end of the story. ZFS's zpool RAID5/6/7 falls over on SMR drives; interestingly enough, Btrfs RAID5+ and the Linux kernel's own software RAID do not.

          Yes, people have found that a ZFS RAID5 zpool has an insane rebuild time when given an SMR drive.

          Originally posted by kreijack View Post
          However, even BTRFS has its own advantages:
          - it can be "easily" shrunk (though shrinking is a very costly operation for any filesystem)
          - it supports reflink
          You missed two very big advantages:
          1) It is in the upstream Linux kernel, so it benefits from upstream QA hardware access: issues like the SMR problems get detected while the hardware is still in development labs, and the patches to fix them come from those labs.
          2) It is under a license that all major hard drive and solid-state storage vendors accept.

          Sorry, my dislike of the CDDL license is not my own unique idea: WD does not like the CDDL either, and it is not the only storage vendor that doesn't. Being under the wrong license and not mainlined into the Linux kernel causes its own fair share of problems. Those pushing ZFS don't want to admit there is a problem, because that would mean facing up to the license problem and dealing with it, which is not going to be simple. The longer they don't, the more end users will get hurt by things like the recent WD Reds with SMR, a direct result of not being in the hardware development labs because of the license. Please note I am not saying ZFS has to become GPLv2; any GPLv2-compatible license would do.

          Yes, some of the btrfs RAID5 issues in its own pool-like solution come from attempting to work out how to do RAID5 without having issues on SMR, and getting it wrong; of course, that is why the patches were designed oddly and did not make sense to the people reviewing them, who all had CMR drives..

          k1e0x is kind of right that Btrfs is a research filesystem. The reality is that all the main filesystems used by Linux distributions and built into the Linux kernel source are used by those researching new storage technology, and they receive updates from that research. Projects like ZFS on Linux and OpenZFS are out in the cold in this department, which causes problems now, and will keep causing problems, for those using ZFS unless something changes. Those developing physical storage are not going to change: the CDDL license is incompatible with what they require. Heck, even GPLv2 is incompatible with what they require, but many GPLv2-compatible licenses like Apache 2.0 and MIT are compatible.

          Being incompatible with Linux's GPLv2 license should set off alarm bells. Of course, it has not for the ZFS on Linux developers. I did have to wait for real-world examples from an SSD or hard drive vendor to turn up; WD putting SMR in Red drives was the first cab off the rank, and Seagate is also doing SMR in its normal consumer drives. There are going to be more issues like this. All of them would have been, and would be, avoided if ZFS on Linux had fixed the license so their code could be mainlined and get into the research labs where it needs to be.



          • #25
            Originally posted by tiennou View Post
            For example, there was a noticeable improvement in 0.8.4 (https://github.com/openzfs/zfs/pull/9749) which I haven't tested yet.
            It's already in Ubuntu 20.10 and should be backported to 20.04 soon:
            == SRU Justification == Upstream commit 31b160f0a6c673c8f926233af2ed6d5354808393 contains AES-GCM acceleration changes that significantly improve encrypted performance. Tests on a memory-backed pool show performance improvements of ~15-22% for AES-CCM writes, ~17-20% for AES-CCM reads, ~34-36% for AES-GCM writes and ~79-80% for AES-GCM reads on a Sandy Bridge x86-64 CPU, so this looks like a promising optimization that will benefit a lot of users. == The fix == Backport of upstream 31b160f0a6c673c8f926...
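
            If you want a rough feel for GCM throughput on your own CPU before the backport lands, a userspace micro-benchmark is easy. Note this exercises the `cryptography` Python library, not the in-kernel ICP code path the patch accelerates, so treat the number as indicative only:

                import os, time
                from cryptography.hazmat.primitives.ciphers.aead import AESGCM

                key = AESGCM.generate_key(bit_length=256)
                aes = AESGCM(key)
                nonce = os.urandom(12)           # reusing a nonce is fine for a benchmark,
                block = os.urandom(1024 * 1024)  # but never do it for real data

                start = time.perf_counter()
                for _ in range(256):             # encrypt 256 MiB in 1 MiB calls
                    aes.encrypt(nonce, block, None)
                print(f"AES-256-GCM: {256 / (time.perf_counter() - start):.0f} MiB/s")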



            • #26
              Originally posted by oiaohm View Post

              Is that "more robust RAID5/6/7" in fact true? The answer is no, it is not.
              (link to a Phoronix article)

              Btrfs's own built-in RAID5/6/7 has major bugs, to the point that you are told to use the Linux kernel's software RAID instead, but that is not the end of the story. ZFS's zpool RAID5/6/7 falls over on SMR drives; interestingly enough, Btrfs RAID5+ and the Linux kernel's own software RAID do not.

              Yes, people have found that a ZFS RAID5 zpool has an insane rebuild time when given an SMR drive.
              I can't follow you. BTRFS might be more performant on SMR/zoned devices than ZFS. However, I would prefer ZFS's more robust RAID5/6/7 support (more below).

              Originally posted by oiaohm View Post
              You missed two very big advantages:
              1) It is in the upstream Linux kernel, so it benefits from upstream QA hardware access: issues like the SMR problems get detected while the hardware is still in development labs, and the patches to fix them come from those labs.
              2) It is under a license that all major hard drive and solid-state storage vendors accept.
              Even though I don't want to enter into the GPL vs. CDDL dispute, I agree with you that the ZFS license issue is a problem, and the better SMR/zoned support is a result of BTRFS being mainlined.

              However, I think that if market pressure pushes ZFS, these issues will be solved. The support of ZFS in Ubuntu is a signal in this direction.

              Frankly speaking, considering Oracle's effort on "Unbreakable Linux", I don't understand why ZFS is not released under a more GPL-compatible license. Surely there would be tons of legal issues.



              • #27
                Originally posted by oiaohm View Post
                Yes, people have found that a ZFS RAID5 zpool has an insane rebuild time when given an SMR drive.
                I actually imagine this is very true. You want to avoid these drives for data integrity reasons anyhow, though. Don't trust magic invisible firmware to write your blocks out.. it's not a good idea. (And I'm well aware it's done a lot; that doesn't mean it's a good idea.)

                As for Linux upstream, as I mentioned before: Windows and macOS now have parity status in the OpenZFS project, so.. upstreaming to Linux isn't really that important when you also can't upstream to macOS or Windows. (Who knows, Apple might take it.. they already have CDDL code in their OS with dtrace.)

                But regardless, Linux isn't special here and will just be treated like a closed platform.



                • #28
                  Originally posted by kreijack View Post
                  I can't follow you. BTRFS might be more performant on SMR/zoned devices than ZFS. However, I would prefer ZFS's more robust RAID5/6/7 support (more below).
                  The Linux kernel dm system was also modified for SMR; the mainline stuff works with SMR. And yes, the feature advantage of zpool is slowly being eroded by Stratis work on top of the dm layer.

                  Originally posted by kreijack View Post
                  However, I think that if market pressure pushes ZFS, these issues will be solved. The support of ZFS in Ubuntu is a signal in this direction.
                  The problem is that the Ubuntu bit here is false hope. ZFS got into FreeBSD while hard drive vendors like WD were using it in their development labs, and the result was that, due to the CDDL license, they removed it from all testing back then. So hard drive vendors are going to keep ZFS out of the lab even with ZFS in Ubuntu. The only way to get it into the hard drive makers' labs is to change the license and become truly mainline-Linux-kernel compatible.

                  Originally posted by kreijack View Post
                  Frankly speaking, considering Oracle's effort on "Unbreakable Linux", I don't understand why ZFS is not released under a more GPL-compatible license. Surely there would be tons of legal issues.
                  Really, it is major legal trouble from the storage vendors' side: they are not going to bring something into their labs that could carry hidden patents and cause trouble, when they have no patent pool like OIN to hide behind if required. The reality is that ZFS's license status, and its possible patent status, locks it out in the cold from good support.

                  Originally posted by k1e0x View Post
                  I actually imagine this is very true. You want to avoid these drives for data integrity reasons anyhow, though. Don't trust magic invisible firmware to write your blocks out.. it's not a good idea. (And I'm well aware it's done a lot; that doesn't mean it's a good idea.)
                  Except you cannot do that with ZFS currently unless you are on Linux and using the Linux DM stack. Besides, ZFS breaks on SMR even when you are not using the magical black-box firmware.

                  https://www.kernel.org/doc/html/late.../dm-zoned.html

                  Yes, Linux has the means to run host-managed SMR and make it appear to unmodified filesystems like ZFS as an old-style drive, and yes, ZFS will still break if you do this. Btrfs, XFS and ext4 under Linux have all had patches to support SMR, as has the software RAID that is part of the Linux kernel.

                  Interesting, right, that dm-zoned in the Linux kernel is under an MIT license and was made as a joint work between Samsung and WD. So you can pretty much bet that the firmware of SMR drives running in drive-managed or host-aware mode uses the same code, particularly given that the performance behaviour is basically identical.

                  Originally posted by k1e0x View Post
                  As for Linux upstream, as I mentioned before: Windows and macOS now have parity status in the OpenZFS project, so.. upstreaming to Linux isn't really that important when you also can't upstream to macOS or Windows. (Who knows, Apple might take it.. they already have CDDL code in their OS with dtrace.)
                  The question is: are macOS or Windows used by those making storage devices these days? The answer is no. Are hard drive and SSD vendors at this stage investing money in making Windows or macOS support for SMR or other new drive tech great? The answer, again, is no.

                  One of the last holdouts doing FreeBSD-only NAS has recently released a Debian Linux based product. Also, OpenZFS on Windows and macOS might as well be a different filesystem given how far behind ZFS on Linux they are falling feature-wise. That feature gap will get worse when FreeBSD moves over to the ZFS on Linux codebase as well.

                  The cross-platform support of ZFS, whether you like it or not, k1e0x, is falling apart.

                  Originally posted by k1e0x View Post
                  But regardless, Linux isn't special here and will just be treated like a closed platform.
                  Except in this case Linux is special: Linux is the storage vendors' go-to operating system these days. If you treat Linux like a closed-source platform when making a filesystem, expect storage vendors like Samsung, WD, Seagate.... to treat you as some third-party black box they cannot see the contents of, and tough luck if they break you.

                  I like WD's recent statement on Red drives: they said that, per their testing, NAS solutions should be able to work fine. Of course, that was because they tested with the mainline Linux kernel, and that was the end of their testing. That's right: no testing of whether what they did harmed Windows, OS X, FreeBSD, or third-party Linux drivers like ZoL. The bad news here is that all storage vendors are going this way, and are legally forced this way.

                  Really, k1e0x, the path ZFS is on is not workable long-term.

                  How many hits will ZFS users take before they migrate to something other than ZFS, to get away from the lack of hardware vendor support the license problem is causing, particularly with storage media?

                  Or will the ZFS open source developers wake up to the fact that they have no choice but to change the license, so their users stop getting harmed and they get support from hardware vendors?

                  The reality, k1e0x, is that all you are saying is that we can keep our heads in the sand.



                  • #29
                    Originally posted by oiaohm View Post
                    The Linux kernel dm system was also modified for SMR; the mainline stuff works with SMR. And yes, the feature advantage of zpool is slowly being eroded by Stratis work on top of the dm layer.

                    [...]

                    The reality, k1e0x, is that all you are saying is that we can keep our heads in the sand.
                    Yes, all my dreams are crashing down. lol whatever.

                    My only response here is a technical one: you can just choose not to buy SMR for datacenter use. It seems pretty sketchy to me, and I think it makes ZFS's checksums *MORE* important, not less.

                    I think the hard drive vendors are being lazy with this one.. Fixing a physics problem in firmware is a pretty poor solution. Reminds me of Boeing trying to fix the 737 MAX with firmware.. look how well that went.

                    I have a feeling SMR is going to be a short-lived, error-prone technology.
                    Last edited by k1e0x; 11 June 2020, 07:08 PM.



                    • #30
                      Originally posted by k1e0x View Post
                      My only response here is a technical one: you can just choose not to buy SMR for datacenter use. It seems pretty sketchy to me, and I think it makes ZFS's checksums *MORE* important, not less.
                      This is head-in-sand. The highest-density data hard drives for the datacentre are all SMR.

                      We are not talking about a small difference here. SMR allows at least 25% more storage on the same drive mechanism. Some early SMR drive models let you flash CMR firmware onto them, but they instantly lost 25%+ of their capacity.

                      Originally posted by k1e0x View Post
                      I have a feeling SMR is going to be a short-lived, error-prone technology.
                      Hard drive vendors have already given their roadmap for drives ten years into the future, and that roadmap includes the end of CMR production.

                      Remember, with SMR you need roughly 20% less material (1/1.25 of the platters) to provide the same capacity; this is why SMR is appearing in desktop drives hidden behind firmware (drive-managed SMR) so that Windows or OS X keeps working. People want cheaper hard drives.

                      The fun part is that SMR's higher density can in fact give higher read and write speeds than the prior CMR/PMR (depending on which vendor's name you use for the old tech). The big thing here is that SMR does not like random writes.



                      Yes, SMR behaviour is all documented.

                      For a well-designed copy-on-write filesystem that is SMR-compatible, SMR should be a perfect match, in fact giving more storage and more performance than the prior tech. The problem is when the filesystem drives the storage in ways that are incompatible with SMR; that really hurts badly on SMR drives.

                      Please note that this issue of firmware being used to paper over SMR-like limitations does not start with SMR. The erase blocks inside SSDs have the same behaviour problem as SMR zones: complete flash blocks have to be cleared as a whole, and they are not 4k sectors. Yes, SSDs stalling out with ZFS when getting full is the same problem: randomly writing to something that does not like random writes.

                      Sorry, ZFS was designed for CMR/PMR hard drives and RAM drives, devices that are happy with random writes. The problem is that SMR hard drives and lots of modern SSDs are not happy with random writes.
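
                      As a toy illustration (assuming a hypothetical 256 MiB zone and a naive drive that must rewrite the rest of the zone on any in-place update; real drive-managed firmware softens this with translation layers and CMR cache regions):

                          ZONE = 256 * 2**20   # hypothetical SMR zone size in bytes
                          WRITE = 4 * 2**10    # one random 4 KiB in-place write

                          # Worst case: the write lands at the start of the zone, forcing a
                          # read-modify-write of everything behind it in the zone.
                          print(f"worst-case write amplification: {ZONE // WRITE}x")  # 65536x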

                      Technology has passed the current ZFS design by. XFS and ext4 do show that it is not impossible for ZFS to be fixed for SMR, but you will end up with on-disk differences from the prior version. Also, you will not have access to the next generation of drive tech while ZFS remains under its current license, so it is going to keep getting hit from left field by these things until that license changes.

