Ubuntu 20.04 Atop ZFS+Zsys Will Take Snapshots On APT Operations

  • #21
    Originally posted by anarki2 View Post
    The problem is that the use of a separate /home is never enforced and that stuff is written everywhere, not just in /home. /home is the most frequent one, but not the only one. That's why I said, "at the bare minimum".
    What does that mean? Are you talking about partitions? In that case, I think you should probably read more and write less. Both btrfs and zfs obviously support reverting parts of the filesystem.

    But for example, if you have a DB engine running, it won't exactly be thankful for rolling back to a mid-transaction state. All system-wide stuff you do will be reverted. This approach begs for apocalypse. apt has to deal with transactions within its own realm, not on the filesystem level.
    Why would everything system-wide be reverted because of an apt upgrade? That makes no sense. Do you have any sources for your claims or are you just making it up as you go along?
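    As an illustrative sketch (dataset names are hypothetical, not the exact layout the Ubuntu installer creates), reverting one part of a ZFS pool while leaving the rest alone looks like this:

```shell
# Snapshot only the system dataset before the upgrade;
# /home lives in its own dataset and is not touched.
zfs snapshot rpool/ROOT/ubuntu@pre-upgrade

# ...upgrade goes wrong...

# Revert just the system dataset. Datasets under
# rpool/USERDATA keep all changes made since the snapshot.
zfs rollback rpool/ROOT/ubuntu@pre-upgrade
```

    Note that zfs rollback refuses to cross any snapshots newer than the target unless you pass -r, which destroys them.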



    • #22
      Originally posted by elatllat View Post
      What has apt/yum/etc ever broken anyway?
      I have some horror stories with apt, but in general the goal here is to have a way to roll back an update to avoid some breaking bug in the package you have just updated (so it's not a problem of apt/yum/whatever). That can and does happen sometimes, even on distros that aren't from Canonical.
      Last edited by starshipeleven; 03-07-2020, 09:50 PM.



      • #23
        Originally posted by anarki2 View Post
        All system-wide stuff you do will be reverted. This approach begs for apocalypse.
        I would think that they will use one dataset for each important Linux directory, like they currently do by default for Ubuntu installs over BTRFS, plus you can fine tune it too when you install Ubuntu. So if you roll back your apt modified dataset, the other datasets won't be disturbed.

        In fact, I think that ZFS for root is available in 19.10, so testing it there should give us a pretty good idea as to where things are going.
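        A sketch of the kind of split being described here (dataset names are illustrative only, not the installer's actual layout):

```shell
# One dataset per area that should be able to roll back independently.
zfs create rpool/ROOT/ubuntu         # system files; an apt snapshot targets this
zfs create rpool/ROOT/ubuntu/var     # logs and caches, can be handled separately
zfs create rpool/USERDATA/home       # user data; untouched by a system rollback
```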



        • #24
          Originally posted by mskarbek View Post
          Insulting how? BtrFS is worked on and improved for how many years? It still doesn't work as it should, so I'm not surprised that Canonical started working with something that is actually usable. I'm just wondering how long this will go until Linux devs reexport everything as GPL-only to "punish" Canonical and others who would like to have something useful as a file system.
          Wrong logic. But they are in trouble long term from the upstream changes.

          "Unexporting kallsyms_lookup_name()" at LWN (still subscriber-only) gives you a major writing-on-the-wall problem.
          https://lkml.org/lkml/2020/2/21/1266
          It's this patch. The means for a non-GPLv2 module to access GPLv2-protected functions is going away. Note that a module had to fib about its license to use this GPL-exported function; now it is not being exported at all. But this is only the tip of the iceberg.

          The next iceberg coming is the plan to use more than one page table for kernel mode.

          https://www.phoronix.com/scan.php?pa...ess-Space-2020

          The work coming out of this is where it gets interesting. Once you start using more than one page table in kernel space, you can say that something in a filesystem module may only access X list of exported functions, and it does not matter whether the module is flagged as GPLv2-compatible or not, because you simply don't map into memory any function that area of code should not access. So the list of functions open to third-party kernel modules for the Linux kernel will become far more restricted in time, based on what the driver is for, even for in-tree kernel modules.






          • #25
            Originally posted by DanL View Post
            We've already had complainers like the FSF. But the only complaint that matters is if Oracle decides to sue. And Canonical has already cleared their action with the SFLC, so grounds for that would be shaky.
            You need to read the SFLC report: the issue is not legally clear, and it is in fact operating in the grey areas of the law.
            https://www.softwarefreedom.org/reso...rnel-cddl.html

            If there exists a consensus among the licensing copyright holders to prefer the literal meaning to the equity of the license, the copyright holders can, at their discretion, object to the distribution of such combinations. They would be asserting not that the binary so compiled infringes their copyright, which it does not, but that their exclusive right to the copying and redistribution of their source code, on which their copyright is maximally strong, is infringed by the publication of a source tree which includes their code under GPLv2 and ZFS filesystem files under CDDL, when that source tree is offered to downstream users as the complete and corresponding source code for the GPL'd binary.


            You should read this paragraph and take it as a huge warning. What it means is that CDDL + GPLv2 may not be compatible. If someone (Oracle) chooses to object, you then have to do what the following paragraph describes.

            In response to such an objection, all distributors would no doubt cease distributing such combinations, which it would remain perfectly legal and appropriate for users to make for themselves. An objectively reasonable good-faith belief that the conduct falls within the equity of the license, until such time as the licensors state that they are interpreting the license literally, would be a full defense against claims of intentional infringement.

            This is in the SFLC report on it. Yes, you can choose to use CDDL code for now, but you have to be willing to rip it out at a moment's notice if you get objections. So you install on ZFS, and then a future Ubuntu install disc comes without ZFS because Oracle or someone decided to dispute the CDDL + GPLv2 mix; that does not sound particularly fun to me.

            The SFLC report does not really defend your position. It's more like being able to drive 5 km/h over the speed limit for a long time because the police would not charge you, but the moment they said they were going to enforce the absolute limit, you had to change your behaviour straight away. The reality is that CDDL and GPLv2 are in exactly the same position: acceptable for now, but a single stroke of a pen could see that revoked overnight.



            • #26
              Originally posted by skeevy420 View Post

              Don't you mean Stratis?

              I wish Silverblue had a ZFS or BTRFS backend. ZFS+Zsys has a really good chance of giving both Suse's BTRFS snapshot solution and Fedora's Silverblue atomic solution a run for their money. I'm really hoping that Stratis is Red Hat's answer to ZFS+Zsys.
              Well Silverblue certainly won't have a ZFS backend, but I don't see much point in a BTRFS backend. It does "snapshots" on a filesystem level using ostree and hardlinks.



              • #27
                Originally posted by Britoid View Post
                Well Silverblue certainly won't have a ZFS backend, but I don't see much point in a BTRFS backend. It does "snapshots" on a filesystem level using ostree and hardlinks.

                This brings up one of the common defects of filesystem snapshots: normally a filesystem snapshot is not fine-grained enough. LVM and btrfs snapshotting have been tried by different Linux systems in the past, and Windows has attempted recovery snapshots, all with a lot of the same problems.

                ostree allows per-package snapshots as well as system-wide ones.

                Do notice that Silverblue makes the core operating system read-only to everything bar the update system. This forces users to keep their personal modifications away from the package-tracked files, which is kind of required to make sure OS snapshots are clean and you are not snapshotting personal files. This is really a key difference.


                You have to keep user personal files and core operating system files split, so that when you snapshot, you roll back the core OS and leave the personal files alone at first in case of a problem. Yes, it can be necessary to roll back personal files in some cases, but that must not be the default go-to.

                It's a lot harder to make user-useful snapshotting than a lot of people would think. There are many ways to make snapshotting that is in fact user-harmful, because it makes users' own files disappear from view.
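                For comparison, this is roughly how that model is exposed on Silverblue through rpm-ostree (standard commands, output omitted):

```shell
# List the current and previous deployments (whole-OS states).
rpm-ostree status

# Queue a boot into the previous deployment; user files in /home
# and local edits in /etc are left alone.
rpm-ostree rollback
systemctl reboot
```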



                • #28
                  Originally posted by oiaohm View Post


                  This brings up one of the common defects of filesystem snapshots: normally a filesystem snapshot is not fine-grained enough. LVM and btrfs snapshotting have been tried by different Linux systems in the past, and Windows has attempted recovery snapshots, all with a lot of the same problems.

                  ostree allows per-package snapshots as well as system-wide ones.

                  Do notice that Silverblue makes the core operating system read-only to everything bar the update system. This forces users to keep their personal modifications away from the package-tracked files, which is kind of required to make sure OS snapshots are clean and you are not snapshotting personal files. This is really a key difference.


                  You have to keep user personal files and core operating system files split, so that when you snapshot, you roll back the core OS and leave the personal files alone at first in case of a problem. Yes, it can be necessary to roll back personal files in some cases, but that must not be the default go-to.

                  It's a lot harder to make user-useful snapshotting than a lot of people would think. There are many ways to make snapshotting that is in fact user-harmful, because it makes users' own files disappear from view.
                  Well in the case of btrfs you would use a subvolume for /home.

                  I've been using Silverblue for 2 years now and love it. I think its update model is much better than the previous patching of the filesystem.
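                  A minimal sketch of that subvolume split, assuming a freshly made btrfs filesystem mounted at /mnt (the @ naming is just a common convention, not required):

```shell
# Separate subvolumes so a root rollback never touches user files.
btrfs subvolume create /mnt/@
btrfs subvolume create /mnt/@home

# Snapshot only the root subvolume before an update.
btrfs subvolume snapshot /mnt/@ /mnt/@_pre-update
```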



                  • #29
                    Originally posted by Britoid View Post

                    Well Silverblue certainly won't have a ZFS backend, but I don't see much point in a BTRFS backend. It does "snapshots" on a filesystem level using ostree and hardlinks.
                    I don't expect it to get either of those, but the update process being able to leverage file system level tools could make it more convenient/faster/something.

                    But speaking of BTRFS and ZFS...I dumped about 2TB of shit from a 3.5TB ZFS drive onto a 1.8TB BTRFS drive and only used 1.5TB of space.

                    I used compress-force=zstd:15 to achieve that. According to random shit from Reddit...yeah, I know...Zstd's "compress or not compress" algorithm is better than the one BTRFS uses, which is supposed to make forcing Zstd faster than just using Zstd when using BTRFS and Zstd together. About the only realistic benchmark would be to time that rsync twice, once with compress and once with compress-force, to actually test that.
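                    If anyone wants to run that comparison, a rough sketch (device and paths are placeholders; compsize is the usual tool for reporting real on-disk compressed size on btrfs):

```shell
# Copy the same data once with each mount option and compare
# elapsed time plus actual on-disk usage.
mount -o compress=zstd:15 /dev/sdX1 /mnt/test
time rsync -a /source/ /mnt/test/plain/
compsize /mnt/test/plain

mount -o remount,compress-force=zstd:15 /mnt/test
time rsync -a /source/ /mnt/test/forced/
compsize /mnt/test/forced
```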

                    On your other comment, after around a month of Silverblue I'm really liking it. You have to be pants-on-head stupid to break your system. The downside is that because it is so new and not-as-used, it has its issues and a learning curve. For example, I've found it better to manually add the RPMFusion repos and keys to their /etc/place. Doing it their way ends up with local packages, and local packages are hell on updates...layered programs can have their own issues, but those don't fuck with the revert process like the locals do. Doing it my way meant I started getting updates instead of being told there were no updates available. I feel sorry for people who haven't figured out that trick and do the "copy status output, revert layers, update the base image, relayer" method of updating...because there's a goddamn reboot after every step, and that reboot can be damn annoying if you use encryption.

                    Whoops. Wrote a novel



                    • #30
                      Originally posted by skeevy420 View Post
                      But speaking of BTRFS and ZFS...I dumped about 2TB of shit from a 3.5TB ZFS drive onto a 1.8TB BTRFS drive and only used 1.5TB of space.
                      You can also free up more space with deduplication (depending on what type of data you have):
                      Code:
                      duperemove -rdhxvb 64k --hashfile=/path/to/hashfile --dedupe-options=same /path/of/folder/to/deduplicate
                      This will take a while (probably overnight).

