Ubuntu 20.04 Atop ZFS+Zsys Will Take Snapshots On APT Operations


  • oiaohm
    replied
    Originally posted by gregzeng View Post
    1) Both BTRFS & ZFS need to be compared to NTFS.
    This is harder than you think, and it is not going to be a fair comparison in a lot of cases.

    Originally posted by gregzeng View Post
    2) Depending on the versions, each has shared properties and important differences.
    https://en.wikipedia.org/wiki/Compar...f_file_systems
    A fairly good overview of file system features is on Wikipedia. The big thing that makes this an unfair comparison in a lot of cases is that both BTRFS and ZFS have data checksumming and NTFS does not. Microsoft is discontinuing ReFS, which in a lot of ways would have been the fairer comparison to BTRFS and ZFS.

    Originally posted by gregzeng View Post
    3) All three partition types can be read and written, with varying degrees of speed, power & reliability, on some Linux operating systems.
    There are three things you need: read, write, and repair.

    Originally posted by gregzeng View Post
    4) NTFS comes under two major copyrights: Microsoft's or NTFS-3G's.
    5) Only NTFS can be read and written by Windows & Apple operating systems.
    Point 4: NTFS comes under more than two licenses. You are forgetting the limited read-only NTFS driver in the Linux kernel, the macOS NTFS driver, and the Paragon Software closed-source drivers.

    Point 5 is wrong: NTFS-3G and Paragon Software's NTFS driver can read/write NTFS under Linux. And again, you missed the important question: repair.

    Apple and Linux NTFS solutions cannot repair a damaged NTFS volume; you have to find a Windows machine to do that. So it can be dangerous to encourage more users onto NTFS, as they may be caught out not owning the software needed to repair the file system and get their data back.

    Originally posted by gregzeng View Post
    7) Seemingly untested are power usage, compression, speed, reliability, and near-full behaviours.
    This list turns out to be very hard to test. Power usage in particular: there are so many ways a driver can be optimised that the results get thrown all over the ballpark.

    The reason file system comparisons are done on one operating system with one kernel whenever possible is so you can be sure everything is optimised and security mitigated the same way; then you are looking at differences in file system function, not at some security update or one-off optimisation.

    A file system benchmark of Windows vs Linux throws in a huge set of extra variables: where is Windows in its security mitigations, and where is the Linux kernel in its security mitigations? This can swing performance by huge margins. When I say huge, security mitigations have at times cut performance on particular code paths through Windows and Linux to 1/10 of what they were before, only to be reversed later by an optimisation. So the comparison could be highly misleading to end users.



  • gregzeng
    replied
    1) Both BTRFS & ZFS need to be compared to NTFS.
    2) Depending on the versions, each has shared properties and important differences.
    3) All three partition types can be read and written, with varying degrees of speed, power & reliability, on some Linux operating systems.
    4) NTFS comes under two major copyrights: Microsoft's or NTFS-3G's.
    5) Only NTFS can be read and written by Windows & Apple operating systems.
    6) All three have file compression.
    7) Seemingly untested are power usage, compression, speed, reliability, and near-full behaviours.

    Could Phoronix arrange such a comparison?



  • oiaohm
    replied
    Originally posted by k1e0x View Post
    Why would they? Complain away. It's a done story. Not even sure what anyone would complain about.. you can fully comply with both licenses at the same time without causing harm to either.
    It's about time you stop that fib.

    From the Software Freedom Law Center (which provides legal representation and other law-related services to protect and advance Free and Open Source Software):

    If there exists a consensus among the licensing copyright holders to prefer the literal meaning to the equity of the license, the copyright holders can, at their discretion, object to the distribution of such combinations. They would be asserting not that the binary so compiled infringes their copyright, which it does not, but that their exclusive right to the copying and redistribution of their source code, on which their copyright is maximally strong, is infringed by the publication of a source tree which includes their code under GPLv2 and ZFS filesystem files under CDDL, when that source tree is offered to downstream users as the complete and corresponding source code for the GPL'd binary.

    The reality here is that you cannot fully comply with both GPLv2 and ZFS's CDDL; every proper legal review has ruled the same way. You cannot fully comply with both licenses at the same time, it is simply not legally possible. So you have to argue fair use, and the problem there is that courts rule inconsistently on fair use.

    Originally posted by k1e0x View Post
    ZFS won't be in the Linux kernel any more than it's in the macOS kernel. The project is now structured so it can support many OSes. Having it out of tree is no different than your Nvidia driver: it's a piece of code that makes your computer work, managed by your OS (distro).
    Yes, ZFS for Linux is in the same boat as the Nvidia binary driver, in that whether it is legal or not depends on fair use. The problem with the "without causing harm" side is that you are not doing what Nvidia is doing. Nvidia has developers working in the upstream Linux kernel making sure that, for the functions the Linux Nvidia binary blob driver uses, they hold the patent licenses for the technology behind them.

    Can you not work out the danger here? Let's say ZFS for Linux uses a function in Linux it should not because of patents, so your end users have not paid a patent license they should have. The copyright infringement has now caused damages/harm, so now you are guilty of both patent infringement and copyright infringement. Now let's say ZFS for Linux comes licensed under a GPLv2-compatible license: that provides some protection from the OIN group against patent attack that you do not have currently, and if there is a patent issue with technology used in the Linux kernel you cannot be hit a second time with copyright infringement.


    This is the direct legal-side problem: it is possible to do harm with a little carelessness, and that will trigger the copyright infringement problem in ways fair use will not protect you from.

    This patented-technologies problem is why it took Nvidia so long to support PRIME displays under Linux: there were a lot of internal Linux kernel functions that had to be legally reviewed for patents before Nvidia's third-party binary driver could come anywhere near them, because of the legal nightmare that would set off.

    Nvidia has developers working upstream in the Linux kernel to get early information about any possible patent problems, because such a problem would blow the fair use defence for their binary driver to bits.

    Being in a legal grey area is like being in a minefield: if you are careful you can in most cases walk through it, but it only takes one moment of carelessness, or failing to notice that some rule/mine has changed, for you to be screwed over.

    Nvidia being slow to support Wayland on Linux, and the historic bad interactions between framebuffer applications and X11 applications while using the Nvidia binary drivers, also come from these legal restrictions on what they can do in the binary driver. I guess I can expect that when the Linux kernel starts doing per-application page table work, the ZFS for Linux driver will not properly support it, so there will be a data leak around that security measure. The Nvidia binary driver on Linux is an example of the bad side effects of these restrictions, such as not having proper integration. Holding the Nvidia binary driver up as an example is not a good idea, because it demonstrates many of the problems with the route you are talking about that Nvidia, with all its funding, has not been able to fix.





  • mskarbek
    replied
    Originally posted by k1e0x View Post
    Complain away. It's a done story.
    I'm not complaining, merely stating a fact. I've been using ZoL since the 0.6.1 release (~7 years ago) and am used to this situation.





  • k1e0x
    replied
    Originally posted by mskarbek View Post
    I have had this ZFS snapshotting implemented as a DNF hook since Fedora 29; nobody cares because of the CDDL. I'm really curious how far Canonical will go on this integration route with OpenZFS and when somebody will start seriously complaining.
    Why would they? Complain away. It's a done story. Not even sure what anyone would complain about.. you can fully comply with both licenses at the same time without causing harm to either.

    ZFS won't be in the Linux kernel any more than it's in the macOS kernel. The project is now structured so it can support many OSes. Having it out of tree is no different than your Nvidia driver: it's a piece of code that makes your computer work, managed by your OS (distro).

    Originally posted by anarki2 View Post
    Unless you run nothing else during apt transactions, this is bound to cause problems. A separate /home should be mandatory, at the bare minimum.
    Run nothing else? Most likely it will just take a boot environment snapshot. Very common with ZFS-enabled OSes; no need to fear.

    ZFS uses datasets, and yes, /home is separate (sorta). Ubuntu's defaults are:
    rpool/USERDATA/user(s) -> /home/user(s)
    rpool/USERDATA/root -> /root
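    If you want to poke at this yourself, here's a rough command-line sketch assuming the default rpool layout above (the "before-apt" label is just an example; zsys manages its own snapshot names, and reverting is normally done through GRUB):
    Code:
    # inspect the dataset layout the installer created
    zfs list -r rpool
    # manually take a recursive snapshot of the system datasets before an apt run
    sudo zfs snapshot -r rpool/ROOT@before-apt
    # snapshots only reference changed blocks, so they are cheap to keep around
    zfs list -t snapshot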
    Last edited by k1e0x; 09 March 2020, 02:00 PM.



  • skeevy420
    replied
    Originally posted by starshipeleven View Post
    I guess you will not be migrating to Ubuntu even if it has zfs....
    Heh. I gave it serious consideration but just couldn't do it. But only because I installed Silverblue.

    I might actually have become an **looks around the room nervously** Ubuntu user because of Zsys.

    I happened to find a way of doing things that's been enough to drive me from both ZFS and KDE.

    It's been a weird past few days for me.



  • starshipeleven
    replied
    Originally posted by skeevy420 View Post
    Mainly just Steam games, music, movies, and stuff like that. I might be able to save a bit of space that way, but half was curiosity on zstd:15 and zstd:10 and the other half, oddly enough, was getting off of ZFS. There's only so much out-of-tree/not-with-distribution inconvenience I'm willing to deal with.
    I guess you will not be migrating to Ubuntu even if it has zfs....



  • skeevy420
    replied
    Originally posted by starshipeleven View Post
    you can also free more space with deduplication (depending on what type of data you have)
    Code:
    duperemove -rdhxvb 64k --hashfile=/path/to/hashfile --dedupe-options=same /path/of/folder/to/deduplicate
    This will take a while (probably overnight)
    Mainly just Steam games, music, movies, and stuff like that. I might be able to save a bit of space that way, but half was curiosity on zstd:15 and zstd:10 and the other half, oddly enough, was getting off of ZFS. There's only so much out-of-tree/not-with-distribution inconvenience I'm willing to deal with.



  • starshipeleven
    replied
    Originally posted by skeevy420 View Post
    But speaking of BTRFS and ZFS...I dumped about 2TB of shit from a 3.5TB ZFS drive onto a 1.8TB BTRFS drive and only used 1.5TB of space.
    you can also free more space with deduplication (depending on what type of data you have)
    Code:
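    # flags: -r recurse, -d actually perform the dedupe (not just a dry run), -h human-readable sizes,
    # -x stay on one filesystem, -v verbose, -b 64k use 64K blocks when looking for duplicates;
    # --hashfile caches checksums between runs, and --dedupe-options=same also allows
    # deduping identical extents within a single file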
    duperemove -rdhxvb 64k --hashfile=/path/to/hashfile --dedupe-options=same /path/of/folder/to/deduplicate
    This will take a while (probably overnight)



  • skeevy420
    replied
    Originally posted by Britoid View Post

    Well Silverblue certainly won't have a ZFS backend, but I don't see much point in a BTRFS backend. It does "snapshots" on a filesystem level using ostree and hardlinks.
    I don't expect it to get either of those, but the update process being able to leverage file system level tools could make it more convenient/faster/something.

    But speaking of BTRFS and ZFS...I dumped about 2TB of shit from a 3.5TB ZFS drive onto a 1.8TB BTRFS drive and only used 1.5TB of space.

    I used compress-force=zstd:15 to achieve that. According to random shit from Reddit...yeah, I know...Zstd's own "compress or don't compress" heuristic is better than the one BTRFS uses, which is supposed to make forcing Zstd faster than just letting BTRFS decide when using BTRFS and Zstd together. About the only realistic benchmark would be to run that rsync twice under time, once with compress and once with compress-force, to actually test that.
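    Something like this rough sketch would do it (the device, mount point, and paths are placeholders; compsize is a separate tool for checking the achieved ratio):
    Code:
    # time the same copy against the target mounted each way
    sudo mount -o compress=zstd:15 /dev/sdX /mnt/btrfs
    time rsync -a /source/data/ /mnt/btrfs/run-compress/
    sudo umount /mnt/btrfs
    sudo mount -o compress-force=zstd:15 /dev/sdX /mnt/btrfs
    time rsync -a /source/data/ /mnt/btrfs/run-force/
    # then compare the on-disk usage of each copy
    sudo compsize /mnt/btrfs/run-compress
    sudo compsize /mnt/btrfs/run-force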

    On your other comment, after around a month of Silverblue I'm really liking it. You have to be pants-on-head stupid to break your system. The downside is that, because it is so new and not as widely used, it has its issues and a learning curve. For example, I've found it better to manually add the RPM Fusion repos and keys to their place in /etc/. Doing it their way ends up with local packages, and local packages are hell on updates...layered programs can have their own issues, but those don't fuck with the revert process like the locals do. Doing it my way meant I started getting updates instead of being told there were no updates available. I feel sorry for people who haven't figured out that trick and do the "copy status output, revert layers, update the base image, relayer" method of updating...because there's a goddamn reboot after every step, and that reboot can be damn annoying if you use encryption.
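    For reference, that long-way-around update dance looks roughly like this (the package name is just a placeholder):
    Code:
    rpm-ostree status                 # copy down what's currently layered
    rpm-ostree uninstall some-package # revert the layers, then reboot
    rpm-ostree upgrade                # pull the new base image, then reboot
    rpm-ostree install some-package   # re-layer, then reboot again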

    Whoops. Wrote a novel

