OpenZFS 2.3-rc4 Released With Linux 6.12 LTS Support
I'm not sure what you mean by BPW. They implemented block cloning in 2.2, so they should have most of the building blocks for the type of deduplication that I was talking about.
Basically I just want to take a dataset, scan it for duplicate blocks in a one-time process (not constantly), then use block cloning to remove the duplicate blocks.
Last edited by Chugworth; 14 December 2024, 12:12 PM.
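The one-time scan described above (hash every block, find duplicates, then reclaim them with block cloning) could be prototyped roughly like this. This is only a sketch: `BLOCK_SIZE` and `find_duplicate_blocks` are made-up names, ZFS record sizes actually vary per dataset, and the reclaim step itself (cloning via OpenZFS block cloning, e.g. `copy_file_range`/`FICLONERANGE` on Linux) is deliberately omitted.

```python
import hashlib

BLOCK_SIZE = 128 * 1024  # illustrative; real pools use a per-dataset recordsize

def find_duplicate_blocks(paths):
    """One-time scan: map each block hash to the (file, offset) pairs
    holding identical data. Hypothetical helper, not an OpenZFS tool."""
    seen = {}
    for path in paths:
        with open(path, "rb") as f:
            offset = 0
            while block := f.read(BLOCK_SIZE):
                digest = hashlib.sha256(block).hexdigest()
                seen.setdefault(digest, []).append((path, offset))
                offset += len(block)
    # Keep only hashes that occur more than once: those are clone candidates
    return {h: locs for h, locs in seen.items() if len(locs) > 1}
```

A real tool would then punch out each duplicate extent and replace it with a clone of the first copy, which is where the 2.2 block-cloning machinery would come in.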
I see a lot of chat about the deduplication issue, but here's a question... Do you know how many times a datacenter will run deduplication to make sure it worked correctly? Several.
Deduplication is rarely done on workstations and desktops at all. Claiming deduplication is a problem for everyone is like spitting in a lake and saying it holds more liquid now, forgetting about evaporation, pumping, runoff, draining, and so on.
And yes, out-of-tree is actually easier to maintain with DKMS. DKMS allows version control, which is partly why LTS kernels exist: to let stable kernels stay stable against things a driver change could destabilize. Is it good to use DKMS with mainline kernels too? Yes, it should be done.
The only issue I ever had with ZFS involved libpcap after a rebuild: something changed in libpcap that broke the ZFS automount, and instead of mounting the volume it deleted it, nuking a test rig.
Hello, this is my first comment here after many years of reading Michael's marvellous articles.
I'm a very satisfied user of OpenZFS on my NAS and my laptops. I keep all my precious data on it (photos, videos, documents, movies, etc.). My OpenZFS setup works like a charm and has never suffered any loss in many years (at least for photos and videos, since I duplicate everything onto external disks and calculate sha512 checksums for everything that doesn't change over time, like photos and videos). I'm running OMV with the Proxmox kernel.
Snapshots, scrubbing, constant checks on writes and reads, etc. make OpenZFS the best filesystem ever for keeping data safe.
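The checksum routine described above (computing sha512 once for immutable files like photos and re-checking later against the stored digests) can be sketched in Python. The function names here are illustrative, not from any particular tool:

```python
import hashlib
import os

def sha512_of(path, chunk_size=1 << 20):
    """Stream the file in 1 MiB chunks so large videos don't need to fit in RAM."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root):
    """Digest every file under root; save this once, right after import."""
    manifest = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            full = os.path.join(dirpath, name)
            manifest[os.path.relpath(full, root)] = sha512_of(full)
    return manifest

def verify(root, manifest):
    """Return the relative paths whose current digest no longer matches."""
    return [rel for rel, digest in manifest.items()
            if sha512_of(os.path.join(root, rel)) != digest]
```

Run `build_manifest` once after importing a batch of photos, store the result, and run `verify` periodically (or before refreshing a backup) to catch silent corruption on either copy.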
Obviously I don't use deduplication, as it's a very complicated mechanism that requires a lot of RAM and scrambles data on every write... so not very safe! Same for encryption. Keep it simple if you want a stable and reliable filesystem.
"In-tree" is not a quality guarantee. At least with out-of-tree you can decide whether or not to upgrade your module to the newest version. Sometimes it makes sense not to upgrade and to wait and see whether bugs pop up in newer module versions. That is not possible in-tree.
btrfs (in-tree) has had several data corruption bugs in the past. And you cannot avoid them, because they come to you automatically with the kernel update. That is really a disaster. Even ext4 (in-tree) had silent data corruption bugs in the past (kernels 6.1.64/6.1.65); the Debian 12.3 release was delayed because of this.
bcachefs (in-tree) is so new that it is fair to assume that it will have its own bugs sooner or later.
You talk about zfs users as "out of tree fanatics"? I would rather call you an "in tree fanatic".
It's THE quality assurance. Anything else is quality assurance on behalf of some external developers.
It's hard to think of any filesystem other than ZFS that has had only one data-corrupting bug in 20 years, especially one requiring extremely difficult conditions to trigger (very tightly timed reads and writes to the same file, the file had to have holes, and it only corrupted reads, not on-disk data). If there's been another data-corrupting bug in ZFS I'd really like to hear about it, because I don't know of any others in the last 20 years (except maybe some hypothetical issues with misuse of encrypted datasets?).
Because it has had many. So yes, you are hard-pressed to think of any.
Oh wonderful. Yet another forum thread living up to the reputation of Moronix, with all of the whiny (snarky?) comments about ZFS.
Don't you think a back-and-forth battle is to be expected, given all the insane comments seen here against a truly next-gen filesystem? Only because some fanboys like to cling to a monstrous piece of out-of-kernel code instead of looking at the facts. Most people can understand that having several times the amount of code to achieve similar features means there are several times more bugs in there.
(This applies to btrfs too: it's so huge that one cannot reach any conclusion other than that the design is wrong, and force is being used to extend it.)
It's THE quality assurance. Anything else is quality assurance on behalf of some external developers.
What do you mean? In-tree quality assurance is the best? Well, you certainly have no clue.
Is an "internal" developer any better than an "external" developer? Are all these internal developers the crème de la crème of software development, with best-in-class QA processes? No, they are not. They just shipped kernel 6.12.2, which didn't boot and had to be fixed with kernel 6.12.3 just one day later. Excellent QA at work. How did 6.12.2 even get released? It wasn't tested at all.
And what about all the regressions in btrfs and even ext4 in the past? How did they test that stuff? Do you know? No, you don't. If you're really interested in learning how state-of-the-art software development works, including a full-featured test suite with bots testing new features and dozens of known pitfalls in ZFS across various operating systems, and full transparency about everything, you need to check out OpenZFS on GitHub.
In-tree, the kernel community assures the quality. Out-of-tree, you assure it yourself.
You can be as much of an activist and ZFS zealot as you want; you won't convince anyone that something not vetted by the kernel community is better to use than something that has been.
And we have already seen it: ZFS had a really bad silent data corruption bug, while the experimental bcachefs has had absolutely no bug resulting in data corruption.
This is extremely bad for ZFS, which should have shaken out all its bugs decades ago, compared to a newcomer with a spotless track record since being merged. While being experimental, meaning the author is still not completely satisfied with it, it has nonetheless proven better than ZFS.
Last edited by varikonniemi; 16 December 2024, 06:52 PM.