Originally posted by NobodyXu
I wonder what the core usage would be like on compressed deduped BTRFS when 6 apps that use the 1GB GNOME overlay get that updated... will it decompress it, recompress it, find the duplicate compressed blocks, avoid the writes, so the only penalty vs status quo is decompressing 1TB and compressing 7TB for the copy operations at install time?
Anyhoo... that's doable. The tradeoff at runtime is between the extra disk and RAM usage for the metadata of the per-app combined trees, versus the performance/complexity cost of a union filesystem.
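As a rough sketch of the "per-app combined tree" idea (all paths and names here are made up for illustration): unpack the shared runtime once, then materialise each app's tree from it with `cp --reflink=auto`, which reflinks on filesystems that support it (btrfs, XFS) and silently falls back to a plain copy elsewhere.

```shell
set -eu
base=$(mktemp -d)

# Pretend this is the shared 1GB runtime, unpacked once.
mkdir -p "$base/runtime/usr/bin"
echo 'gnome-lib' > "$base/runtime/usr/bin/libfoo"

# Build one app's combined tree from it. --reflink=auto shares extents
# where the FS supports it, so the "copy" costs only metadata there.
mkdir -p "$base/apps"
cp -a --reflink=auto "$base/runtime" "$base/apps/app1"

cat "$base/apps/app1/usr/bin/libfoo"
```

The metadata cost the comment mentions is exactly this: every app tree carries a full set of inodes and directory entries even though the data extents are shared.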
It gets messy when you update the app or one of its dependencies, though: you'd need to extract just the changes in the RW tree, set them aside, tear down the combined tree, rebuild it, then re-apply the previously set-aside changes. Alternatively, keep a database of where each file came from and manage files individually within the writable tree.
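The stash/rebuild/re-apply dance above could look roughly like this (a minimal sketch with invented paths; a real tool would track provenance properly rather than diffing against the runtime):

```shell
set -eu
d=$(mktemp -d)
mkdir -p "$d/runtime" "$d/combined"
echo v1 > "$d/runtime/lib.so"
cp -a --reflink=auto "$d/runtime/lib.so" "$d/combined/lib.so"
echo local-config > "$d/combined/app.conf"   # app-local change, not from the runtime

# 1. Set aside anything in the combined tree that isn't a pristine runtime file.
mkdir "$d/stash"
( cd "$d/combined" && find . -type f ) | while read -r f; do
    if ! cmp -s "$d/runtime/$f" "$d/combined/$f" 2>/dev/null; then
        mkdir -p "$d/stash/$(dirname "$f")"
        mv "$d/combined/$f" "$d/stash/$f"
    fi
done

# 2. Tear down and rebuild the combined tree from the updated runtime.
echo v2 > "$d/runtime/lib.so"     # the runtime update lands
rm -rf "$d/combined"
cp -a --reflink=auto "$d/runtime" "$d/combined"

# 3. Re-apply the stashed local changes on top.
cp -a "$d/stash/." "$d/combined/"
```

After this, the combined tree has the updated runtime file and the local change survives, which is the whole point of the stash step.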
There are various optimisations possible with each FS. E.g. with ext4 hardlinks you could update the extracted inodes in place, and every "copy" would be updated like magic... which is normally a hazard, but in this case it would be awesome lol. Reflinks would turn into CoW copies if you tried that, but once you'd replaced all the child reflinks the old CoW data would be orphaned and garbage-collected. And btrfs/ZFS get native dedupe on top.
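The hardlink-vs-reflink distinction is easy to demonstrate: a hardlink is the same inode, so rewriting the data in place (truncate-and-write, not replace-by-rename) updates every link at once, while a reflinked (or plain) copy is a separate inode and stays untouched.

```shell
set -eu
t=$(mktemp -d); cd "$t"

# Hardlink: same inode, so an in-place rewrite shows up through both names.
echo v1 > shared
ln shared app1-view
echo v2 > shared          # '>' truncates the existing inode in place
cat app1-view             # app1-view sees v2 too

# Reflink (plain copy on filesystems without reflink support): separate
# inode, so a later write to one side copies-on-write and the other is
# untouched.
cp --reflink=auto shared app2-view
echo v3 > shared
cat app2-view             # still v2
```

Note the caveat the comment hints at: most package managers replace files by rename, which allocates a new inode and silently breaks the hardlink trick.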
So... just write a tool to manage the app runtime/chroot/container trees in place. Make sure to take advantage of the benefits of each supported filesystem, and I'm sure it would get adopted. There would be meaningful runtime performance benefits for file metadata-intensive applications so long as RAM wasn't in short supply.
Or just keep using a union filesystem to do that heavy lifting. Literally job done.
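For reference, the union-filesystem route on Linux is a single overlayfs mount (this is the standard mount invocation; it needs root or a user namespace, and the directory names here are placeholders):

```shell
# Shared runtime as the read-only lower layer, per-app RW layer on top.
# Reads fall through to lowerdir; writes land in upperdir via copy-up,
# so the runtime stays pristine and the app's changes are self-contained.
mount -t overlay overlay \
      -o lowerdir=/var/lib/apps/runtime,upperdir=/var/lib/apps/app1/rw,workdir=/var/lib/apps/app1/work \
      /var/lib/apps/app1/merged
```

Updating the runtime then just means swapping lowerdir and remounting, which is exactly the complexity the combined-tree approach trades metadata overhead to avoid.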