Couldn't care less. Invest your time in btrfs instead of this outdated piece of crap ZFS.
ZFS On Linux Runs Into A Snag With Linux 5.0
I accept the explanation as plausible. I do ask, though: what's the probability that the ZoL devs simply did not notice the deprecation 10 years ago? Does all deprecated stuff get flagged again version after version, or what? That code is quite a handful in itself, without adding the Linux kernel to it.
Originally posted by lichtenstein:
Ditto (well, less than 5 but still). I have it on my mini-server mirroring two external usb 4TB drives. I explicitly chose to run it like that instead of going for an external case that mirrors the drives itself. With its checksumming btrfs provides bitrot protection which is why I use it. It's been very stable and fast and I've had no issue with it.
BTW, I have experience with ZFS (on a separate FreeBSD machine), so I'm aware it provides similar protection, but it's a mini-server and ZFS doesn't like "mini": it needs (lots of) RAM to perform well. I could use it on my desktop (ext4 atm), but in order to reap the benefits I really would need more than one drive; otherwise zfs/btrfs can report but not correct errors.
What requires lotsa RAM? Using deduplication and L2ARC. That's it.
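To put rough numbers on why dedup specifically is the RAM hog: deduplication keeps a dedup table (DDT) entry in memory for every unique block in the pool. The figures below are my own back-of-envelope assumptions (a ~320-byte in-core DDT entry is a commonly cited ballpark, not an official ZFS constant):

```python
# Back-of-envelope sketch of dedup's RAM cost. Assumptions (not official
# ZFS numbers): ~320 bytes of RAM per in-core DDT entry, the default
# 128 KiB recordsize, and 4 TiB of unique data in the pool.
DDT_ENTRY_BYTES = 320
RECORDSIZE = 128 * 1024
pool_bytes = 4 * 1024**4

unique_blocks = pool_bytes // RECORDSIZE            # 33,554,432 blocks
ddt_ram_gib = unique_blocks * DDT_ENTRY_BYTES / 1024**3
print(f"DDT alone: ~{ddt_ram_gib:.0f} GiB of RAM")  # ~10 GiB
```

Without dedup that table simply doesn't exist, and the ARC read cache shrinks to fit whatever RAM the machine has, which is why a small-RAM box is fine for plain storage.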
aht0, thanks for clarifying. I've never used dedup.
In any case, I'm saying that btrfs is perfectly fine for single disk and for my use case, which is a simple mirror. I've had no experience with raid5, etc. Another reason why I went with it was/is because it's linux native and I expected for it to be better supported and maintained (on linux) than ZFS. Looking at the current issue (and, according to previous posters, similar issues in the past), it looks like it was the right decision.
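The detect-versus-correct point is the whole reason for the mirror: with one copy, a checksum mismatch can only be reported, while a second copy lets the filesystem rewrite the bad block from the replica whose checksum still matches. Here is a toy Python sketch of that idea (an illustration of checksum-based self-healing, not real btrfs/ZFS code):

```python
import hashlib

def checksum(block):
    return hashlib.sha256(block).hexdigest()

def read_with_repair(copies, expected):
    # Find any copy whose checksum still matches the stored one.
    good = next((c for c in copies if checksum(c) == expected), None)
    if good is None:
        return None              # every copy is corrupt: detect, can't correct
    for i, c in enumerate(copies):
        if checksum(c) != expected:
            copies[i] = good     # "self-heal": rewrite the bad copy
    return good

block = b"important data"
csum = checksum(block)

mirror = [b"imp0rtant data", block]        # first replica suffered bitrot
assert read_with_repair(mirror, csum) == block
assert mirror[0] == block                  # the bad replica was repaired

single = [b"imp0rtant data"]               # single disk, no redundancy
assert read_with_repair(single, csum) is None
```

This is essentially what a scrub does on a mirror: read every block, compare against the stored checksum, and rewrite any replica that fails from one that passes.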
Originally posted by lichtenstein:
aht0, thanks for clarifying. I've never used dedup.
In any case, I'm saying that btrfs is perfectly fine for single disk and for my use case, which is a simple mirror. I've had no experience with raid5, etc. Another reason why I went with it was/is because it's linux native and I expected for it to be better supported and maintained (on linux) than ZFS. Looking at the current issue (and, according to previous posters, similar issues in the past), it looks like it was the right decision.
You can see the memory usage while running Plasma 5.
Not a lot to be afraid of RAM-wise, especially in casual usage.
Originally posted by aht0:
I think it's a case of upstream breaking the internal APIs yet again, not caring in the least how it affects downstream. Happens all the time. Shit breaks because an upstream dev thinks it good to make some minor random change, and like a "butterfly effect" a bunch of stuff suddenly breaks downstream.
Does Mr. or Ms. "upstream dev" care? Not in the least.
Their pact with downstream has always been clear about what is stable and what is not. This is not a breach of that pact.
This thing was removed as the last in-kernel user was removed https://marc.info/?l=linux-kernel&m=154689892914091
Mr. "2nd in command after Linus" seems to be guided here by his own preconceptions rather than anything else; I checked the follow-up mails and reached that conclusion. The biggest problem for him seems to be that ZFS originated from Solaris (NIH).
and the follow up "Sorry, no, we do not keep symbols exported for no in-kernel users."
Because while I personally don't like this specific case (I'm no ZFS hater), I kind of understand that they have to be inflexible on basic rules and can't ignore license incompatibility.
If they start making special cases based on personal sympathy then it all falls apart pretty quick.
Ignoring license incompatibility is also a very bad thing to do, especially since Oracle is the copyright holder and can go do some old-school legal trolling if they feel like there is profit in doing so.
Not that I actually particularly care, more power to FreeBSD.
Originally posted by LaeMing:
where are their efforts?
(Whining that others aren't doing what you want without interest or recompense doesn't count as effort, shocking as it may seem!)
LLNL has contributed plenty to Linux; EDAC, Lustre, and ext4 are three examples.
Sun gave you NFS.
Oracle contributes quite a fair bit to linux, even if all they seem interested in is their UEK. They started btrfs.
I believe Proxmox contributes to Debian or Ubuntu.
Originally posted by GruenSein:
The question "So why would we do extra work to get their code to work properly?" can be answered quite easily: many people have wanted to use ZFS for a long time, and it is one of the most advanced filesystems for its purpose.
People often think open source means "I work for free for you", but it does not. What is given away free is the finished product; if it does not work with your stuff, that is your own problem.
It is not like the Linux kernel crew is asked to do ZOL a favor out of the goodness of their hearts.
Also, I don't get why they don't simply restore the symbols that ZOL uses and mark them "deprecated" or whatever until the software depending on it can adapt (assuming there is an actual reason to remove them at all).
Still, this is blown a bit out of proportion. Kernel changes to internal APIs aren't a new thing, and people using ZoL or any other out-of-tree module don't usually expect it to support a new Linux kernel version on release day, as there WILL BE breakage. The overwhelming majority of users will be on a distro with an LTS kernel, where ZoL will keep working fine for years still.
Originally posted by aht0:
Only using certain ZFS functionality makes it memory hungry. The general whining about ZFS RAM requirements is just FUD. For dead simple large-file storage the RAM requirements are minuscule; you can get by with 768 MB. Seriously.
What requires lotsa RAM? Using deduplication and L2ARC. That's it.
For basic home server/NAS use you can get away just fine without a large RAM cache.
I have a strong suspicion that most FreeNAS people are heavily overspeccing their systems the same way that gamers add RGB lights and fans.
"My tolerance for ZFS is pretty non-existant. Sun explicitly did not want their code to work on Linux, so why would we do extra work to get their code to work properly?"
That's something someone who doesn't have to deal with real users and real-world workloads might say. For at least a couple of my projects, ZFS is by far the best solution. I'm disappointed to hear stuff like this uttered by high-level kernel developers.