ZFS On Linux 0.8.3 Released With Many Fixes
-
Originally posted by pkese
Why are the debates always so emotional whenever someone mentions ZFS (or ZFS and, heaven forbid, btrfs)?
E.g. no one is yelling at each other when there's a debate about ext4 vs. XFS...
P.S. I'm not trying to understand the factual and technical part of the story. I'm trying to understand the emotional part. It seems as if someone is getting hurt or something?
People using btrfs or ZFS are aware of this to some extent, and usually neckbeards of some kind, so they are (or at least were) proud of their choice. If the filesystem fails them, it's not just a failure; it's a "you let me down, son" situation where their pride and self-assessed value also take a hit, and they react in the only logical way: hissing and hating the filesystem hard to protect themselves from further emotional harm.
I mean how are you supposed to react to massive disappointment?
-
Originally posted by starshipeleven
Are there people actually trusting a filesystem tool to set up a shared folder?
Originally posted by starshipeleven
I guess it's OK for basic stuff, but I'm not trusting a middleman to tame NFS and Samba for what I usually use them for.
Originally posted by starshipeleven
Yeah, but it's a virtualization appliance distro (i.e. it's a host for KVM virtual machines), so its abilities to share folders are somewhat limited.
I mean, OK, it is Debian, so you can just use SSH to do things, but I wouldn't mind something that looks like the FreeNAS or OpenMediaVault web interface for setting up shares and such; it's an appliance, after all.
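For what it's worth, the "filesystem tool" side of this is real: ZFS exposes sharenfs and sharesmb dataset properties that hand share setup to the filesystem itself. A minimal sketch, assuming a hypothetical pool named tank and working NFS/Samba services:

```shell
# Hypothetical pool/dataset names; assumes ZFS plus NFS/Samba are installed.
zfs create tank/media                 # dataset to be shared
zfs set sharenfs=on tank/media        # export over NFS via the filesystem property
zfs set sharesmb=on tank/media        # export over SMB (Samba must be configured)
zfs get sharenfs,sharesmb tank/media  # verify what the filesystem is exporting
```

This is exactly the "middleman" being debated above: convenient for basic exports, but it only passes simple options through to the underlying NFS/SMB servers.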
-
Originally posted by numacross
Have you tried using Proxmox and its web interface? Many things can be done from it without touching SSH. Sadly, it's a terrific tool that often flies under the radar.
It's aimed at business-y environments, so it has a bunch of very cool stuff for KVM clustering and for setting up a distributed filesystem (Gluster?) across your cluster, but I frankly have no use for that, as I have a single system. Its web interface for VM management is OK, but it's not better than virt-manager (especially the later versions, which let me edit the config file directly if I really need to), and I already know how to use virt-manager, so I didn't feel like learning a whole new system.
For the KVM virtualization server I settled on openSUSE (which is what I'm most familiar and aligned with at this point in time). All the VM management happens through my PC's local virt-manager GUI, which acts as a frontend for the server's libvirt, and any shared folder setup was done mostly with YaST's configuration tools (its ncurses GUI over SSH is good enough for configuration jobs) and then adjusted a little by hand for tunables and such.
- Likes 1
-
Originally posted by lu_tze
The difference is that btrfs hasn't eaten my data yet, but ZFS has already caused several issues.
If you can't figure it out, you can recover it by booting an Ubuntu 19.10 live USB and running zpool import.
I've run root on ZFS on Linux for over 4 years (on Gentoo). I know ZFS and I know what I'm doing, though... Dem dragons be too scared of me, maybe.
Last edited by k1e0x; 24 January 2020, 08:24 PM.
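The live-USB recovery mentioned above can be sketched roughly like this (the pool name tank is hypothetical; Ubuntu 19.10 ships ZFS 0.8.x in zfsutils-linux):

```shell
# From an Ubuntu 19.10 live session (pool name is hypothetical):
sudo apt install zfsutils-linux   # ZFS userland tools for the live environment
sudo zpool import                 # with no arguments: scan and list importable pools
sudo zpool import -f tank         # force-import a pool last used on another system
sudo zpool status tank            # check pool health before touching any data
```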
- Likes 3
-
https://askubuntu.com/questions/9179...d-after-reboot
Yesterday betapool started getting errors. I don't know the root cause and, to be honest, I'm quite disappointed that ZFS did not prove to be plug-and-play for me. When I realized that something was going wrong, I ran sudo zpool status -x and got two errors in betapool: one referred to a file in the pool, the other to <metadata>. I tried to do some diagnostics, but most of my commands against that pool just hung in "D" (uninterruptible IO wait) in ps aux. sudo reboot hung as well, so I did a hard reset.
No, let me explain what I think is going on with issues like this. They are actually pretty common, and it's a pitfall I think a lot of people trying ZFS out might run into.
A) The user is using a VM or a RAID controller with a write cache. ZFS will go apeshit over write caches it does not know about, because it expects to be able to do atomic writes. If you use ZFS, make -SURE- your write cache is OFF in a VM, and make -SURE- your RAID controller card is set to JBOD mode. People think write caches are FAST!, so VMware and everyone else often enables them by default. In reality ZFS is just fine without one and can even use a SLOG device for this itself, but it has to know whether its operations are actually, physically on disk or not. If a cache lies to it, it will shit the bed, so make sure that cache is OFF. (ZFS has its own ARC cache that is way better anyhow.)
B) If the drives are good, then the only other possibility is that the user has memory corruption. A bad block in memory will produce a bad write with any filesystem; ZFS isn't special here.
Last edited by k1e0x; 24 January 2020, 09:24 PM.
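Following that advice, the cache state can be checked from the command line; the device names and guest image path below are hypothetical:

```shell
# ATA drives: query and disable the on-disk write cache.
sudo hdparm -W /dev/sda           # show the current write-cache setting
sudo hdparm -W 0 /dev/sda         # turn it off
# SAS/SCSI drives: the Write Cache Enable (WCE) mode-page bit.
sudo sdparm --get=WCE /dev/sdb    # read the bit
sudo sdparm --clear=WCE /dev/sdb  # clear it
# QEMU/KVM guests: bypass the host page cache for the virtual disk.
qemu-system-x86_64 -m 2048 -drive file=guest.img,format=raw,cache=none
```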
- Likes 5
-
Originally posted by robojerk
I remember a ton of posts saying NOT to use btrfs with certain features (like striping), as it can just lose data, and that ZFS is the only option for those features.
-
Originally posted by starshipeleven
Wait, what does a filesystem have to do with SELinux, NFS, and SMB support?