Originally posted by make_adobe_on_Linux:
Hardware:
- If possible, always prefer ECC over fast RAM; all technicalities aside, it protects you from getting trash written from RAM to your pools.
- ZFS uses a lot of RAM by default (unless fine-tuned), so if you don't want to spend hours with arcstat, just set your minimum at 32GB of RAM for regular home usage (see the ARC sketch after this list).
- If you want encryption, get a reasonably modern CPU with at least 4 cores and AES-NI; Zen-based CPUs are great choices.
- If you plan to use NVMe drives as data disks rather than as ZIL/SLOG devices, set your minimum requirement to at least Threadripper, or X299 if you love wasting money. ZFS can handle RAID on NVMe on any system and doesn't require BIOS support or dongle extras, but regular desktop boards lack the PCIe bandwidth, so you will always be limited to the speed of one drive or worse, depending on the motherboard.
- Never use ZFS on RAID 0 or with a single drive: you pay all the downsides of CoW while, without redundancy, ZFS can detect corruption but not self-heal it, so you get basically zero benefit.
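On the RAM point above: if you would rather cap the ARC than throw 32GB at the box, here is a minimal sketch for OpenZFS on Linux. The path and parameter name are standard OpenZFS; the 8 GiB value is a placeholder, not a recommendation.

    # Watch ARC size and hit rate live (arcstat ships with OpenZFS)
    arcstat 1

    # Cap the ARC at 8 GiB (value is in bytes); takes effect at runtime
    echo 8589934592 | sudo tee /sys/module/zfs/parameters/zfs_arc_max

Making this persistent across reboots is shown in the module-parameter sketch at the end of the post.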
Tuning:
- Know your data; depending on your data, performance can be great or unbearable trash.
- Not all properties on a pool are for you; just use what you need.
- Never, ever write data straight to a bare pool; datasets exist for a reason, and if you don't use them you should reconsider why you are using ZFS to start with (see the dataset sketch after this list).
- Compression is a great tool but a deceptive one. On a dataset with lots of office files or other highly compressible files (say, your documents folder), compression works great and boosts your transfer rate because you save a lot of bandwidth. But on a dataset full of incompressible files, like videos or already-compressed archives, enabling compression will skyrocket your latency with zero bandwidth savings; you are wasting CPU cycles for no reason (see the compression sketch after this list).
- Deduplication is a nice feature, but it requires huge amounts of RAM/CPU and can make latency spike harshly if misused. It should only be used on datasets with lots of small files that you know are redundant and compressible (like a Samba share where people save office files), or on datasets with big binaries that have a lot in common, like ISOs or virtual machine disks when you run several instances of the same OS (see the dedup sketch after this list).
- Large dnodes (dnodesize) should always be set to auto unless you have a very specific reason not to (like Solaris compatibility). This and the next few property items are collected into one sketch after this list.
- atime=off, or atime=on plus relatime=on if something on your system still needs access times; this one doesn't need much explanation.
- recordsize is a tricky one. My rule: 16K for certain databases, 128K for general datasets full of small compressible files, and 1M for datasets where most files are bigger than 1M (videos, ISOs, etc.). If you don't get this right you will see low performance and/or very high fragmentation (see the recordsize sketch after this list).
- sync always stays at standard unless you really know what you are doing.
- xattr=sa, acltype=posixacl and aclinherit=(up to you) have worked great for me through the years, but as always, check the documentation first.
- Encryption requires testing: depending on your data, compression and recordsize it can be great or a slow dog, so run some tests before you blindly start encrypting (see the encryption sketch after this list).
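On the bare-pool point: a minimal sketch, assuming a hypothetical pool called tank; the dataset names are just examples.

    # Create per-purpose datasets instead of dumping files into /tank directly;
    # each dataset gets its own properties, snapshots and quotas
    sudo zfs create tank/documents
    sudo zfs create tank/media
    sudo zfs list -r tank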
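On compression per dataset, continuing with the hypothetical tank pool; lz4 is a common cheap default, but check what your OpenZFS version ships.

    # Compressible data: enable compression (lz4 costs little CPU)
    sudo zfs set compression=lz4 tank/documents

    # Incompressible data (video, archives): leave it off
    sudo zfs set compression=off tank/media

    # Later, check what compression actually bought you
    zfs get compressratio tank/documents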
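On dedup, a sketch with a hypothetical tank/vmshare dataset; remember the dedup table (DDT) lives pool-wide and eats RAM accordingly.

    # Only on a dataset you know holds redundant data
    sudo zfs set dedup=on tank/vmshare

    # Watch the DDT and the ratio; if the table outgrows RAM, latency dies
    sudo zpool status -D tank
    zpool list -o name,dedupratio tank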
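On recordsize, the three-tier rule above as a sketch with hypothetical dataset names; set it at creation time, since it only applies to newly written files.

    # Database with small random I/O (match your DB's page size where possible)
    sudo zfs create -o recordsize=16K tank/db

    # Mixed small compressible files: the 128K default is already right
    zfs get recordsize tank/documents

    # Large sequential files (video, ISOs)
    sudo zfs create -o recordsize=1M tank/video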
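The dnode/atime/xattr/ACL items above collected into one sketch; these are all real OpenZFS properties, the dataset is hypothetical, and the aclinherit value is only a placeholder since that one is up to you.

    sudo zfs set dnodesize=auto tank/documents
    sudo zfs set atime=off tank/documents        # or atime=on plus relatime=on
    sudo zfs set xattr=sa tank/documents
    sudo zfs set acltype=posixacl tank/documents
    sudo zfs set aclinherit=passthrough tank/documents   # placeholder: pick your own
    # sync stays at the default unless you really know better
    zfs get sync tank/documents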
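On encryption, a minimal test sketch; encryption=aes-256-gcm and keyformat=passphrase are standard OpenZFS options, while the dataset name and the dd run are only illustrative, not a proper benchmark.

    # Encryption is chosen at creation time; it cannot be switched on later
    sudo zfs create -o encryption=aes-256-gcm \
                    -o keyformat=passphrase \
                    tank/secure

    # Crude throughput check (keep compression off here, or zeros will lie)
    sudo zfs set compression=off tank/secure
    dd if=/dev/zero of=/tank/secure/test bs=1M count=2048 status=progress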
FAT WARNING:
- Worth repeating: never use ZFS on RAID 0 or with a single drive (see above).
- Most changes made to a dataset only affect new or modified files, so be careful to make most of your changes before adding data to datasets/pools (see the rewrite sketch below).
- A lot of fine-tuning can also be done through kernel module parameters, but that is way more complex than a simple post can cover (minimal example below).
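On the new-files-only warning: existing files keep the recordsize/compression they were written with until they are physically rewritten. A sketch of the simplest rewrite, with a hypothetical file name; copying pushes the data back through the dataset's current settings.

    # After changing e.g. compression or recordsize, old blocks stay as written.
    # Rewriting a file applies the current dataset settings to it:
    cp -a /tank/documents/report.odt /tank/documents/report.odt.new
    mv /tank/documents/report.odt.new /tank/documents/report.odt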
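And a taste of the module-parameter side; /etc/modprobe.d is the standard mechanism on Linux, and the values here are placeholders, not recommendations.

    # /etc/modprobe.d/zfs.conf -- read at module load
    # (regenerate your initramfs if ZFS is loaded from it)
    options zfs zfs_arc_max=8589934592
    options zfs zfs_arc_min=2147483648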