Originally posted by jrch2k8
Most of the time, I keep the OS at its defaults. One of the reasons I use FreeBSD is that its defaults are, most of the time, good. Hence it makes sense to publish tests with out-of-the-box configurations: a lot of people will run their systems on defaults.
However, sometimes I have the time and interest to run tests and benchmarks. That's why I can tell you that what you copied here, which is usually the first result on Google for "zfs tuning", is wrong.
For starters, ashift isn't about HDD vs. SSD at all; it corresponds to the sector size of the disk.
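To illustrate, here is a minimal FreeBSD sketch (the device name /dev/ada0 and pool name tank are placeholders, not from the original post): ashift is the base-2 logarithm of the sector size, so it should match what the disk reports, not whether the disk spins.

```shell
# Query the disk's logical and physical sector sizes (FreeBSD)
diskinfo -v /dev/ada0 | grep -i sector

# A 4096-byte physical sector means ashift=12 (2^12 = 4096);
# 512-byte sectors mean ashift=9 (2^9 = 512)
zpool create -o ashift=12 tank /dev/ada0

# Verify the value the pool actually uses
zdb -C tank | grep ashift
```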
Setting recordsize=8K for PostgreSQL is some of the worst advice I've read on the Internet in decades. I know it's not your idea, because I've hardly ever seen a ZFS tuning guide that didn't recommend it, which makes me wonder how many people copy-paste "tuning" without serious testing.
I have PostgreSQL databases ranging from 10G to 100G in size, and I couldn't find a single case in which the 8K recordsize didn't perform worse than the default 128K. And if you turn on compression, even the theoretical basis for why 8K should be better is gone.
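The compression point can be shown with a short sketch (the pool and dataset names are hypothetical): with lz4 enabled, ZFS writes variable-sized physical blocks anyway, so matching recordsize to PostgreSQL's 8K page no longer lines up records with pages on disk.

```shell
# Keep the 128K default recordsize and enable lz4 compression
# on a dataset intended for PostgreSQL data (names are placeholders)
zfs create -o recordsize=128K -o compression=lz4 tank/pgdata

# Confirm the properties actually in effect
zfs get recordsize,compression tank/pgdata

# Once compressed, an 8K logical page rarely occupies 8K physically,
# which is why the "match recordsize to the DB page" argument collapses
zfs get compressratio tank/pgdata
```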
In general, the smaller the recordsize, the worse ZFS's performance gets. I tried various recordsizes under PostgreSQL, exim's pool, various logs, and millions of image files. In my tests on real data, lowering the recordsize made things worse regardless of the use case.
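Testing this yourself, rather than copy-pasting a guide, can be as simple as the following sketch (dataset names, pgbench scale, and durations are arbitrary placeholders; run the same workload against each dataset and compare the reported TPS):

```shell
# Create two datasets that differ only in recordsize (names are hypothetical)
zfs create -o recordsize=128K tank/pg128
zfs create -o recordsize=8K   tank/pg8

# Then, for each dataset: initialize a PostgreSQL cluster on it and run
# an identical pgbench workload, e.g.:
#   pgbench -i -s 100 bench        # populate test data
#   pgbench -c 8 -j 8 -T 600 bench # 8 clients, 10-minute run
# Compare the TPS numbers instead of trusting the guide's recommendation.
```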
The most important (and maybe the only) lesson I've learned from these arbitrary tuning guides is not to copy-paste config lines without either understanding or testing what they actually do.