Originally posted by Stellarwind
For torrents, it is fairly standard among clients to write downloaded data out in 16 KiB chunks, or a multiple of that. So setting the ZFS recordsize to 16K on that dataset should eliminate the read-modify-write cycle and reduce fragmentation.
This is less efficient than a much larger recordsize, both in terms of metadata overhead and compression. Luckily, a lot of clients support a separate download directory for incomplete files, so the best method is to make a separate dataset with a 16K recordsize for that temp directory. When the client moves the finished file to its final location on a dataset with a larger recordsize, the move across datasets is an actual copy, which also defragments the data.
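A minimal sketch of that layout (the pool and dataset names here are hypothetical; adjust to your own pool, and note `recordsize` is an upper bound per file, not a fixed block size):

```shell
# Hypothetical names: "tank" is the pool.
# Incomplete downloads: 16K recordsize to match typical torrent client writes.
zfs create -o recordsize=16K tank/torrents-incomplete

# Finished files: a large recordsize for better metadata and compression efficiency.
zfs create -o recordsize=1M tank/torrents-complete

# Verify the properties took effect.
zfs get recordsize tank/torrents-incomplete tank/torrents-complete
```

Then point the client's incomplete-downloads directory at the first dataset and its completed directory at the second. The move has to cross dataset boundaries for the data to actually be rewritten; a rename within one dataset would not defragment anything.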