ZFS File-System Tests On The Linux 3.10 Kernel

Originally posted by mercutio:
Originally posted by Chewi:
What's wrong with it? Not trolling, just curious.
If you run out of space in the snapshot, the whole snapshot is lost for good, so if you actually want to keep it for a longer time you need to provision at least as much space as the base volume has.
(There were other problems too, but I recall hearing that they have been fixed.)
Despite all the problems with them I still use them, but only for very specific tasks, under close supervision and so on. Btrfs snapshots are "plug and play" in comparison.
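A rough sketch of the provisioning I mean, with classic (non-thin) LVM snapshots; the volume group vg0, the volume name base, and the 100G size are only examples:
Code:
# give the snapshot as much space as the origin, so it cannot fill up
# and get invalidated while you still need it
lvcreate --size 100G --snapshot --name base-snap /dev/vg0/base
# watch the Data% column; once it reaches 100 the snapshot is invalid
lvs vg0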
-
The situation isn't really ideal at the moment.
Ext4 is a great, fast filesystem, but it is missing the data integrity checks and the self-healing mechanism of ZFS. I think those are a must-have today.
ZFS is rock stable and brings everything you want from a modern filesystem, but it is slow as hell on Linux and the development is pretty much dead.
BTRFS brings most of the features ZFS has, plus some more I really like, such as the automatic reallocation of hot data. However, the last time I used it, it seemed not ready for production.
I'm aware that which filesystem you use depends heavily on your use cases. I currently run ZFS on all my setups because stability and integrity are more important to me than performance. I would switch back to BTRFS if the problems I faced last time have been fixed by now. I'll have to run another test soon.
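For what it's worth, the checking and self-healing I mean boils down to this; a minimal sketch, assuming a redundant pool named tank (the name is only an example):
Code:
# read every block in the pool and verify its checksum; blocks that fail
# are rewritten from a good copy if the pool has redundancy (mirror/raidz)
zpool scrub tank
# show per-device checksum error counters and any files that could not be repaired
zpool status -v tank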
-
Originally posted by ZeroPointEnergy:
Ext4 is a great, fast filesystem, but it is missing the data integrity checks and the self-healing mechanism of ZFS. I think those are a must-have today.
ZFS is rock stable and brings everything you want from a modern filesystem, but it is slow as hell on Linux and the development is pretty much dead.
BTRFS brings most of the features ZFS has, plus some more I really like, such as the automatic reallocation of hot data. However, the last time I used it, it seemed not ready for production.
-
Originally posted by PuckPoltergeist:
Self-healing, like creating it new? Or is this now fixed in ZFS?
A filesystem where you can't delete files when it is full is not what I would call rock stable. And this was observed on Solaris, not OpenSolaris or Linux. I'm amused that the ZFS hype is still around.
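For anyone who hasn't seen the full-pool case, it is easy to try on a throwaway pool; a rough sketch, with a file-backed vdev and the pool name made up for the example:
Code:
# small file-backed pool just for the experiment
truncate -s 128M /var/tmp/zfs-test.img
zpool create testpool /var/tmp/zfs-test.img
# fill it until write() fails with ENOSPC
dd if=/dev/zero of=/testpool/filler bs=1M || true
# on affected versions even this rm can fail with "No space left on device",
# because deleting has to allocate new copy-on-write metadata first
rm /testpool/filler
# clean up
zpool destroy testpool
rm /var/tmp/zfs-test.img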
-
By default, fs_mark writes a bunch of zeros to a file in 16 KB chunks, then calls fsync, close, and finally mkdir.
Code:
[pid 13710] write(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 16384) = 16384 <0.000020>
[pid 13710] fsync(5) = 0 <0.033889>
[pid 13710] close(5) = 0 <0.000005>
[pid 13710] mkdir("./", 0777) = -1 EEXIST (File exists) <0.000005>
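A trace like the one above can be captured roughly like this; the mount point and the fs_mark parameters here are only examples, not the ones behind the numbers below:
Code:
# -f follows the worker processes, -T prints the time spent in each syscall
strace -f -T -e trace=write,fsync,close,mkdir \
    fs_mark -d /mnt/test/fsmark -s 1048576 -n 1000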
A 5-disk raidz1 pool:
Read Sample:
Code:
dd if=sample1.mkv of=/dev/null bs=1M
4085+1 records in
4085+1 records out
4283797121 bytes (4.3 GB) copied, 15.3844 s, 278 MB/s
Write Sample:
Code:
time dd if=/root/sample2.mkv of=test bs=1M; time sync;
9428+1 records in
9428+1 records out
9886602935 bytes (9.9 GB) copied, 35.6332 s, 277 MB/s

real    0m35.635s
user    0m0.010s
sys     0m2.666s

real    0m2.665s
user    0m0.000s
sys     0m0.077s
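For context, a pool like that is created roughly as follows; the pool and device names are only examples, not the actual setup behind the numbers above:
Code:
# 5-disk raidz1 pool
zpool create tank raidz1 sdb sdc sdd sde sdf
zpool status tank
# export/import the pool between runs so reads are not served from the ARC
zpool export tank && zpool import tank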
-
Originally posted by ZeroPointEnergy:
What are you talking about? Can you link some more information? That would be really helpful.
As for not being able to delete files on a filled filesystem, that was an in-house problem.
This has nothing to do with hype; I just emphasize data integrity, and I can't possibly be aware of every last bug. And what alternative is there? The last time I tried BTRFS you could not even mount by label from GRUB, so if a disk fails in a RAID you can't even boot anymore. That's not production-ready for me, and I don't even have to read a bug tracker to notice that.
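To be concrete about what a working setup would look like: booting a degraded btrfs RAID by label would go roughly like this; the label, devices, and mount point are made-up examples:
Code:
# kernel command line (LABEL= is resolved by the initramfs), so the box
# still comes up with a dead RAID member:
#   root=LABEL=rootfs rootflags=degraded
# the same mount done by hand from a rescue shell:
mount -o degraded LABEL=rootfs /mnt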
-
Originally posted by PuckPoltergeist:
I didn't try by label, but UUID worked for me. Label should work too; if it doesn't, it's a bug that needs to be reported.
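For reference, checking which label and UUID a btrfs filesystem actually carries, and mounting by each, looks roughly like this; the device, label, and mount point are only examples:
Code:
# list label and UUID as seen by the mount tools
blkid -s LABEL -s UUID /dev/sdb1
btrfs filesystem show
# mount by label, then the same thing by UUID
mount LABEL=myroot /mnt
umount /mnt
mount UUID=<uuid-from-blkid> /mnt   # substitute the UUID printed by blkid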