XFS does not care as much about your data as ext4 does. Not at all.
It also sucks with small files (with everything below 10 MB counting as small).
Yes, to me the point of storing files on a hard disk is that my data is safe. Better safe and slow than fast and unsafe. If you can get 600 MB/s but with a 0.0001% chance of corrupted data, or 50 MB/s with your data completely guaranteed safe, which solution would you choose? I know I would choose total safety. But your mileage may vary. Maybe you don't care if your data slowly gets corrupted, as long as you get high speed?
Originally Posted by movieman
Some would consider you a troll. Regarding the link about my performance problems with ZFS and Solaris: I solved the problem. I think only trolls draw conclusions about ZFS+Solaris performance from a problematic setup. It is like saying "I tried to install a beta Linux on an unsupported platform, and it didn't succeed. Hence, Linux has severe install problems. Everybody, don't install Linux, it will only be install problems" - not very clever, eh?
But OTOH, didn't you "prove" that Linux is faster than Solaris by linking to an article where they migrated old 400 MHz Sun machines to new Intel dual-core 2.8 GHz Linux machines? If you believe that proves Solaris is slower than Linux, then I understand why you believe ZFS is slow after reading about one person with a problematic install.
Has it never occurred to you that your conclusions may be wrong? I suggest you benchmark Linux on a 400 MHz CPU against Solaris on a dual-core 2.8 GHz machine - would you then accept that Solaris is faster? No? Why not? But when Linux wins the same comparison, then it is OK? Maybe now you see why it is wrong to reason the way you do.
Barriers and JFS
Please make sure to enable barriers when testing ext3 (and others).
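For anyone reproducing this, a minimal sketch of what "enable barriers" means in practice. The device and mount point below are placeholders, not from the thread; ext3 has historically defaulted to barriers off, so they must be requested explicitly.

```shell
# Hypothetical example: /dev/sdb1 and /mnt/test are placeholder names.
# Request write barriers explicitly when mounting ext3:
mount -t ext3 -o barrier=1 /dev/sdb1 /mnt/test

# Verify that the option actually took effect:
grep /mnt/test /proc/mounts
```

Benchmarking ext3 without barriers against filesystems that flush properly overstates ext3's speed at the cost of its crash safety, which is exactly the apples-to-oranges problem being flagged here.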
Also, please include JFS in the mix. It is not always the fastest, but it has very low CPU usage and, IMO, is generally faster "under load" than ext2/3/4.
Thank you. I second all of the above, but especially for JFS on Linux ("JFS2").
Originally Posted by 900Trophy
OpenSolaris and others on the same hardware - Solaris sucked:
Originally Posted by kebabbert
At least learn how to reply properly; I missed this. You're usually just making yourself look like an idiot. A known troll from osnews.com.
Originally Posted by kebabbert
Thanks for the nice benchmark.
I am shocked that ext4 is being made the default by the Linux distros. It is definitely *not* for any standard workstation or desktop. The new ext* features are not even meant to support desktop environments; they are for servers and the niche applications that actually have TB-scale files. Yeah, it is probably good for Google, Yahoo, Tripod, NSF, genome companies and the like, but those are most definitely not part of the average user base. Web servers don't typically serve multi-GB or TB-size files!
All these tests indicate that ext3 is, overall, much better.
/methinks the tests are not showing the whole truth. In practice, ext3 is dog slow (like NTFS on Windows...) and a real PITA to fsck. If ext4 is even slower, that is a regression.
JFS on Linux ("JFS2") does better. Of course, nobody is testing it (or ZFS), so the benchmark proof is not available.
Speaking of benchmarks, programs like IOzone are not simple tests that readily transform into simple bar charts representing a broad "overview" of the entire process. Testing how a filesystem works means seeing how it performs for specific programming calls (like fread, fopen, etc.). Such statistics, relatively important for programmers, mean little to the end user unless that user runs a specific application that he or she knows makes heavy use of those calls.
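To make the point concrete, here is a minimal sketch of the kind of narrow, single-access-pattern number a tool like IOzone reports: sequential reads at one fixed block size. The file size (8 MiB) and block size (4 KiB) are arbitrary assumptions for illustration, not values from the thread.

```shell
# Sketch: time one specific access pattern (sequential 4 KiB reads),
# the sort of isolated figure a benchmark like IOzone produces.
# File size and block size are arbitrary choices.
f=$(mktemp)
head -c 8388608 /dev/zero > "$f"                          # 8 MiB of test data
stats=$(dd if="$f" of=/dev/null bs=4k 2>&1 | tail -n 1)   # dd's throughput line
echo "$stats"
rm -f "$f"
```

A single figure like this says how 4 KiB sequential reads behave on one machine at one moment; it says nothing about application loading, small-file deletion, or concurrent access.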
It has been mentioned before, but I'll repeat it. For this site, timed tests for loading applications, deleting large and small files and directories, multiple access, different setups (e.g., journal on another disk), CPU usage during those tasks, etc., would be more meaningful.
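As a sketch of one such timed "real life" test, the following creates and then deletes a batch of small files and reports wall-clock time for each phase. The file count (1000) and size (4 KiB) are invented for illustration.

```shell
# Sketch of a timed small-file test: create, then delete, 1000 small files.
# Count and size are arbitrary assumptions.
dir=$(mktemp -d)
start=$(date +%s)
i=1
while [ "$i" -le 1000 ]; do
  head -c 4096 /dev/zero > "$dir/file_$i"   # one 4 KiB file
  i=$((i + 1))
done
sync                                        # flush so the writes really hit disk
mid=$(date +%s)
rm -r "$dir"
end=$(date +%s)
echo "create: $((mid - start))s  delete: $((end - mid))s"
```

Run on two filesystems on the same hardware, a test like this maps much more directly to what a desktop user actually feels than a raw throughput chart does.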
We really need some margin of error too (also requested before). I mean, really, who cares if btrfs is 1 MB/s slower than ext4 if the margin of error is 2 MB/s?!
If the results fall within the margin of error for all the filesystems tested, then the *real truth* is that they all perform the same on that test system and it does not matter which one is chosen. Only at that point might function-call-level detail become important and such fine-grained results useful.
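The comparison rule being argued for can be sketched in a few lines; the throughput numbers (58 vs 57 MB/s) and the 2 MB/s margin are invented for illustration.

```shell
# Invented numbers: ext4 at 58 MB/s, btrfs at 57 MB/s, margin of error 2 MB/s.
ext4=58; btrfs=57; margin=2
verdict=$(awk -v a="$ext4" -v b="$btrfs" -v m="$margin" 'BEGIN {
  d = a - b
  if (d < 0) d = -d                 # absolute difference between results
  if (d <= m) print "within margin of error"
  else        print "real difference"
}')
echo "$verdict"
```

With these numbers the 1 MB/s gap is swallowed by the 2 MB/s margin, so the honest conclusion is "effectively a tie", not "ext4 wins".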
One author wisely pointed-out that the results presented are mostly important for people who run benchmarks all the time and have little applicability to real life. How about fixing that by spending the time to run (more) meaningful tests?
Not to beat on you or anything like that, but one needs to realize that there is no such thing as "completely guaranteed safe" for reads or writes.
Originally Posted by kebabbert
Typically, the safety of data depends more on the hardware setup than on the filesystem of choice.
In addition, writes at a more realistically achievable 200 MB/s leave far more headroom for error correction than writes at 50 MB/s, without the user pulling his or her hair out.
When errors occur (and they occur more often than one thinks, because one does not see the negative results: corrupted file data), a faster filesystem should maintain higher transfer rates, and its interference with the computing process is minimized to the point that the end user probably won't notice it.
We all want our data to be uncorrupted, and the way that happens is to have error checking, and correction if necessary, performed at read/write time. Even so, there is no such thing as a zero uncorrected-error rate. The best we can get is an "effectively near zero" rate. All filesystems have that, or they are not filesystems that can be used. When one exceeds the near-zero rate, as ext4 may have done in some cases, additional measures are taken to bring it back to near-zero uncorrected errors. That does not mean the fs must slow down, but it might have that impact (as it apparently did with ext4).
So the bottom line for me is that the fs capable of sustaining 200 MB/s (max) is a better choice than the one that does 50-100 MB/s (max): both have near-zero uncorrected errors, and when errors do occur, the 200 MB/s fs will (or should) handle the problems faster than the 50 MB/s one, thereby providing a more pleasant and useful computing experience.
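The bottom line above can be put in rough numbers. Assuming, purely for illustration, that recovery means re-reading 100 GB of data:

```shell
# Back-of-envelope: seconds to re-read 100 GB of data at each sustained rate.
# The 100 GB figure is an arbitrary example.
fast=$(awk 'BEGIN { printf "%d", 100 * 1024 / 200 }')   # at 200 MB/s
slow=$(awk 'BEGIN { printf "%d", 100 * 1024 / 50 }')    # at 50 MB/s
echo "200 MB/s: ${fast}s   50 MB/s: ${slow}s"
```

That is roughly 8.5 minutes against 34 minutes: the same four-fold gap as the raw throughput, which is exactly the "handles problems faster" argument in numeric form.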
When errors happen and they cannot be corrected, speed doesn't matter: your data is LOST.
Originally Posted by fhj52
If the data is not lost and is recoverable by, say, an fsck, one could argue a fast fs is better. The thing is, though, the speed of fsck is not related to the speed of the filesystem in normal use. Only recently did the capacity-to-performance gap of the hardware force designers to take fsck performance into account; before that, no effort was made to design the internal data structures in a way that makes them quick to repair or rebuild with fsck.