How can I know if you are telling the truth now? If you cannot post a simple link again, then it looks like FUD. Don't you think?
I know you have shown links earlier, but none of the links were relevant. In one case you showed a link where they compared an old 800MHz SPARC Solaris server vs a dual-core 2.4GHz Intel Linux machine, and your link claimed that Linux is faster than Solaris. You thought that link was good and relevant. I say your link was not relevant. If you install Linux and Solaris on the same hardware - that is interesting and relevant when we discuss performance of Solaris vs Linux. Thus, your links are not good. This link you showed earlier, can you repost it, or is it as meaningless as your earlier links?
That's the problem. You've got to have copies to repair the file system.
1) ext4 has no "copies=2" setting; ext4 needs another disk to be able to fetch missing data. ext4 must have two disks or more. ZFS can do it on a single disk.
2) ext4 has no way of detecting corrupted data or repairing it. RAID under ext4 does parity calculations, but there is no checksum for detecting corrupted data.
So I don't see it as a problem that ZFS can repair corrupted data, whereas ext4 cannot. Why is it a problem that ZFS can repair corrupted data? Can you explain again?
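To make the mechanism being argued about concrete, here is a toy Python sketch of the checksum-plus-copies idea: every block is stored with a checksum, a read verifies the checksum, and a bad copy is rewritten from a good one. This is only an illustration of the principle the posts attribute to ZFS "copies=2" self-healing (ZFS actually keeps Fletcher/SHA-256 checksums in block pointers), not ZFS's implementation.

```python
import hashlib

def store(data: bytes, ncopies: int = 2):
    """Store ncopies of the data, each paired with its SHA-256 checksum.
    Roughly analogous to 'zfs set copies=2' on a single disk (toy model)."""
    return [{"payload": bytearray(data),
             "checksum": hashlib.sha256(data).hexdigest()}
            for _ in range(ncopies)]

def read_with_self_heal(copies):
    """Return the first copy whose checksum verifies; then repair any
    corrupted copy from a good one, as self-healing reads are said to do."""
    good = None
    for c in copies:
        if hashlib.sha256(bytes(c["payload"])).hexdigest() == c["checksum"]:
            good = c
            break
    if good is None:
        raise IOError("all copies corrupt: unrecoverable")
    for c in copies:
        if hashlib.sha256(bytes(c["payload"])).hexdigest() != c["checksum"]:
            c["payload"][:] = good["payload"]   # rewrite the bad copy
            c["checksum"] = good["checksum"]
    return bytes(good["payload"])

copies = store(b"important data", ncopies=2)
copies[0]["payload"][0] ^= 0xFF                 # silent corruption on "disk"
assert read_with_self_heal(copies) == b"important data"
print("corruption detected and repaired from the second copy")
```

The point of contention maps directly onto this sketch: parity alone (the ext4/RAID side) can rebuild a missing disk, but without a checksum there is no way to know *which* copy is the corrupt one.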
The same can be done with Ext4.
There are also bugs in ZFS, so your data is not completely safe with it.
Many people believe that ZFS is the safest alternative on the market today. Not completely safe, but the safest. ext4 has no data corruption detection at all, so it is not safe at all. I fail to see how "ext4 is near completely safe" - can you describe how, or are you FUDing again?
Those who want that level of safety just use proper system configuration and copies (RAID). It's not the file system itself that has to care about data safety. The same goes for detection and recovery.
That's not true, and I showed you this one, too. Patches were even sent by Oracle, btw.
The one you gave.
You spread FUD about Linux,
but I say true things about slowlaris. Links are meaningless in this case, which is obvious, but market share, popularity, and Oracle's actions clearly show that slowlaris is going to end.
Larry has said officially that he is increasing resources far beyond what Sun ever had. There will be more developers on Solaris and the SPARC CPU than Sun ever had.
In other words, Larry says he will bet heavily on Solaris. You say he is going to kill Solaris. This post shows that you are not correct on this; Solaris will not be killed. Do you agree that Larry is not interested in killing Solaris? Why would Larry say "…Solaris is overwhelmingly the best open systems operating system on the planet" if he wants to kill Solaris? No, you are not correct on this. I have shown you numerous links where Larry praises Solaris, and still you say that Larry is going to kill Solaris. Why? Isn't what you do pure FUD and trolling?
He says many stupid things. How many Unixes are out there today?
Such old benchmarks with an unstable btrfs version don't matter at all.
64-bit is simply enough and will be enough for a long time. CERN, Google, and Facebook run Linux, not slowlaris, so 64-bit is and will be enough (nothing suggests they plan to replace Linux with a system that has 30% slower binaries).
"Having conducted testing and analysis of ZFS, it is felt that the combination of ZFS and Solaris solves the critical data integrity issues that have been seen with other approaches. They feel the problem has been solved completely with the use of this technology. There is currently about one Petabyte of Thumper storage deployed across Tier1 and Tier2 sites. That number is expected to rise to approximately four Petabytes by the end of this summer."
No, that's simply not true, and this sounds like Sun's FUD. The Red Hat employee was mistaken, and if you read the discussion you should be aware of this.
If CERN uses 64 bits, then CERN needs to split the data so that no single pool goes beyond 2^64 bits. So CERN would have one data pool of 2^64 bits, another data pool of 2^64 bits, and so on. With several data pools, it becomes difficult to examine all the data across them. It is better to use one single data pool that holds all the data, because then CERN can run all calculations without having to split them up. Thus, it is true that BTRFS will not be able to handle big scenarios, which means that BTRFS needs to be redesigned to use more than 64 bits. So, yes, I speak the truth.
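For scale, a quick sketch of the arithmetic behind this argument. It counts in generic units, since whether the limit is measured in bits, bytes, or blocks only shifts the result by a small constant factor; the 128-bit figure is brought in here purely for comparison, as the commonly cited width of ZFS's design.

```python
# How big is a 64-bit space?
space_64 = 2 ** 64
print(f"2^64 = {space_64} units")                    # 18446744073709551616
print(f"     = {space_64 // 2**60} exbi-units")      # 16 EiB, if units are bytes

# How many separate 64-bit pools would it take to cover one 128-bit space?
pools_needed = 2 ** 128 // 2 ** 64
print(f"2^128 / 2^64 = 2^{pools_needed.bit_length() - 1} pools")
```

In other words, once a dataset outgrows a 64-bit space, the number of separate pools needed grows without bound, which is the "splitting up" cost the post describes.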
Regarding the Red Hat developer, he said that BTRFS has some issues. That is true, and I am not lying or FUDing about this. Do you want to see the post where he writes this?