Originally posted by kebabbert
Originally posted by www.zdnet.com/blog/storage/data-corruption-is-worse-than-you-know/191
*2* Firmware errors! Contact or sue hardware vendor!
*3* Firmware errors! Same!
*4* RAID hardware logic & transfer errors! Same, but for the RAID card/controller/cables!
*5* RAM bit flips due to high density! Use ECC RAM, position RAM correctly - follow the motherboard manufacturer's recommendations, enclose the hardware in properly grounded cages!
Where in this list is LINUX EXTx CORRUPTING YOUR DATA?
Is a filesystem DESIGNED to withstand all those errors? Hell, NO.
It is like blaming Joe from Los Angeles for the Fukushima crisis! He is American, and Americans delivered parts to Nippon, so he is responsible for the nuclear meltdown? He is NOT.
What is Joe responsible for? Supporting his family, and doing it well! There is no point in giving every single Joe a nuclear physicist's education to control the reactor either!
Projected onto this "analysis": the filesystem should only do what a filesystem is supposed to do - and do it well.
Detect file corruption - ext4 does journal and metadata checksumming. NTFS? I know of nothing but hacks.
Prevent fragmentation - ext does this; it was designed with this as a priority. NTFS does not, hence the whole market of defragmentation "speedup" tools.
Correctly support operating system security requirements - ext does this.
Support file requirements (timestamps, names, reservations) - ext does this efficiently, unlike NTFS, whose MFT can grow past 12-50% of the partition size with no sane mechanism to shrink it.
Maintain consistency across power cuts - ext does this and can do full data journaling, whereas NTFS journals only metadata.
Bad-block management - not a filesystem's job anymore; that made sense in the era of floppy disks. Nevertheless, NTFS still tries to do this in the 21st century.
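The corruption-detection point above boils down to storing a checksum next to each block and re-verifying it on read. A minimal sketch of the idea - this is a generic illustration using CRC32, not ext4's actual on-disk format (ext4 uses crc32c for its metadata and journal checksums):

```python
import zlib

def block_checksum(block: bytes) -> int:
    # CRC32 as a stand-in; the exact algorithm (ext4 uses crc32c) is
    # an implementation detail - any strong checksum catches bit rot.
    return zlib.crc32(block)

# When writing, store the checksum alongside the block...
block = b"some on-disk data block"
stored = block_checksum(block)

# ...and recompute it on read to detect silent corruption.
corrupted = bytearray(block)
corrupted[0] ^= 0x01  # a single flipped bit

assert block_checksum(block) == stored            # clean read passes
assert block_checksum(bytes(corrupted)) != stored  # corruption detected
```

The filesystem cannot repair the block from a checksum alone, but detection is enough to return an I/O error instead of silently handing corrupt data to the application.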
And ext is open source! Which means it runs everywhere and has no licensing payments - by this measure NTFS is GARBAGE.
NTFS is used ONLY and ONLY for legacy reasons. The WHOLE of MICROSOFT is built around LEGACY REASONS.
They flood and occupy the market by price dumping, set their own standards, and then they pretty much control EVERYONE.
THANK YOU CERN, FOR NOT USING MICROCRAP!
Originally posted by kebabbert
The guys threw HUGE testing at HUGE-capacity arrays. Of course errors would show up, but none of them were of Linux or ext origin. Or do you have something else to tell?
Originally posted by kebabbert
Of course I know; it happens that ECC is only available and built for server mainboards, although unofficially some Asus boards seem to support it. Lately, using ECC is starting to make sense, with high-density memory modules going to 4 GB and up.
But it is the manufacturer's job to make sure a component does not break within its designed usage scenario.
SATA has many SAS functions in it and is sufficient for desktop usage. SAS is too complex and targets an operating environment not normally seen on a desktop: 24/7 massively parallel data exchange with very limited error-correction time, multi-disk and hot-swap support. For example, you do not build a SAS array of 1000x 1 GB drives at home; you buy one 1 TB drive instead.
The Cheetah is a good drive. But it is too slow versus an SSD, and too noisy and unreliable versus a normal 7,200 rpm disk. The "non-recoverable bits" figure is a statistical mean; many vendors publish it - I guess it is a legal requirement.
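That "non-recoverable bits" spec is easy to turn into an expected-error figure. A back-of-envelope sketch, assuming the rating commonly quoted on consumer-drive datasheets (1 unrecoverable read error per 10^14 bits; enterprise drives typically claim 10^15 or better):

```python
# Unrecoverable read error (URE) rate as quoted on datasheets:
# assumed here to be 1 error per 1e14 bits read (consumer-class figure).
ure_rate = 1e-14  # expected errors per bit read

bits_per_tb = 8 * 1e12  # 1 TB = 1e12 bytes = 8e12 bits
tb_between_errors = 1 / (ure_rate * bits_per_tb)

print(f"~{tb_between_errors:.1f} TB read per expected unrecoverable error")
# prints: ~12.5 TB read per expected unrecoverable error
```

Being a statistical mean, this does not predict any individual drive's behavior, but it shows why huge-capacity array tests surface errors as a matter of course: read a multi-terabyte array end to end a few times and you are already in the regime where the datasheet expects at least one unrecoverable bit.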