Quote Originally Posted by chithanh View Post
About error rates:
In the Google study they found that as memory chips aged, the number of errors would rise. This somewhat offset the higher number of errors due to higher capacity, as newer modules would typically have higher capacity.
I remember that one, it was a good read. It's important to distinguish between error rate and the number of errors; they are two very different things. Even if the error rate stays the same, doubling the RAM capacity doubles the number of errors. Higher capacity combined with a higher rate means a whole lot more errors. High capacity + age = double whammy.
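To put hypothetical numbers on it, purely for illustration: at 1 error per GB per month, a 4 GB module sees roughly 4 errors a month. Double the capacity to 8 GB and you get roughly 8. If ageing also doubles the rate to 2 errors per GB per month, you're at roughly 16 errors a month, four times where you started.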

Quote Originally Posted by chithanh View Post
On data corruption:
RAID does not typically detect bit flip errors on hard disks, unless it employs some kind of data integrity check. Some expensive hardware RAID controllers do that, and ZFS RAID-Z does it too.
All but the cheapest consumer-grade RAID controllers do integrity checking. Even Linux kernel software RAID runs regular consistency checks. If the kernel software RAID cannot read a block from one disk, it reconstructs the data from the other RAID members and writes it back, which gives the drive a chance to remap the bad sector. You can also force a manual consistency check at any time with "echo check > /sys/block/md0/md/sync_action", assuming md0 is your array.
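For reference, a typical manual scrub looks something like this (paths assume your array is md0 and a reasonably recent kernel; adjust for your setup):

    # kick off a manual consistency check (scrub)
    echo check > /sys/block/md0/md/sync_action

    # watch its progress
    cat /proc/mdstat

    # after it finishes, see how many mismatched sectors were found
    cat /sys/block/md0/md/mismatch_cnt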

You're correct, though, that it doesn't detect disk errors on the fly (unless it's accompanied by a read error), only during the regularly scheduled consistency check.