Another Look At The Bcachefs Performance on Linux 6.7
Originally posted by F.Ultra:
a) Yes they do; they have e.g. different sensors for vibration and whatnot that do not exist on normal consumer-grade HDDs, which is one of the reasons why the MTBF is so much higher on these than on the consumer versions.
b) You do realise that this number indicates that they are fully aware there is a chance that the same read does not return the exact same data every time? Now that is a very low chance (since again this is a server-grade HDD), but it is still there, and it is a number they have figured out (since their other drives have different numbers).
c) Data on HDDs is stored as magnetic fields, so unplugging the drive for 10-20 years has a high chance of altering those fields. This is very simple physics. Not sure why you think constantly having to rewrite the whole partition is somehow a better solution than simply adding checksums à la ZFS, Btrfs and bcachefs, but hey, you do you.
I have been driving a car for 31 years now and have never had a need for either a seatbelt or an airbag; going by your logic I should now rant on car forums that car crashes are a myth...
a) For all magnetic disks it is very important to avoid too much vibration, because vibration disturbs the very small air gap between platter and head and therefore badly degrades the reliability of recording and playback.
b) Of course the probability of an erroneous operation is very low for consumer and server-grade disks. I would not rely on the numbers given, because measuring them is very time-consuming and costly. Just use self-repairing RAID-1 and forget that kind of problem (a sketch follows this post).
c) Your statement regarding an effect of unplugging is just plain wrong. Yes, it is physics, and I hold a degree in experimental physics :-D
I did not recommend "constantly having to rewrite the whole partition". But doing it once every few years does bring the advantages I described.
Your very last sentence is so plain silly that I won't reply to your posts any more.
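For reference, the self-repairing RAID-1 mentioned above can be built with Btrfs alone; a minimal sketch, with /dev/sdX and /dev/sdY as placeholder devices:
# mirror both data and metadata across two drives
mkfs.btrfs -m raid1 -d raid1 /dev/sdX /dev/sdY
mount /dev/sdX /mnt
# verify every checksum in the foreground; a block that fails on one drive is rewritten from the good mirror
btrfs scrub start -B /mnt
btrfs scrub status /mnt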
Originally posted by LinAdmin:
a) That ton of stuff does not exist. All disk manufacturers that are left today know how to make a good magnetic coating.
b) Are you kidding? No manufacturer would survive if they did not take care of that.
c) Bitrot does not depend on whether the platter is resting or rotating. Its temperature might have more influence. The danger of this effect really comes from the situation where a huge part of the data gets written once and rests on the platter for decades.
Writing the whole partition as I described remedies that problem.
b) You do realise that this number indicates that they are fully aware there is a chance that the same read does not return the exact same data every time? Now that is a very low chance (since again this is a server-grade HDD), but it is still there, and it is a number they have figured out (since their other drives have different numbers; a back-of-the-envelope calculation follows this post).
c) Data on HDDs is stored as magnetic fields, so unplugging the drive for 10-20 years has a high chance of altering those fields. This is very simple physics. Not sure why you think constantly having to rewrite the whole partition is somehow a better solution than simply adding checksums à la ZFS, Btrfs and bcachefs, but hey, you do you.
I have been driving a car for 31 years now and have never had a need for either a seatbelt or an airbag; going by your logic I should now rant on car forums that car crashes are a myth...
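For scale, the number being discussed is the unrecoverable read error rate from the datasheet. Assuming a typical server-grade figure of 1 error per 10^15 bits read (the exact value varies by model; check your datasheet), one full read of an 8 TB drive works out as:
# 8 TB = 8 * 10^12 bytes = 6.4 * 10^13 bits read
echo '8 * 10^12 * 8 / 10^15' | bc -l
# ~0.064 expected errors, i.e. roughly one unrecoverable read per 16 full-drive passes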
Originally posted by F.Ultra:
a) WD Red Pro drives (I have two 8 TB of them in my desktop, as well as /home) are server-grade drives and they have tons of stuff added to avoid bit rot ...
b) The very fact that they even state such a number shows that WD knows about bitrot.
c) However, as a test you should power them off and put them in a cupboard for 10-20 years, then power them up and redo the scans to see what has happened.
a) That ton of stuff does not exist. All disk manufacturers that are left today know how to make a good magnetic coating.
b) Are you kidding? No manufacturer would survive if they did not take care of that.
c) Bitrot does not depend on whether the platter is resting or rotating. Its temperature might have more influence. The danger of this effect really comes from the situation where a huge part of the data gets written once and rests on the platter for decades.
Writing the whole partition as I described remedies that problem.
Originally posted by LinAdmin:
I have a bunch of WD Red 8 TB disks running in my server since 2018 (5 years ago) using Btrfs, which, when scrubbing, has never shown bitrot. (There has never been any such sign and all data has been perfect.)
Since many files are rarely written, I have now unmounted the Btrfs partitions and then refreshed all sectors of each partition using my ecp binary (comparable to cp):
ecp -v /dev/sdb3/ /dev/sdb3/
This took 30 hours per partition, and after mounting I ran another scrub without errors.
However, as a test you should power them off and put them in a cupboard for 10-20 years, then power them up and redo the scans to see what has happened.
Originally posted by andyprough:
That's not an incident report about data loss due to bitrot with ext4; that's just an article on the advantages of COW, specifically cheerleading the use of btrfs. I probably used btrfs before nearly anyone else here, as I was a dedicated SuSE Professional user from the early 2000s through about 2018, while the rest of you goobers were using Ubuntu and Windows and Hannah Montana Linux and so forth.
Bitrot, if it's real (unlikely) and not just data loss due to some scriptkiddie setting up RAID based on Google searches (likely) or using crap cables (highly likely), is just one more of the many reasons to have an ironclad backup plan. It's no reason to avoid ext4, which is one of the most performant desktop file systems available to us.
Last edited by F.Ultra; 04 December 2023, 04:09 PM.
Originally posted by flower:
It's not only bitrot. I once had a SATA cable go bad after a year. I only noticed it through ZFS checksum errors.
Bad SATA cables and checksum errors are so common that this is always the first advice someone gets when asking about checksum errors on the ZFS reddit.
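For anyone hitting this, the usual ZFS triage looks roughly like the following, with tank as a placeholder pool name:
# the CKSUM column counts checksum failures; -v lists any affected files
zpool status -v tank
# re-read and verify everything, repairing from redundancy where possible
zpool scrub tank
# after replacing the suspect cable, reset the error counters
zpool clear tank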
Originally posted by F.Ultra:
Zero of those articles can be serious. Bitrot is a physical phenomenon that is no mystery at all; bits on the drive are not carved in stone (and even things carved in stone experience bitrot eventually). [...] If people are now claiming that there have never been unrecoverable files on storage media, then I have more than one bridge to sell to them.
Since many files are rarely written, I have now unmounted the Btrfs partitions and then refreshed all sectors of each partition using my ecp binary (comparable to cp):
ecp -v /dev/sdb3/ /dev/sdb3/
This took 30 hours per partition, and after mounting I ran another scrub without errors.
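ecp is LinAdmin's own binary; a comparable sector refresh can be sketched with standard tools, assuming the partition is unmounted (badblocks -n is the non-destructive read-write mode, which tests each block and then restores its original contents, rewriting every sector):
umount /dev/sdb3
# read, test and restore every block in place, showing progress
badblocks -nsv /dev/sdb3
mount /dev/sdb3 /mnt
# then confirm the refreshed data still matches the filesystem checksums
btrfs scrub start -B /mnt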
Originally posted by andyprough:
That's not an incident report about data loss due to bitrot with ext4; that's just an article on the advantages of COW,
I'm happy that you finally got it.
Originally posted by andyprough:
I probably used btrfs before nearly anyone else here, as I was a dedicated SuSE Professional user from the early 2000s through about 2018, while the rest of you goobers were using Ubuntu and Windows and Hannah Montana Linux and so forth.
I used to compile it into the kernel myself long before it was officially merged.
Originally posted by andyprough:
Bitrot, if it's real (unlikely) and not just data loss due to some scriptkiddie setting up RAID based on Google searches (likely) or using crap cables (highly likely), is just one more of the many reasons to have an ironclad backup plan. It's no reason to avoid ext4, which is one of the most performant desktop file systems available to us.
But at this point, if you still haven't got the point of the whole discussion, I think the fault is on you, not me.
Originally posted by vermaden:
Why is ZFS not also included in the tests?
Especially knowing that the tests were run on Ubuntu, where ZFS is available ...
Btrfs, ZFS and bcachefs all have the very important CRC checks of all data written.
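A quick way to confirm that on a live system (pool and device names are placeholders):
# ZFS: show the checksum algorithm in use (on = fletcher4 by default)
zfs get checksum tank
# Btrfs: the superblock records the checksum type (crc32c by default)
btrfs inspect-internal dump-super /dev/sdX | grep csum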
Originally posted by cynic:
Don't know what search engine you're using, but one of the first results I get on the topic is this interesting article from 2014: https://arstechnica.com/information-...n-filesystems/
Bitrot, if it's real (unlikely) and not just data loss due to some scriptkiddie setting up RAID based on Google searches (likely) or using crap cables (highly likely), is just one more of the many reasons to have an ironclad backup plan. It's no reason to avoid ext4, which is one of the most performant desktop file systems available to us.
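For ext4 users who want bitrot detection without switching filesystems, a checksum manifest is a simple stand-in (/data and the manifest path are placeholders):
# record a checksum for every file under /data
find /data -type f -exec sha256sum {} + > /root/data.sha256
# re-verify later; any flipped bit is reported as FAILED
sha256sum -c --quiet /root/data.sha256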