Another Look At The Bcachefs Performance on Linux 6.7


  • #71
    Originally posted by F.Ultra View Post

    Zero of those articles can be serious. Bitrot is a physical phenomenon that is no mystery at all; bits on the drive are not carved in stone (and even things carved in stone experience bitrot eventually). There is nothing in other filesystems, like EXT4, that detects bitrot. Of course bitrot is extremely rare, since HDDs and SSDs don't store bits as such but instead use various forms of error-correcting codes, but if people are now claiming that there have never been unrecoverable files on storage media then I have more than one bridge to sell to those.
    it's not only bitrot. I once had a sata cable going bad after a year. I only noticed it through zfs checksum errors.

    Bad SATA cables and checksum errors are so common that this is always the first advice someone gets when asking about checksum errors on the ZFS subreddit.
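    (As a hedged illustration of where those errors surface, assuming a pool simply named "tank": the per-device CKSUM column in

    zpool status -v tank

    is usually what points the finger at one specific cable or port, because a flaky link racks up checksum errors on that one device only.)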



    • #72
      Originally posted by andyprough View Post

      I don't believe I was showing off, not sure how that would work when I'm asking you a question. You seem to be the one trying to show off some sort of superior knowledge, but my probing question has apparently revealed that you do, in fact, have zero experience with the problem you are scare-mongering about. As expected. Since you are such an expert on 'bitrot and ext4' searches, I'm sure you realize that there are no reports of it actually occurring with ext4. There are quite a few articles questioning whether bitrot is a real phenomenon at all, or just a conspiracy.
      after reading that "bitrot is a conspiracy" I shouldn't be here wasting my time on you.
      still, I'm in a good mood today, so here I am.

      bitrot occurs for several reasons, and with today's large storage it is almost inevitable.
      as I already wrote, it is not tied to ext4 or to any other particular filesystem implementation; it is a physical phenomenon that affects storage media.

      ext4 is not able to detect bitrot because it does not checksum data (only metadata), so your data slowly and silently degrades, and the degraded data probably ends up in your backups, overwriting good copies.

      ZFS and btrfs, on the contrary, do data checksumming and can detect (and, if you have redundancy, fix) corruption.
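      for example (the mountpoint and pool name below are only placeholders), a scrub walks every allocated block and verifies it against its stored checksum:

      btrfs scrub start /mnt/data     # read back and verify all data and metadata checksums
      btrfs scrub status /mnt/data    # summary, including any uncorrectable errors found
      zpool scrub tank                # the ZFS equivalent; results show up in 'zpool status'

      with a redundant profile (mirror/RAID1) the scrub also rewrites bad copies from a good one.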

      don't know what search engine you're using, but one of the first results I get on the topic is this interesting article from 2014: https://arstechnica.com/information-...n-filesystems/



      • #73
        Originally posted by cynic View Post
        don't know what search engine you're using, but one of the first results I get on the topic is this interesting article from 2014: https://arstechnica.com/information-...n-filesystems/
        That's not an incident report about data loss due to bitrot with ext4, that's just an article on the advantages of COW, specifically cheerleading the use of btrfs. I probably used btrfs before nearly anyone else here, as I was a dedicated SuSE Professional user from the early 2000's through about 2018 while the rest of you goobers were using Ubuntu and Windows and Hannah Montana Linux and so forth.

        bitrot, if it's real (unlikely) and not just data loss due to some scriptkiddie setting up raid based on google searches (likely) or using crap cables (highly likely), is just one more of the many reasons to have an ironclad backup plan. It's no reason to avoid ext4, which is one of the most performant desktop file systems available to us.



        • #74
          Originally posted by vermaden View Post
          Why was ZFS not also included in the tests?

          Especially knowing that the tests were run on Ubuntu, where ZFS is available ...
          Yes, I hope it will be included in the next round of tests.
          Btrfs, ZFS and Bcachefs all have the very important CRC checks of all written data.



          • #75
            Originally posted by andyprough View Post
            That's not an incident report about data loss due to bitrot with ext4, that's just an article on the advantages of COW,
            oh, good morning! I wrote twice that bitrot is not an issue of ext4 or of any other fs.
            I'm happy that you finally got it.

            Originally posted by andyprough View Post
            I probably used btrfs before nearly anyone else here, as I was a dedicated SuSE Professional user from the early 2000's through about 2018 while the rest of you goobers were using Ubuntu and Windows and Hannah Montana Linux and so forth.
            no, noob, you didn't use it before me.
            I used to compile it into the kernel myself long before it was officially merged.

            Originally posted by andyprough View Post
            bitrot, if it's real (unlikely) and not just data loss due to some scriptkiddie setting up raid based on google searches (likely) or using crap cables (highly likely), is just one more of the many reasons to have an ironclad backup plan. It's no reason to avoid ext4, which is one of the most performant desktop file systems available to us.
            look, my English is crappy, I know it.

            but at this point, if you still haven't got the point of the whole discussion, I think the fault is yours, not mine.



            • #76
              Originally posted by F.Ultra View Post
              Zero of those articles can be serious. Bitrot is a physical phenomenon that is no mystery at all; bits on the drive are not carved in stone (and even things carved in stone experience bitrot eventually). [...] if people are now claiming that there have never been unrecoverable files on storage media then I have more than one bridge to sell to those.
              I have a bunch of WD Red 8TB disks that have been running in my server since 2018 (5 years) using BtrFS, and scrubbing has never shown any bitrot. (There has never been any such sign and all data has been perfect.)

              Since many files are rarely written, I have now unmounted the BtrFS partitions and then refreshed all sectors of each partition using my ecp binary (comparable to cp):

              ecp -v /dev/sdb3/ /dev/sdb3/

              This took 30 hours per partition and after mounting I ran another scrubbing without errors.
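              (If you don't have a tool like ecp at hand, a comparable refresh of an unmounted partition can be done with the stock badblocks in its non-destructive read-write mode; the device name below is only an example:

              badblocks -nsv /dev/sdb3

              it reads each block, writes test patterns, and then restores the original contents, so every sector ends up rewritten.)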



              • #77
                Originally posted by flower View Post

                it's not only bitrot. I once had a sata cable going bad after a year. I only noticed it through zfs checksum errors.

                Bad SATA cables and checksum errors are so common that this is always the first advice someone gets when asking about checksum errors on the ZFS subreddit.
                that too, yes, and bad SATA/RAID cards; if I remember correctly, CERN had a faulty RAID card that they discovered thanks to ZFS. Good luck running anything other than ZFS, btrfs or bcachefs in any of those conditions.



                • #78
                  Originally posted by LinAdmin View Post

                  I have a bunch of WD Red 8TB disks that have been running in my server since 2018 (5 years) using BtrFS, and scrubbing has never shown any bitrot. (There has never been any such sign and all data has been perfect.)

                  Since many files are rarely written, I have now unmounted the BtrFS partitions and then refreshed all sectors of each partition using my ecp binary (comparable to cp):

                  ecp -v /dev/sdb3/ /dev/sdb3/

                  This took 30 hours per partition and after mounting I ran another scrubbing without errors.
                  WD Red Pro drives (I have two 8TB of them in my desktop, serving as /home) are server-grade drives, they have tons of stuff added to avoid bitrot, and their non-recoverable error rate is <1 in 10^15 bits read. The very fact that they even state such a number shows that WD knows about bitrot.
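                  (Rough arithmetic on that spec, just to put a number on it: one full read of an 8 TB drive is about 6.4 x 10^13 bits, so at <1 unrecoverable error per 10^15 bits read that works out to, at worst, roughly a 6% chance of hitting a single unrecoverable sector per complete pass over the drive; small, but not zero.)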

                  However, as a test, you should power them off, put them in a cupboard for 10-20 years, then power them up and redo the scans to see what has happened.

                  Originally posted by andyprough View Post

                  That's not an incident report about data loss due to bitrot with ext4, that's just an article on the advantages of COW, specifically cheerleading the use of btrfs. I probably used btrfs before nearly anyone else here, as I was a dedicated SuSE Professional user from the early 2000's through about 2018 while the rest of you goobers were using Ubuntu and Windows and Hannah Montana Linux and so forth.

                  bitrot, if it's real (unlikely) and not just data loss due to some scriptkiddie setting up raid based on google searches (likely) or using crap cables (highly likely), is just one more of the many reasons to have an ironclad backup plan. It's no reason to avoid ext4, which is one of the most performant desktop file systems available to us.
                  Backup is not a solution to bitrot, and as cynic wrote, once you encounter bitrot you will most likely discover that all of your backups have it as well. And speaking of backups, I have personally experienced tons of bitrot on backup tapes.
                  Last edited by F.Ultra; 04 December 2023, 04:09 PM.



                  • #79
                    Originally posted by F.Ultra View Post

                    a) WD Red Pro drives (I have two 8TB of them in my desktop, serving as /home) are server-grade drives, they have tons of stuff added to avoid bitrot ...

                    b) The very fact that they even state such a number shows that WD knows about bitrot.

                    c) However, as a test, you should power them off, put them in a cupboard for 10-20 years, then power them up and redo the scans to see what has happened.
                    Ad a) That ton of stuff does not exist. All disk manufacturers that are left today know how to make a good magnetic coating.

                    b) Are you kidding? No manufacturer would survive if they did not take care of that.

                    c) Bitrot does not depend on whether the platter is resting or rotating. Its temperature might have more influence. The real danger comes from the fact that a huge part of the data gets written once and then rests on the platter for decades.
                    Rewriting the whole partition as I described remedies that problem.



                    • #80
                      Originally posted by LinAdmin View Post

                      Ad a) That ton of stuff does not exist. All disk manufacturers that are left today know how to make a good magnetic coating.

                      b) Are you kidding? No manufacturer would survive if they did not take care of that.

                      c) Bitrot does not depend on whether the platter is resting or rotating. Its temperature might have more influence. The real danger comes from the fact that a huge part of the data gets written once and then rests on the platter for decades.
                      Rewriting the whole partition as I described remedies that problem.
                      a) Yes they do; they have e.g. different sensors for vibration and whatnot that do not exist on normal consumer-grade HDDs, which is one of the reasons why the MTBF is so much higher on these than on the consumer versions.

                      b) You do realise that this number means they are fully aware there is a chance that the same read does not return the exact same data every time? It is a very low chance (since, again, this is a server-grade HDD), but it is still there, and it is a number they have actually measured (their other drives carry different numbers).

                      c) Data on HDDs is stored as magnetic fields, so leaving the drive unplugged for 10-20 years has a high chance of letting those fields decay. This is very simple physics. Not sure why you think constantly having to rewrite the whole partition is somehow a better solution than simply adding checksums a la ZFS, btrfs and bcachefs, but hey, you do you.

                      I have been driving a car for 31 years now and have never once needed either a seatbelt or an airbag; going by your logic I should now rant on car forums that car crashes are a myth...

