
OpenZFS Is Still Battling A Data Corruption Issue


  • #61
    Originally posted by Developer12 View Post
    the same ZFS bug
    I deleted my previous comment, sorry for the misunderstanding.

    I was thinking primarily of the multiple block cloning issues that were referenced by the 2.2.1 release.

    (Hadn't really paid attention to the article.)



    • #62
      Originally posted by Developer12 View Post

      Your entire point is moot because this exact same news was already covered a few days ago.
      Was my English comment so bad that it couldn't be understood, or did you just read a few words in and ignore the rest? If the latter, you made my point.

      I described how the headline was really easy to overlook: it reads like news item 1, "OpenZFS released", and only as news item 2 that it has a bug.

      And as I understand it, the bug isn't even specific to this version. So yes, if the original news hadn't been so misleading, that might be a point. But 90% of people don't read every word of a 20-word headline, let alone the whole article.

      So a hidden sub-headline about a horrible bug, buried at word 10, is not good coverage; therefore I am thankful for this news.

      And the fact that there are now more comments on this news proves my point: many did not catch the hidden news, otherwise the older news about the same topic would have more comments than this one.



      • #63
        Originally posted by blackiwid View Post
        And the fact that there are now more comments on this news proves my point: many did not catch the hidden news, otherwise the older news about the same topic would have more comments than this one.
        I will support you on this. I skipped over the last headline without ever noticing anything about a data corruption bug, just saw a new release announcement and moved on since I wasn't particularly interested.



        • #64
          Originally posted by smitty3268 View Post

          I will support you on this. I skipped over the last headline without ever noticing anything about a data corruption bug, just saw a new release announcement and moved on since I wasn't particularly interested.
          Thanks, yes, I dismissed the critique based on that. Otherwise the question would be: if the first headline had been easier to notice, starting with at least "Bug in OpenZFS..." or even "Data Corruption Bug in OpenZFS...", would it have been OK to run a second news item without much new information?

          And even then I think this news makes sense, because I just read the original and the new article again, and there is news: the old article sounds as if, to avoid the bug, you just need to disable the block cloning feature and run the newest "fixed" version, and as if it's pretty much limited to the 2.2.x versions.

          The new article says that pre-2.2.x versions are also buggy, and that even 2.2.1 still doesn't solve the problem even if you deactivate the new feature.

          As the article says, "Over the Holidays it became more clear...". So this is an update, and because Phoronix has no news-update system that appends to old articles and pins them back to the top, like the German site computerbase.de uses for example, I see no other way to get newer information to the readers of this site.

          So even if you dismiss my argument about the easy-to-miss news, that framing could have come from a PR team of the OpenZFS project, or some fanboy group, trying to downplay the problem as much as possible. (Not that I claim Michael had this intention; I just say that a person with that mindset would be very happy with this way of framing the headline.) And even if you say the first article was alarming enough, a news site also reports for people who just want to make a rant or flamewar out of it; what people do with news is not the responsibility of a news website, at least as long as they behave on its platform according to the law.

          So even if you somehow dismiss that, I still think that even if the claim that he just wants the extra clicks were valid, it wouldn't matter. We can't measure his real motivations, but you don't judge an action primarily by its motivation: if somebody wants to be rich and therefore saves a person from an accident, the saving is still seen as positive. And here he gives more information to people, some because they use the filesystem, some for arguing or future decisions, whatever it is; the action is neutral if not positive, even if the only motivation were his bank account balance.

          But maybe I am wrong; somehow I doubt he gets super rich from having, what, 50 people post a few hundred comments under an article and maybe 10,000 people read it? Of whom 90% use an ad blocker, and a few more use the subscription to avoid ads anyway.

          I don't know the exact numbers, but I doubt this article makes him a big dollar amount. With the time spent researching, keeping an eye on it, and writing, he probably makes a few hundred dollars specifically for this article. So correct me if I am wrong, but if he made so much money from one slightly alarming news article, I would expect the feed to be full of hyper-sensational news.

          I think he is just irrationally critical because he's into team sports: team OpenZFS, and he doesn't want to see them criticized much for this. That he has such a bias is much more likely to be true than that Michael is an evil mastermind, obvious to everybody, maximizing the damage to ZFS to get rich...
          Last edited by blackiwid; 29 November 2023, 12:48 PM.



          • #65
            Originally posted by Developer12 View Post

            Should michael write an article about every issue opened on their github about this bug? there have been over five of them and counting. How about for every comment? Every time a ZFS developer weighs in?

            He's just double-dipping on the same ZFS bug twice in the span of a week because ZFS/BTRFS flamewars generate tons of engagement, and thus ad revenue.
            I don't recall Michael being bound to any rule, code or law that impedes or prevents him from posting similar news.

            He's not a journalist bound to a nerdy version of the Hippocratic oath. He's a guy super hyped on Linux who did what a lot of people would never consider and went out on his own to write about it. Fast forward twenty years: his primary source of income has dwindled immensely, despite his payers making billions, and all you have is:

            'reposting a filesystem issue twice blahblihblooblah'

            No offence (you're going to take it anyway), but he's got to eat. If it means I have to re-read a revised article (and I personally like filesystem stuff anyway, so I am biased), I daresay my eyes are not going to bleed.



            • #66
              Originally posted by muncrief View Post

              I have around 12.5 TB of data, with the majority 11 TB residing on my media server. And though I have local and cloud backups going back years, the problem with ext4 was that I couldn't detect bit rot. And by the time I discovered a bad file I would have no idea when it became corrupted. My hope was that with a monthly ZFS scrub I could detect bit rot and restore good files from my backups, rather than buying two or three times the amount of storage I need and creating complex RAID systems that had also failed me so many times over the decades.
              I've got 300TB on ext4. It's really 100TB, but three copies total (1st: main file server, 2nd: local backup, 3rd: remote file server).

              I will start by saying that not counting hardware failures I have about 1 silent bit rot event every 2 years. This is where the data was good on disk and somehow has been corrupted. Cosmic ray? Who knows. With only 12.5TB of data you may experience this once every 50 years.

              I tried testing btrfs multiple times and I've had issues with it that are a longer discussion. I don't like ZFS being outside of the kernel and its drive expansion method isn't flexible enough for me. So I've stuck with ext4.

              I really like the concept of the data integrity verification method being SEPARATE from the filesystem code itself. With this recent ZFS zeroing out files what happens to the checksums when this happens? Does the ZFS scrub show everything is fine even though the data has been corrupted?

              I started out running md5sum recursively (md5deep -r does this more easily) and then diffing the output against the previous results from six months earlier. About 10 years ago I switched to this program, which stores SHA256 checksums and timestamps as ext4 extended-attribute metadata. Run it again and it recalculates, compares, and reports.

              Detect silent data corruption under Linux using sha256 stored in extended attributes - rfjakob/cshatag
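              The manifest-and-diff approach described above can be sketched with standard GNU coreutils; sha256sum stands in for md5deep here, and the paths in the usage comments are placeholders, not anything from the post:

```shell
# Minimal sketch of the manifest-diff approach: hash every file under a
# tree, keep the sorted listing, and diff it against the next run. Any
# changed line between two runs marks a file whose content changed (or
# rotted) in between. Needs GNU find/sort/xargs for the -0/-z/-r flags.
manifest() {
    (cd "$1" && find . -type f -print0 | sort -z | xargs -0 -r sha256sum)
}

# Typical use (placeholder paths):
#   manifest /srv/data > manifest-first.sha256             # first pass
#   manifest /srv/data > manifest-second.sha256            # six months later
#   diff manifest-first.sha256 manifest-second.sha256      # changed lines = modified files
```

              Sorting the listing keeps the two manifests line-comparable even if find enumerates files in a different order between runs.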


              I also run snapraid to dual parity drives once a night. It also has a built in scrub feature. Then I create my backups with rsync -X which transfers the extended attributes. So basically every 6 months I run snapraid scrub, verify the local copy is correct, run cshatag locally, then rsync -X to local backup and remote backup, then cshatag on the local and remote backups. This takes about 3 days total but it's just run time in the background and slightly louder fans.
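              That six-monthly cycle reads as a short script. This is only a sketch: the paths are placeholders, the cshatag -recursive flag is an assumption from its README, and the dry-run default means running it as-is just prints each command instead of executing anything:

```shell
# Sketch of the six-monthly verification cycle described above.
# RUN=echo (the default) makes this a dry run that only prints each
# command; set RUN= (empty) to actually execute. Paths are placeholders.
verify_cycle() {
    RUN="${RUN-echo}"
    DATA="${DATA:-/srv/data}"      # primary copy (placeholder)
    LOCAL="${LOCAL:-/mnt/backup}"  # local backup (placeholder)

    $RUN snapraid scrub                # check the array against its parity drives
    $RUN cshatag -recursive "$DATA"    # recompute SHA256s, compare to stored xattrs
    $RUN rsync -aX "$DATA/" "$LOCAL/"  # -X carries the checksum xattrs along
    $RUN cshatag -recursive "$LOCAL"   # verify the copy with its own metadata
}
```

              Running cshatag again on the backup is what makes the copy independently verifiable: rsync -X transfers the stored checksums, so the backup can be checked against its own metadata without touching the primary.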

              Again, I consider it a FEATURE to have this NOT unified into one comprehensive filesystem. I like a second independent tool verifying the correctness of the first tool.




              • #67
                Originally posted by whatever78 View Post

                Detect silent data corruption under Linux using sha256 stored in extended attributes - rfjakob/cshatag

                Interesting tool. I feel uneasy about filesystems that claim the capacity to detect silent corruption. When bit-flip corruption occurs, throwing the whole file away is not always the best solution, but all the praise for those new filesystems sounds as if that is the only thing one can or must do.



                • #68
                  Originally posted by grahamperrin View Post

                  except the experts who understand, and have fixed things.

                  You're welcome.
                  After losing my data? No thanks :P



                  • #69
                    This is a comment from the man who wrote the patch to fix it. Bottom line: don't panic. Most people don't understand how unlikely they are to be affected.



                    • #70
                      Originally posted by stiiixy View Post

                      I don't recall Michael being bound to any rule, code or law that impedes or prevents him from posting similar news.

                      He's not a journalist bound to a nerdy version of the Hippocratic oath. He's a guy super hyped on Linux who did what a lot of people would never consider and went out on his own to write about it. Fast forward twenty years: his primary source of income has dwindled immensely, despite his payers making billions, and all you have is:

                      'reposting a filesystem issue twice blahblihblooblah'

                      No offence (you're going to take it anyway), but he's got to eat. If it means I have to re-read a revised article (and I personally like filesystem stuff anyway, so I am biased), I daresay my eyes are not going to bleed.
                      In other words, you're saying Michael is not bound by any integrity and is free to be a shill. That's not the strong argument you think it is.

