OpenZFS Is Still Battling A Data Corruption Issue

  • #31
    If u don’t backup to tape u a fake



    • #32
      Originally posted by AlanTuring69 View Post

      Do you seriously think that Bcachefs will have fewer issues with it being a purely for-fun project and having Kent as the effective leader? ZFS has been around for a long, long time and has swallowed very little data. Bcachefs cannot possibly compete with ZFS unless it turns out it's the literal holy grail and someone spends millions on it. I've not seen any data to suggest that it is anything other than a very interesting filesystem, of which there are many.



      What you're describing is a professional, commercialized development process typically seen with serious projects and real, paying customers. OpenZFS is not a particularly commercialized project; the involvement is mostly hobbyist / borderline-hobbyist. What you're describing is at least several full-time jobs (more like a dozen just on the engineering front, if you want features and effective testing), which also does not translate well to projects such as these. If you want to establish a company which runs its own spin of ZFS with such architect-led decision making then I'm sure people would use it, but unless you have millions to spend for zero gain then it's not happening. It's outrageous to expect anything other than what's happening. Put even more simply, I think they know. I am grateful to have a fantastic filesystem that works with the latest Linux kernel, for free.

      With that said, this is the first time I can remember something like this happening, which is fewer incidents than I can recall from any other project of a similar lineage / userbase.
      I respectfully disagree, AlanTuring69.

      From perusing both the ZFS and BTRFS threads, the developers appear very intelligent, skilled, and motivated. I don't believe it's a question of manpower; I believe it's a question of leadership and organization, as they appear to have everything they need except that. Otherwise they certainly wouldn't have come as far as they have so haphazardly.

      I greatly value and respect the work and accomplishments of both groups, but it is painful to observe them consistently coming so close to their goals, and then collapsing because of their lack of cohesion and direction.

      They need only set their egos aside and elect chief architects with the will and vision to organize and lead them, and within a year or so both projects could right their ships and sail much sunnier days.



      • #33
        Originally posted by ddriver View Post

        It is best to have an off-planet backup, preferably in another stellar system, in case of a mega flare or something.
        Unfortunately this won't be of use in the event of an unexpected early heat death of the universe. I prefer to store backups in alternate dimensions.



        • #34
          OOoohh, ....what was it they said.... "ZFS superior, BTRFS inferior..." ROFL.... roll-eyes!

          Meanwhile, ... I'm enjoying my 7-year-old BTRFS RAID5 array that has grown and grown over the years and now has multiple clones saving my bacon each time a disk fails...



          • #35
            They should change the name to ZeroFS, because it zeroes users' data. The one genuinely good thing that came out of the Slowlaris camp is now broken. Nice.



            • #36
              Overcomplicated and overengineered FS has bugs and behaviors nobody understands. News at 11!

              Just use Ext4, folks, and keep backups of the really important stuff.



              • #37
                So I am surprised that nobody has referred to the hundreds of Btrfs haters who said "just use ZFS, because it's not as buggy", and now this.

                That might be a bit petty, but it has to be said: the claim was that all the horrible disadvantages of having something outside the kernel, something that will NEVER be in the kernel, were worth it, not because of one or two specific features that everyone needed, but because it's not "fundamentally broken".

                Now, Btrfs had some bigger problems in its early years, and it didn't have one big corporation going bankrupt funding a ZFS OS (Solaris), which was a poor excuse for an OS that mostly existed to develop a filesystem. But it is much more excusable to have such bugs early on and then slowly get better and better than to sit at a pretty stable state for many years, letting people trust the brand and use it, and then introduce data-eating bugs.

                Now, I don't want to say it's the end of the world; it will not be thousands or millions of people losing data. But you can't have it both ways: Btrfs also only seldom ate data, so if that was perceived as horrible and led to many years of urban-myth hate threads, you can't suddenly become hyper-rational and understanding when the bug is in the other big modern FS.

                But sorry, this had to be said. I'll add another comment to react to the critique that somehow used this ZFS bug to bitch about BTRFS, and to that older 12TB guy...



                • #38
                  Originally posted by gggeek View Post
                  In the open source world, it looks like developer burn-out is increasing year over year (at least in my sphere), leading to an "only scratching your own itch is enough" mentality.
                  I'm sorry, but "scratching your own itch" is the basis of open source, not some "bad mentality". Some people take on more than just that, but OSS does not imply any kind of guarantee. If anything, all the licenses clearly state that you use it at your own risk. Anything else would be slavery. You cannot expect even the author to be responsible for OSS software; it just doesn't work like that.

                  If anyone wants some sort of guarantee then they have essentially only one choice, commercialize on top or commercially create your own.

                  The thing is, though, commercial versions would possibly be even worse, unless their whole business model was robustness (for which almost nobody would pay anyhow).



                  • #39
                    Originally posted by muncrief View Post
                    As it appears that, at least for now, there's simply no way to reliably detect bit rot and other data integrity issues and be assured they can be remedied.
                    You are trying to use a hammer (the FS) on the wrong problem (a screw). I am sure it is not the only solution, but given all the effort you seem willing to put into getting this working, git-annex is one solution I know of.



                    The way it basically works is that it checksums the files once and keeps the checksums and the file names in your git directory, but not necessarily all of the content locally. Then you can define a minimum number of copies you want, so that it only lets you delete content from one location when enough copies exist somewhere else. And the backends can be cloud servers, external drives, Blu-ray discs, whatever you can think of. A rough sketch of that workflow is below.
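
                    Purely as an illustration (not from the original post), something like this minimal Python sketch could drive that flow; the repo path and remote name are hypothetical, and it assumes git init, git annex init and the backup remote are already set up:

                    ```python
                    # Rough sketch only (not from the post): drive git-annex from Python.
                    # The repo path and remote name are hypothetical; it assumes `git init`,
                    # `git annex init` and the "usb-backup" remote have already been configured.
                    import subprocess

                    REPO = "/srv/annex/photos"      # hypothetical annexed repo
                    REMOTE = "usb-backup"           # hypothetical backup remote (external drive, cloud, ...)

                    def annex(*args: str) -> None:
                        # Run a git-annex subcommand inside REPO, raising if it fails.
                        subprocess.run(["git", "-C", REPO, "annex", *args], check=True)

                    annex("numcopies", "2")                 # require at least 2 known copies of every file
                    annex("add", ".")                       # checksum files and move their content into the annex
                    subprocess.run(["git", "-C", REPO, "commit", "-m", "add files"], check=True)
                    annex("copy", "--to", REMOTE, ".")      # push content to the backup remote
                    # later, `git annex drop <file>` will refuse to remove local content
                    # unless numcopies would still be satisfied by the remaining copies
                    ```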



                    Now you probably need some cron jobs or something to check the copies, to make sure that two backups don't develop bit rot at the same time.

                    With your description, I thought of the "use case: The Archivist", which you can read about on the git-annex front page.

                    For the cron job script doing the checking, I think git annex fsck is the command to use, with some parameters I guess? Something like the sketch below.
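
                    As a hedged example (again not from the thread), a minimal cron-style checker could look like this; the repo paths are made up, and it relies only on git annex fsck exiting non-zero when verification fails:

                    ```python
                    # Minimal sketch (not from the post) of a cron job that runs `git annex fsck`
                    # in each repository and reports failures; the repo paths are hypothetical.
                    import subprocess
                    import sys

                    REPOS = ["/srv/annex/photos", "/mnt/usb-backup/photos"]   # hypothetical locations

                    def fsck(repo: str) -> bool:
                        # `git annex fsck` re-checksums locally present content and checks that
                        # numcopies is still satisfied; it exits non-zero if anything is wrong.
                        result = subprocess.run(
                            ["git", "-C", repo, "annex", "fsck", "--quiet"],
                            capture_output=True, text=True,
                        )
                        if result.returncode != 0:
                            print(f"fsck reported problems in {repo}:\n{result.stdout}{result.stderr}")
                        return result.returncode == 0

                    if __name__ == "__main__":
                        results = [fsck(r) for r in REPOS]   # check every repo, don't stop at the first failure
                        sys.exit(0 if all(results) else 1)   # non-zero exit so cron mails the output
                    ```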

                    And if you fear bit rot in the git data itself, you can have multiple hosts; it's more like a git-powered "Dropbox", if you want, than a filesystem.

                    But also, on the filesystem side: if you don't trust one, just keep your backup and your main data on different filesystems, so it's very unlikely that both have big bugs at the same time. That's how airplanes and similar safety-critical systems add safety, with at least two (or three?) different systems for every critical function.

                    The only thing is that by default the files are basically hard links into the git directory, or was it symlinks; whatever it was, it did not play well with Kodi when I tried it. But that depends on the backend, methods, or setup you choose; there is surely some way to make it work.
                    Last edited by blackiwid; 27 November 2023, 09:00 PM.



                    • #40
                      Originally posted by muncrief View Post
                      Until then they will continue to flap in the wind, playing whack a mole with one of the most integral parts of a computing system.
                      It's a trade-off; both approaches have advantages and disadvantages. The process you are describing must result in fewer features in the same amount of time; it can't be otherwise. Now, if you have an idea, or are in a flow state, or want some feature or whatever, you can create a feature branch locally first and then on the big project, and then people can test that feature much earlier and it can be integrated earlier.

                      You make it sound like a big boss defines the features, let's say 20, and then 10 workers do everything to reach that toward some deadline, with all the requirements of documentation and everything around it. But in that process, a feature that nobody requested will almost never pop up.

                      It's a very hierarchical process where you have some sort of dictator (to differing degrees) at the top. That process almost assumes that everybody works in the same building at the same times, or at least something similar, whereas open source is a worldwide process.

                      I disagree that development is like factory work, clearly defined and having nothing to do with creativity, just defined processes. Sure, there must be some processes, mostly software tests, that are defined at some level; with the kernel that probably mostly starts in linux-next. But people are more productive in a flow state, and you can't manufacture a flow state in a tightly controlled, tunnel-vision environment.

                      You then have unhappy people following the company's dictates and doing their work in a way that satisfies the bureaucracy, so that it looks like they have done enough.

                      Again, I repeat myself: you are using the wrong tools for the problem, either the wrong type of software, or you want enterprise-level data safety but insist on using a bleeding-edge "stable" kernel.

                      It's as if Windows were only tested for one month before it got released; a "stable" kernel is not stable enough for extreme production use such as backups of important data, I am sorry.

                      Yes, back then version control systems were slow and "expensive", but of course things work differently now.

                      Also, just reading a mailing list doesn't give a complete picture. It's not only the best experts, the authors of the software, who answer; most of these projects hope for volunteers who write documentation or try to help others, and if those fail, and the issue is important enough, the maintainers of the specific subsystem will eventually answer and give you very competent answers / fixes, etc.
                      Last edited by blackiwid; 27 November 2023, 09:30 PM.

