ZFS On Linux 0.7.8 Released To Deal With Possible Data Loss


    Phoronix: ZFS On Linux 0.7.8 Released To Deal With Possible Data Loss

    If you have been using ZoL 0.7.7, which was released last month, you will want to upgrade right away to ZFS On Linux 0.7.8...

    http://www.phoronix.com/scan.php?pag...On-Linux-0.7.8
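    One quick way to check which ZoL release a machine is actually running: ZFS on Linux exposes the loaded module's version through sysfs. A minimal sketch, assuming the usual /sys/module/zfs/version path and a "version-release" string format:

    ```shell
    #!/bin/sh
    # Hedged sketch: warn if the loaded ZFS module is the affected 0.7.7.
    # The sysfs path and the version string format are assumptions that
    # hold for typical ZFS on Linux installs.
    zfs_ver=$(cat /sys/module/zfs/version 2>/dev/null || echo "not-loaded")
    if [ "${zfs_ver%%-*}" = "0.7.7" ]; then
        echo "WARNING: ZoL 0.7.7 detected -- upgrade to 0.7.8"
    else
        echo "ZFS module version: ${zfs_ver}"
    fi
    ```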

  • #2
    Bugs happen. It's good it got caught now rather than six months down the line.



    • #3
      So, with Btrfs, a power failure during a write may cause inconsistency in parity on RAID 5/6. With ZFS, a released version can cause data loss on any file system, without a power failure.
      What do the people calling Btrfs unreliable and encouraging Btrfs users to switch to ZFS have to say to that?

      What do I learn from this? Ext4 is the good choice if you want something reliable and don't need advanced features. Btrfs is good if you want advanced features and don't absolutely need FS RAID 5/6 (you still have hardware and mdadm RAID). The only case where ZFS may be the good choice is if you absolutely want FS RAID 5/6 and can't use hardware/mdadm RAID, or if you want encryption built into the file system, without the use of any other encryption software.



      • #4
        According to the bug report (https://github.com/zfsonlinux/zfs/issues/7401), this is quite easy to reproduce, which makes me wonder how the official releases are tested. Running a test suite on the release branch should have caught this one.



        • #5
          It's a bug, and bugs happen, especially in a huge project like this. Which is why you should never blindly run bleeding-edge versions in production until some time has passed (*cough* Fedora *cough*).



          • #6
            Originally posted by ALRBP View Post
            So, with Btrfs, a power failure during a write may cause inconsistency in parity on RAID 5/6. With ZFS, a released version can cause data loss on any file system, without a power failure.
            What do the people calling Btrfs unreliable and encouraging Btrfs users to switch to ZFS have to say to that?

            What do I learn from this? Ext4 is the good choice if you want something reliable and don't need advanced features. Btrfs is good if you want advanced features and don't absolutely need FS RAID 5/6 (you still have hardware and mdadm RAID). The only case where ZFS may be the good choice is if you absolutely want FS RAID 5/6 and can't use hardware/mdadm RAID, or if you want encryption built into the file system, without the use of any other encryption software.
            Wow, that escalated quickly. Calm down: this is a pretty niche issue to start with, and it affects only the current release, which is quite new. It doesn't wipe your data away or kill kittens; it simply stops the current copy with an out-of-space condition if it hits the exact conditions. To be entirely honest, it took me a while to reproduce it on Arch Linux (CentOS is easier).

            It is so niche I didn't even downgrade the version on any of my servers and workstations (I'll keep them under a looking glass just in case), and so far all good; neither my raidz60 nor my mirrors have presented any issue.

            About what you learned: well, you want ZFS when you need safety and enterprise features. I've been using it since the Sun Solaris 10 days all the way until today, and 10+ years later I have never had to use a backup because of ZFS, never had to stop a server to replace a hard drive (on hot-swap-capable hardware, of course), never had any issue resilvering a pool after a failure, never had an electrical failure corrupt a pool or data in any way, even with heavy dedup or encrypted sub-volumes, and never had any downtime moving between OSes. In fact, most of my current raidz60 system came physically straight from Solaris 11/OpenSolaris to Arch Linux, and I mean this literally; sure, over the years I replaced all the old disks without issues, but that's because it is that easy.

            Sure, every once in a while some stupid bug appears and gets fixed, but unlike with ext4, XFS, Btrfs, etc., I have never seen one that actually destroys the data once it is stored (outside of multiple disk failures on one side of the raid, but nothing can protect against that, which is why everyone should make backups). I mean, even in the early days of ZFS, when Solaris 10 was all the rage, there was a bug that corrupted metadata under certain conditions, and a day later Sun fixed it and ZFS repaired itself.

            My point is that ZFS is bulletproof where it should be, but that doesn't mean it is impervious to bugs; nothing is. So stop panicking and trying to spread FUD. As a side note, mdadm is a barely decent RAID system for RAID0 (like those cheap on-motherboard RAIDs) for regular users, but should never be used for storage purposes.



            • #7
              The disappearing files are not actually gone, but orphaned. We will likely release a way to get them back in the very near future. I and others are still doing analysis on it, so I'll copy and paste some posts that I made on the topic on reddit and Hacker News verbatim to address the questions of how we test and how we missed this when reproduction is "easy":

              The bad patch passed review by developer(s) from other platforms. Matthew Ahrens was a reviewer. This was not merged based on unilateral review by the ZFSOnLinux developers. There was also nothing Linux specific about it.

              That said, bugs happen. We should be putting new test cases in place to help catch such regressions in the future. If we find more ways to harden the code against regressions of this nature as we continue our analysis, we will certainly do them too.
              I should have been more comprehensive in my previous reply. Normal usage is ill defined, but reads and writes of existing files have no chance of triggering the issue. Any snapshots that contain the orphaned files will need to be destroyed in order to fully purge the damage from affected pools, but it is fine to make them in the interim. Orphaned files should not be overwritten by further usage because they are still using space allocated from the space maps.

              You are right about the touch script. Albert Lee designed it after studying syscall traces from a CentOS cp binary. Other things can definitely trigger it. I read on reddit that rclone triggered it. Extracting tar archives has been suggested to also be able to trigger it. I do not expect many systems running 0.7.7 to have actually triggered the bug though. We had a hard time trying to reproduce this on systems without an old enough version of coreutils’ cp.
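              A hedged sketch of the kind of touch-loop reproducer described above; the mount point and file count are placeholders, and this is not the project's official reproduction script:

              ```shell
              #!/bin/sh
              # Hedged sketch: create many files in quick succession on a ZFS
              # dataset, then compare the directory listing with what was made.
              # "/tank/test" is a placeholder mount point; adjust before running.
              dir="${1:-/tank/test}"
              count=100000
              if [ -d "$dir" ]; then
                  i=1
                  while [ "$i" -le "$count" ]; do
                      touch "$dir/file.$i"
                      i=$((i + 1))
                  done
                  sync
                  echo "created $count, directory lists $(ls "$dir" | wc -l)"
              else
                  echo "skipping: $dir does not exist"
              fi
              ```

              On an unaffected release the created and listed counts should match; a shortfall would indicate orphaned files.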

              In any case, instructions on what to do to detect and repair the damage will be made available after we finish our analysis and make the tool to fix this. That tool will likely be a subcommand in 0.7.9, which I expect Brian to push out fairly quickly once we have finalized the complete solution. Reverting the patch is a stopgap measure both to stop the population of affected systems from growing and stop the orphaned file counts on affected systems from growing.
              We already had a massive set of complex test cases that are executed on every proposed patch and a proposed patch must pass all of them before it is merged to master. This one managed to get by all of them. Additional test(s) designed to catch this will be put into place to try to catch similar regressions in the future.

              Unfortunately, it is not possible to design tests that can catch every critical bug. The existing tests have prevented hundreds of such bugs from entering the code base, but there will always be bugs that can get past them. We try very hard with code review and time in HEAD to help catch such things before they get to a production release, but there are bugs that can get past that too. They are incredibly rare, and we will continue to improve things so that they become rarer.

              The only way to avoid such bugs from ever getting past us would be to redesign ZFSOnLinux to be formally verifiable. Such an effort would catch and fix an enormous number of bugs, but it is not feasible with current technology.
              You might want to look here:

              http://buildbot.zfsonlinux.org

              Sadly, it is not easy to see all of the tests being run. Here is an example of one of the test runs on just 1 platform of several:

              http://buildbot.zfsonlinux.org/build...29/builds/2927

              I have not taken a look at this in a while in part because I have been less active than I used to be, but there used to be more test suites. Specifically, we used to run the XFS tests, tests to verify old pools still imported, a filebench test on ext4 on a zvol and tests on the SPL. The ZFS Test Suite from Illumos has since been adopted and expanded to over 1000 tests that take 3 hours (it was originally over 500 when we first started using it if I recall). It could be that Brian felt that the older tests were redundant, especially since I notice several tests that overlap with the older test suites, but I’ll ask him what happened to the others. The SPL Tests might be reserved for changes to the SPL now. I will likely check that in the morning.

              Also, the ztest run is stochastic testing explicitly meant to catch these kinds of bugs.
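              For anyone who wants to exercise these locally, a hedged sketch; the installed path of the test-suite runner and the ztest flags are assumptions that vary by distro and version:

              ```shell
              #!/bin/sh
              # Hedged sketch: run the ZFS Test Suite and ztest if present.
              # The runner path is an assumption; in a source tree it usually
              # lives at scripts/zfs-tests.sh instead.
              zts=/usr/share/zfs/zfs-tests.sh
              if [ -x "$zts" ]; then
                  "$zts" -v
              else
                  echo "zfs-tests.sh not found at $zts"
              fi

              # ztest does stochastic stress testing; -T sets run time in seconds.
              if command -v ztest >/dev/null 2>&1; then
                  ztest -T 300 -V
              else
                  echo "ztest not installed"
              fi
              ```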
              Last edited by ryao; 04-10-2018, 10:05 AM. Reason: Added quote to address how “easy” reproduction is for us.



              • #8
                Originally posted by ALRBP View Post
                What do the people calling Btrfs unreliable and encouraging Btrfs users to switch to ZFS have to say to that?
                They usually meant ZFS on FreeBSD, like FreeNAS.

                This is a bug for the Linux port of ZFS.

                What do I learn from this?
                That you don't know enough to make these statements without looking totally amateurish.



                • #9
                  Originally posted by starshipeleven View Post
                  They usually meant ZFS on FreeBSD, like FreeNAS.

                  This is a bug for the Linux port of ZFS.
                  Ubuntu and Debian were not affected either because they were still on 0.7.5. I do not know if FreeBSD-CURRENT merged the patch. It should have met all of the criteria needed to be merged though.



                  • #10
                    Originally posted by ALRBP View Post
                    What do I learn from this? Ext4 is the good choice if you want something reliable and don't need advanced features.
                    Ext4 + Mdadm is about as rock solid as it gets. I won't be switching away any time soon.
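                    For reference, a minimal sketch of that setup, assuming two spare disks at the placeholder paths /dev/sdb and /dev/sdc (these commands are destructive, so double-check the device names first):

                    ```shell
                    #!/bin/sh
                    # Hedged sketch of an ext4-on-mdadm RAID1 mirror.
                    # /dev/sdb and /dev/sdc are placeholders; this WIPES both disks.
                    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
                    mkfs.ext4 /dev/md0
                    mkdir -p /mnt/data
                    mount /dev/md0 /mnt/data

                    # Check array health at any time:
                    cat /proc/mdstat
                    ```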

