ZFS On Linux 0.7.8 Released To Deal With Possible Data Loss

  • #31
    Originally posted by jrch2k8 View Post

    ...
    I appreciated your long comment above, as it made me think through the various scenarios presented and consider what you were saying. In one case it wasn't even related to filesystems (the x86 CPU must not be very secure compared to other options out there; I'm thinking those must be big-iron systems, built for these kinds of things). Anyway, I know enough to think critically through the comment and learn some things, but sometimes it takes an informative post (I'll assume it is mostly correct, but for my sake it was more about the examples and how needs change depending on requirements). So, good post for someone like me.

    Comment


    • #32
      Originally posted by ryao View Post
      This is harder than rocket science. There, people only need to be right a few times a year. Here we must be right in innumerable instances every day.
      Harder than rocket science? LOL! That statement is 100% bullshit.
      I don't think I'm going out on a limb here by stating that you've never in your life worked at NASA or equivalent.
      Are file systems specialized areas? Certainly.
      Are you exaggerating here? Absolutely.

      Comment


      • #33
        Been using ZFS and Btrfs for years (Btrfs longer).

        If I recall correctly, I never had any data loss on data partitions with either ZFS or Btrfs (not 100% sure about Btrfs, since there were several cases of irrecoverable data loss and grave corruption on the root ( / ) partition, so there could also have been one or two cases of issues on data partitions).

        The most conspicuous issue with filesystems for me (so far) was that the system suddenly became unbootable, or could only get into a kind of basic system with lots of crucial binaries and libraries missing due to data corruption. That happened back when I was still using reiser4 at an early stage, and it got fixed over time; the same happened with Btrfs, but multiple times. There were issues with ext4 and data corruption during its early days, but after that it seems to have been rock solid (also on my Android phones with 3.4 and 3.10 kernels).

        Delayed allocation with longer commit intervals and/or laptop mode can be quite an issue (seen on my Android phones, and when trying to "tweak" longer allocation delays on my desktop with ext4).
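
        (A minimal sketch of the usual userspace workaround for that class of problem, assuming Python and a POSIX filesystem; the path is just an example. Writing to a temp file, fsync'ing it, then renaming over the target avoids the classic "zero-length file after a crash" symptom of delayed allocation combined with rename-without-fsync.)

        import os

        def atomic_write(path, data: bytes):
            """Write a file so a crash leaves either the old or the new
            contents, never a truncated/empty file."""
            tmp = path + ".tmp"
            with open(tmp, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())      # force the data blocks to disk
            os.rename(tmp, path)          # atomic replace on the same filesystem
            # fsync the directory so the rename itself is durable
            dirfd = os.open(os.path.dirname(path) or ".", os.O_DIRECTORY)
            try:
                os.fsync(dirfd)
            finally:
                os.close(dirfd)

        atomic_write("/tmp/example.conf", b"key = value\n")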

        ENOSPC errors and issues are still being reported on the btrfs mailing list regularly (the latest a few months back, if I recall correctly; I had the same experience at the beginning of the year).

        Thus my track record so far, in terms of reliability and data retention, is:
        - ZFS: best (so far, *knock on wood*) - it could be faster though (performance improvements do arrive from time to time)
        - Btrfs: rather unstable for the system (root) partition [never used any snapshots]; it also had scrambling/data corruption after hard lockups, where data was severely corrupted and the system wasn't bootable anymore, but on data partitions it was a real data saver (it detected silent sector corruption on 2 hard drives; data intact) - still using it on the system and portage partitions because I want fully checksumming filesystems on all partitions
        - ext4: fast and performant; had data corruption/loss in the beginning with its delayed allocation and after crashes & hard lockups, but these days it seems to be pretty reliable (mostly using it on my Android phones); used it on data partitions in the past and didn't encounter any issues, though there might have been some silent data corruption (discovered issues with older MP3 files that were migrated from an ext4 /home to a newer-filesystem /home)
        - reiserfs: used this for a very long time on data, system and /portage tree partitions - I can't recall any issues with data corruption or data loss from personal experience; there were issues due to upstream changes and code cleanup (even loss and/or corruption of data), but luckily I wasn't affected - so far the code base seems to be stable (?), though I can't really tell since I'm not following the mailing lists that closely anymore
        - (sidenote) jfs was pretty solid when I used it a few years ago - fast & stable, no data issues - can't tell about silent data corruption due to the lack of checksumming though
        - (sidenote) xfs caused me issues in the early days of my Gentoo usage with hard locks or forced reboots (power button off & on), which ended up corrupting & losing data and leaving the system in an unbootable state - so I'm a bit wary of that filesystem on the root partition; I used it on data partitions for some time and didn't encounter any issues, though I'm not sure about silent data corruption or files going missing during a hard lockup either (see the sketch after this list for a userspace way to check)
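
        (On filesystems without data checksumming you can approximate what a ZFS/Btrfs scrub does from userspace by keeping a checksum manifest and re-verifying it periodically. A rough Python sketch; the manifest name and file layout are arbitrary choices, not anything standard.)

        import hashlib, json, os, sys

        def sha256(path, bufsize=1 << 20):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                while chunk := f.read(bufsize):
                    h.update(chunk)
            return h.hexdigest()

        def build_manifest(root, manifest="checksums.json"):
            """Record a checksum for every file under `root`."""
            sums = {}
            for dirpath, _, files in os.walk(root):
                for name in files:
                    p = os.path.join(dirpath, name)
                    sums[p] = sha256(p)
            with open(manifest, "w") as f:
                json.dump(sums, f, indent=1)

        def verify_manifest(manifest="checksums.json"):
            """Re-hash every recorded file and report mismatches (possible bit rot)."""
            with open(manifest) as f:
                sums = json.load(f)
            for p, old in sums.items():
                if not os.path.exists(p) or sha256(p) != old:
                    print("corrupt or missing:", p, file=sys.stderr)

        # build_manifest("/mnt/data")   # run once after writing the data
        # verify_manifest()             # re-run on a schedule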

        So: you can't generally judge a filesystem as mature or immature, stable or unstable - it really depends on the use case, and in specific "niches" one is better than the others - each filesystem has its raison d'ĂȘtre.

        Last edited by kernelOfTruth; 11 April 2018, 01:26 PM.

        Comment


        • #34
          Originally posted by kernelOfTruth View Post
          ...
          If you are not using subvols/snapshots with BTRFS, you probably shouldn't bother using it; MDADM/ext4 or XFS might be better. I don't see the wisdom in using a slower COW filesystem if you are not snapshotting. Snapshotting is a great advantage, especially if you like to change stuff or save 'system states'. It works fine for me as a rootfs; I do use other filesystems for other parts of the file tree, like for virtualization, containers, small data files, etc. But as a rule, most of the files in my rootfs don't get touched very often, and they are usually modified by package management or a user.
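
          (A rough sketch of that "save a system state before touching it" workflow, driving the stock btrfs CLI from Python; the subvolume layout and snapshot directory here are assumptions rather than anything from this setup, and it has to run as root.)

          import datetime
          import subprocess

          SOURCE = "/"              # hypothetical: rootfs is a btrfs subvolume
          SNAPDIR = "/.snapshots"   # hypothetical: directory holding snapshots

          def snapshot_before_update():
              """Take a read-only snapshot of the root subvolume, e.g. right
              before package management modifies the rootfs."""
              stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
              dest = f"{SNAPDIR}/root-{stamp}"
              subprocess.run(
                  ["btrfs", "subvolume", "snapshot", "-r", SOURCE, dest],
                  check=True,
              )
              return dest

          def list_snapshots():
              """Show the subvolumes/snapshots known to the filesystem."""
              subprocess.run(["btrfs", "subvolume", "list", SNAPDIR], check=True)

          if __name__ == "__main__":
              print("snapshot created at", snapshot_before_update())
              list_snapshots()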

          ... and dang, missed my chance to flame the mission-ready, critical-data-capable master race file system. #shucks.

          Comment


          • #35
            Originally posted by pcxmac View Post

            ...
            Even if you don't use snapshots, COW is a good thing from a data-integrity point of view. It's like journaling the data itself, not just the metadata. It reminds me of Linus once ranting against one of the filesystems in Linux (I don't remember which one), saying that the focus on ensuring coherent metadata was basically missing the point: users care about their *data*. No one gives a sh*t about metadata; it's only there to make it possible to store the data. If the filesystem can guarantee that its structures are consistent but the files may actually end up containing garbage, then, according to Linus, it's pretty much useless. I think he's absolutely right about that.
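
            (A toy illustration of that point, in Python; this is not how any real filesystem is implemented, just the shape of the idea. The new data is written first and the "metadata" pointer is flipped only afterwards, so a crash at any moment leaves the file pointing at either the complete old version or the complete new one; and because old blocks are never overwritten, keeping them around is what makes snapshots cheap.)

            # Toy copy-on-write "block store": data blocks are never overwritten
            # in place; an update writes a new block, then atomically flips the
            # metadata pointer to it.

            blocks = {}               # block_id -> bytes      (the "data")
            inode = {}                # file name -> block_id  (the "metadata")
            next_id = 0

            def cow_write(name, data: bytes):
                global next_id
                new_block = next_id          # 1. allocate a fresh block
                next_id += 1
                blocks[new_block] = data     # 2. write the new data elsewhere
                # -- a crash before the next line leaves the old version intact --
                inode[name] = new_block      # 3. flip the pointer in one step
                # the old block can be garbage-collected, or kept as a snapshot

            def cow_read(name):
                return blocks[inode[name]]

            cow_write("file", b"version 1")
            cow_write("file", b"version 2")
            print(cow_read("file"))   # b'version 2'; b'version 1' still exists in blocks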

            Subvols actually have nothing to do with COW; it just so happens that the two filesystems that support them, ZFS and BTRFS, also use COW, but technically nothing prevents a traditional FS like XFS, ext4 (or even FAT!) from implementing subvolumes.

            Comment


            • #36
              Dunno about that; COW is inherently able to do subvols/snapshots because it doesn't overwrite old data. ext4 supports integrity checking within its metadata. QEMU has a couple of formats that are COW and also support snapshots, and I believe VirtualBox has a snapshot-capable file spec that is COW as well. How many non-COW filesystems support snapshots elegantly? The reason you want to use COW is that it supports things like snapshots; otherwise you are better off with a better-performing filesystem like ext4, and backing up like you should always do regardless of the filesystem. I use ZFS for bulk storage - that's what ZFS is good for: spinning rust and reliable backups. Commission a pool, use it, and when it's no longer useful, decommission it and do whatever you want with the disks. BTRFS has other use cases, like image seeding and simple/intuitive snapshot support. MDADM plus ext4, xfs, flash filesystems, etc. are all great options with their own use cases.
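
              (On the QEMU point: qcow2 is one of those COW formats with built-in snapshot support, manageable through qemu-img; a quick sketch, where the image name and size are just placeholders and qemu-img is assumed to be on the PATH.)

              import subprocess

              IMG = "disk.qcow2"   # placeholder image name

              def run(*cmd):
                  subprocess.run(cmd, check=True)

              # create a 10G copy-on-write image
              run("qemu-img", "create", "-f", "qcow2", IMG, "10G")

              # take an internal snapshot, list snapshots, then roll back to one
              run("qemu-img", "snapshot", "-c", "before-upgrade", IMG)   # create
              run("qemu-img", "snapshot", "-l", IMG)                     # list
              run("qemu-img", "snapshot", "-a", "before-upgrade", IMG)   # apply/roll back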

              Now, if btrfs supported seeding at the snapshot level rather than the device level, things like overlayfs would be completely superseded for some use cases. I would love to mount an empty subvol over an existing one at boot to capture changes. Also, if BTRFS supported 'diff'ing subvols (I haven't looked in a while), that would be cool too. There are lots of things BTRFS could do better, and interesting ways it could reshape how people see their filesystems.
              Last edited by pcxmac; 12 April 2018, 01:38 AM.

              Comment
