SSDFS Is The Newest Linux Filesystem & Catering To NVMe ZNS SSDs


  • #31
    Originally posted by stormcrow View Post

    Citation needed. Most of the cases I can think of are already covered by device hardware level ECC or beyond the FS layer to fully manage (like a device not acknowledging cache flushes). If there is no hardware ECC in the data stream to the device, then no FS level correction is going to work either, because the data is already corrupt before it reaches the disk. If the device itself is regularly writing corrupt data, you've lost entirely and the FS can only detect it, not correct it (since you can't trust the writes not to be corrupt, including the parity writes). If the FS code is writing glitchy data, then you've got a failure case where it probably doesn't matter if there's parity data or not, and none of it can be trusted without full code path verification. In that case, you'd better have good backups.

    Cosmic rays? Doesn't work against HDDs, at least as far as I know. It does against SSDs, but silent single-bit flips are within the realm of built-in hardware ECC algorithms rather than the FS layer (again). NAND flash devices are probably (the linked article is about Intel) across the board loaded up with ECC algorithms to prevent known and unknown sources of errors. Just beware of cheap no-name devices that suffer from high phantom cell discharges, and Samsung with their rapidly decreasing robustness (again, not a FS layer problem). :P
    My personal experience:

    Bitrot on ext3, ext4 and ReiserFS on plain HDDs. I have lots of photographs in which I discovered bitrot. This is why I switched to Btrfs a long time ago.

    The last occurrence of bitrot was two years ago, but Btrfs detected it. This time it was on a Samsung 830 SSD. I believe it happened just after a solar flare, so maybe it was cosmic rays...

    At work we have experienced HW RAID controllers that introduced corruption due to firmware bugs. We have also seen issues when running VMs with backing stores over iSCSI and NFS.

    So, yes, there are plenty of examples. On the Btrfs mailing lists you see many people with bad firmware, bad implementations of write barriers, bad USB-SATA controllers, etc., leading to corruption even when the drives themselves are OK.

    And as you admit yourself, there are plenty of bad devices out there too. But how do you know this as a normal user?

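The detect-and-repair behaviour described here reduces to storing a checksum alongside each block and keeping a redundant copy to heal from, much like Btrfs's DUP profile on a single device. A minimal Python sketch with hypothetical helpers (zlib.crc32 stands in for Btrfs's crc32c, and a dict stands in for the disk):

```python
import zlib

def write_block(store, key, data):
    """Store two copies of the data, each with its checksum (DUP-style)."""
    store[key] = [(zlib.crc32(data), data), (zlib.crc32(data), data)]

def read_block(store, key):
    """Return good data, transparently repairing a corrupted copy."""
    copies = store[key]
    good = [d for crc, d in copies if zlib.crc32(d) == crc]
    if not good:
        raise IOError("unrecoverable corruption: all copies bad")
    if len(good) < len(copies):
        # Self-heal: rewrite every copy from a known-good one.
        store[key] = [(zlib.crc32(good[0]), good[0])] * len(copies)
    return good[0]

disk = {}
write_block(disk, "photo.jpg", b"original pixels")
crc, _ = disk["photo.jpg"][0]
disk["photo.jpg"][0] = (crc, b"cosmic-ray pixels")          # silent bit flip in copy 0
assert read_block(disk, "photo.jpg") == b"original pixels"  # detected and repaired
```

A plain RAID mirror lacks the per-block checksum, so it cannot tell which copy is the corrupt one; that is exactly the gap a checksumming filesystem (or dm-integrity underneath RAID) fills.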

    Comment


    • #32
      Originally posted by S.Pam View Post

      Examples of errors include memory and controller corruption, broken write barriers, and otherwise bad firmware.

      Just be warned that traditional RAID (HW RAID, mdadm, BIOS/SW RAID, etc.) cannot detect and correct corrupt data on individual drives unless the drive returns a read error.

      Btrfs and ZFS are at the moment the only usable filesystems on Linux that can detect and correct data corruption. Btrfs has a DUP profile that can be used on single devices to allow automatic repair.
      mdadm + dm-integrity can do that too.
      I used that for a while before I switched to ZFS.
      It's not really user-friendly, though. But I really loved that I could use an external device to store the checksums.

      EDIT: If you store the checksum data on an external device you don't have that performance hit, and you can mount the device without dm-integrity at all.
      Last edited by flower; 26 February 2023, 07:24 AM.
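      For reference, a setup along the lines flower describes might look like the sketch below. Device paths are placeholders, and it assumes integritysetup (from the cryptsetup package) with detached metadata via --data-device, which is what lets the data device be mounted later without dm-integrity:

```shell
# Sketch only -- run as root; /dev/sdX*, /dev/sdY* are placeholders.
# Checksum metadata goes on small separate partitions (sdY1, sdY2) so the
# data partitions (sdX1, sdX2) stay mountable without dm-integrity.
integritysetup format /dev/sdY1 --data-device /dev/sdX1
integritysetup format /dev/sdY2 --data-device /dev/sdX2
integritysetup open /dev/sdY1 int0 --data-device /dev/sdX1
integritysetup open /dev/sdY2 int1 --data-device /dev/sdX2

# RAID1 on top: a checksum mismatch now surfaces as a read error,
# which mdadm can repair from the other mirror leg.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/mapper/int0 /dev/mapper/int1
```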

      Comment


      • #33
        Originally posted by Velocity View Post
        Is there any benefit this filesystem provides over the already existing ZFS filesystem? Or is this just a GPL abomination of a superior filesystem, and it just comes down to the license wars I've witnessed for decades now? At least exFAT works on multiple operating systems, so it is cross-platform. This will be Linux-only, not even supporting BSD?
        If it's GPL then that's a benefit in itself. Unlike ZFS or BSD-based crap.

        Comment


        • #34
          Originally posted by loganj View Post
          we sure needed one more fs. there are so few fs on linux
          Yeah, every FS is the same. Just because you have only one broken filesystem to choose from on your Windows machine doesn't mean that's the rule. Or better, let's use bloated ZFS on Android devices.

          Comment


          • #35
            Originally posted by Velocity View Post
            out of tree ZFS IS SIMPLY fcked, DO NOT FIGHT IT! - use fully supported, in tree GPL BTRFS, my message to you people...
            Fixed this for you. Slowlaris is dead; ZFS died with it.

            Comment


            • #36
              Originally posted by Eumaios View Post
              NILFS2 sounds really interesting. Do any distros support it? (That is, for root partition. The kernel.org page says you can download it from sourceforge--and then presumably use it for other partitions?)
              It depends what you mean by 'support'. I think the short answer is no, but if you are technically capable, it is reasonably easy to move an existing installed system onto a nilfs2 root.
              1) Set up your nilfs-formatted target filesystem(s)
              2) Boot into a 'live-CD'
              3) Copy (rsync) the existing filesystem across to the nilfs2-formatted filesystem, then chroot and modify fstab and a few other things*.
              4) Reboot.

              *Some distributions do not include nilfs2 in the default modules used in GRUB, so the GRUB setup needs modifying, if you are using GRUB as a bootloader.
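              From a live-CD shell, the steps above might look something like this sketch; device names, mount points, and the Debian-style update-grub call are assumptions to adapt to your own distribution:

```shell
# Run as root from the live environment; /dev/sdXn (new root) and
# /dev/sdXo (old root) are placeholders.
mkfs -t nilfs2 /dev/sdXn                     # 1) format the target
mkdir -p /mnt/old /mnt/new
mount /dev/sdXo /mnt/old                     # old root, read from here
mount -t nilfs2 /dev/sdXn /mnt/new           # 2) mount the new nilfs2 root
rsync -aHAX /mnt/old/ /mnt/new/              # 3) copy everything across

# chroot to fix fstab (root entry must now say nilfs2) and the bootloader;
# GRUB needs the nilfs2 module included, per the footnote above.
for d in dev proc sys; do mount --bind /$d /mnt/new/$d; done
chroot /mnt/new update-grub
```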

              It's certainly doable. I've been running Lubuntu on my 'daily driver' laptop with a nilfs2 root (and home) for years (having done the above), and am doing the same with Mint, even though it is not officially supported (i.e. not an easy option in the installer). I like nilfs2. It is important not to let the disk become full, though; that can cause problems, especially if the full disk coincides with memory stress.

              You don't need to download from SourceForge; apt install nilfs-tools will do the job.

              I have never needed to restore from an old backup due to a nilfs2 failure. I have had to do so once in the last 8 years, because of a bug in separate (non-nilfs2) driver code which wrote corrupt data and metadata.

              Note that nilfs2 does not have a working fsck, so if it ever does decide to be unhelpful, you will need a reasonable backup. But everyone has good backups, don't they?

              Note also the features which NILFS2 does not support yet (from https://docs.kernel.org/filesystems/nilfs2.html):
              • atime
              • extended attributes
              • POSIX ACLs
              • quotas
              • fsck
              • defragmentation
              This means systemd will complain a bit. If any of the above features are important to you, nilfs2 isn't for you.

              Comment


              • #37
                Originally posted by flower View Post

                mdadm + dm-integrity can do that too.
                I used that for a while before I switched to ZFS.
                It's not really user-friendly, though. But I really loved that I could use an external device to store the checksums.

                EDIT: If you store the checksum data on an external device you don't have that performance hit, and you can mount the device without dm-integrity at all.
                Pretty sure ButterFS has this option as well.
                Hi

                Comment


                • #38
                  Originally posted by brucethemoose View Post

                  Yeah it is cool, but it's also a heavy, high-overhead FS. It's not a replacement for, say, f2fs on Android or any of the simpler FSes for root partitions on single-drive laptops. Or even just a game drive, where integrity and redundancy aren't as important as performance and task energy.
                  Properly tuned ZFS works fine with 768MB RAM. Most Android devices these days have 4GB+.

                  Nothing would stop you from using it on a single-drive laptop either. Been there, done that. You won't get all the fancy features this way, but still more than from bog-standard Ext4/UFS2.
                  Last edited by aht0; 27 February 2023, 03:31 AM.

                  Comment


                  • #39
                    Surely ZFS could become more optimized for single and mirrored NVMe configurations. I'd prefer to stick with that and have no interest in filesystem hopping.

                    Comment


                    • #40
                      Originally posted by aht0 View Post

                      Properly tuned ZFS works fine with 768MB RAM. Most Android devices these days have 4GB+.

                      Nothing would stop you from using it on a single-drive laptop either. Been there, done that. You won't get all the fancy features this way, but still more than from bog-standard Ext4/UFS2.
                      I'm not saying it won't work, but it's gonna have quite a bit of CPU overhead for features that are not being used.

                      Comment
