Btrfs & XFS File-Systems See More Fixes With Linux 5.4


  • #11
    Originally posted by Royi View Post
    Those who wrote that they would use XFS: why do you prefer it over EXT4?
    Simply because that's where the major development effort is nowadays (Red Hat).
    Not that ext4 is bad, anyway.




    • #12
      Originally posted by Royi View Post
      Those who wrote that they would use XFS: why do you prefer it over EXT4?
      I stopped using EXT4 about four years ago and still test it every now and then. For me, on SSDs, NVMe and HDDs, it's XFS all the way. Super reliable, and it doesn't waste space like EXT3/4 (format a drive and see how much free space there is while it's empty...). I copied about 600GB of data onto my drive and had 200GB free; XFS had almost 350GB free with the same drive and data.

      If I want checksumming I use ZFS, NOT Btrfs, as I've had too many data issues with it; it certainly doesn't like power outages... directories get marked read-only and the only fix is to recreate the entire filesystem, etc. I did read somewhere that XFS will be getting checksumming, which will be awesome. Then it's just ZFS for my server (snapshots, RAID).

      The heading of this article is an oxymoron, "the mature XFS and Btrfs". It should read "the mature XFS and immature Btrfs".
      Last edited by dfyt; 20 September 2019, 06:53 AM.
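
      For what it's worth, XFS v5-format filesystems already checksum their metadata (CRC32c), though not file data, and that has been the mkfs.xfs default for a while now. A quick way to check an existing filesystem, assuming xfsprogs is installed and /mnt/data stands in for the real mount point or device:

      # Look for crc=1 in the meta-data line of the geometry output
      xfs_info /mnt/data | grep -o 'crc=[01]'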



      • #13
        Originally posted by dfyt View Post

        I stopped using EXT4 about four years ago and still test it every now and then. For me, on SSDs, NVMe and HDDs, it's XFS all the way. Super reliable, and it doesn't waste space like EXT3/4 (format a drive and see how much free space there is while it's empty...). I copied about 600GB of data onto my drive and had 200GB free; XFS had almost 350GB free with the same drive and data.

        If I want checksumming I use ZFS, NOT Btrfs, as I've had too many data issues with it; it certainly doesn't like power outages... directories get marked read-only and the only fix is to recreate the entire filesystem, etc. I did read somewhere that XFS will be getting checksumming, which will be awesome. Then it's just ZFS for my server (snapshots, RAID).

        The heading of this article is an oxymoron, "the mature XFS and Btrfs". It should read "the mature XFS and immature Btrfs".
        Are your bad experiences with Btrfs based on recent kernels? It has actually improved a lot lately.

        I have several Btrfs servers (all in RAID 1 configuration), and some of them have survived really, really, REALLY bad events (including power failures) without losing a single bit.
        Last edited by cynic; 20 September 2019, 07:06 AM.
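
        If anyone wants to verify that such an event didn't silently damage anything, a scrub re-reads all data and metadata against the checksums. A minimal sketch, assuming the filesystem is mounted at /mnt (placeholder path):

        # Re-read everything and verify checksums, in the foreground, with per-device stats
        btrfs scrub start -Bd /mnt
        # Show the per-device error counters (read/write/flush/corruption/generation)
        btrfs device stats /mnt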



        • #14
          Originally posted by dfyt View Post
          I stopped using EXT4 about four years ago and still test it every now and then. For me, on SSDs, NVMe and HDDs, it's XFS all the way. Super reliable, and it doesn't waste space like EXT3/4 (format a drive and see how much free space there is while it's empty...). I copied about 600GB of data onto my drive and had 200GB free; XFS had almost 350GB free with the same drive and data.
          Ext4 by default reserves 5% of the space for the root user. OK, it's not as much as the "wasted space" you claim above, but it's still significant and worth mentioning.

          Let me do some quick math here: 600 + 350 is 950GB of "available space" on the drive, and 950 / 100 * 5 = 47.5GB of reserved space. That's nearly 50GB locked down for nothing.

          Use

          tune2fs -m 0 /dev/sdXY

          on the ext4 partition to remove this reservation (or use the openSUSE partitioner, which lets you specify this setting when you create the partition).
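
          A few related commands, in case anyone wants to see what is currently reserved or avoid the reservation at format time; /dev/sdXY is a placeholder as above:

          # Show the current reservation on an existing ext4 filesystem
          tune2fs -l /dev/sdXY | grep -i 'reserved block count'
          # Create a new ext4 filesystem with no root reservation at all
          mkfs.ext4 -m 0 /dev/sdXY
          # Or keep a smaller reservation, e.g. 1%
          tune2fs -m 1 /dev/sdXY

          Keep in mind the reservation exists so root and system daemons can still write when the disk fills up, so dropping it to 0 on the root filesystem is a trade-off.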



          • #15
            Originally posted by starshipeleven View Post
            Ext4 by default reserves 5% of the space for the root user. OK, it's not as much as the "wasted space" you claim above, but it's still significant and worth mentioning.

            Let me do some quick math here: 600 + 350 is 950GB of "available space" on the drive, and 950 / 100 * 5 = 47.5GB of reserved space. That's nearly 50GB locked down for nothing.

            Use

            tune2fs -m 0 /dev/sdXY

            on the ext4 partition to remove this reservation (or use the openSUSE partitioner, which lets you specify this setting when you create the partition).
            Thanks, I've done that in the past. My concern now is that I would have to tune all my EXT4 setups, externals, etc. I'm more concerned with the out-of-the-box experience, and there XFS has been stellar at dealing with my content. I have never lost a single byte or had a corrupt filesystem. We have many power failures, and despite a UPS (batteries can fail) I've had good experience with EXT4, but not as good as with XFS.

            As for Btrfs, I haven't tested it on recent kernels. My use case would be RAID 5 first, and from what I've seen very little has been done there. My corruption issues happened with plain filesystems, no RAID at all, and if it failed there... I WISH I could trust it, as ZFS's main bugbear is having to copy everything off the RAID and back on when you expand it. Btrfs rocks here.
            Last edited by dfyt; 20 September 2019, 09:43 AM.
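
            On that last point, growing a Btrfs array in place really is just a device add plus a rebalance. A minimal sketch, assuming an existing RAID 1 filesystem mounted at /mnt and a new disk /dev/sdZ (both placeholders):

            # Add the new disk to the mounted filesystem
            btrfs device add /dev/sdZ /mnt
            # Redistribute existing chunks across all devices (profiles stay as they are)
            btrfs balance start /mnt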



            • #16
              Originally posted by Royi View Post
              Those who wrote that they would use XFS: why do you prefer it over EXT4?
              ext4 has burned me a few times over the last several years, and although there's no guarantee XFS will necessarily be better, I also just wanted to try something different.



              • #17
                On my home PC I have been using Btrfs for / and XFS for data for years, and I've never had any problems.
                I recently discovered that the computers in the office also use Btrfs and XFS, as they all run SLE.



                • #18
                  Originally posted by dfyt View Post
                  If I want checksumming I use ZFS, NOT Btrfs, as I've had too many data issues with it; it certainly doesn't like power outages... directories get marked read-only and the only fix is to recreate the entire filesystem, etc.
                  It has nothing to do with power outages; that was filesystem corruption. Power outages do not corrupt Btrfs, since it is copy-on-write.



                  • #19
                    Originally posted by pal666 View Post
                    It has nothing to do with power outages; that was filesystem corruption. Power outages do not corrupt Btrfs, since it is copy-on-write.
                    Actually, the corruption I had with Btrfs just last month was entirely caused by the laptop waking up in my bag and running its battery down. The SSD then returned old data that should have been overwritten, and the Btrfs metadata was unrecoverable.
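
                    For anyone who ends up there, the usual last-resort steps before recreating the filesystem look roughly like this; they may or may not get anything back when the metadata itself is damaged, and /dev/sdX plus the recovery directory are placeholders:

                    # Try mounting read-only from an older tree root
                    mount -o ro,usebackuproot /dev/sdX /mnt
                    # If that fails, copy whatever is still reachable to another disk
                    btrfs restore /dev/sdX /path/to/recovery
                    # Read-only consistency check (avoid --repair unless advised)
                    btrfs check --readonly /dev/sdX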



                    • #20
                      Originally posted by Zan Lynx View Post
                      Actually, the corruption I had with Btrfs just last month was entirely caused by the laptop waking up in my bag and running its battery down. The SSD then returned old data that should have been overwritten, and the Btrfs metadata was unrecoverable.
                      Btrfs doesn't overwrite data in place; it writes the new data to free space and then changes the pointer to point to it (recursively writing to free space and updating the higher-level pointers). That's by design, but obviously the hardware shouldn't be faulty, e.g. reporting a write as finished while the data is still sitting in its cache (in that case the new pointer can end up pointing at unwritten garbage).
                      Last edited by pal666; 26 September 2019, 05:47 PM.
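
                      On the hardware side: if you suspect a drive is acknowledging writes that are still sitting in its volatile cache, one blunt mitigation for SATA disks is to check and disable the write cache with hdparm (/dev/sdX is a placeholder; NVMe uses different tooling, and disabling the cache costs write performance):

                      # Query the current write-caching setting
                      hdparm -W /dev/sdX
                      # Disable the volatile write cache
                      hdparm -W0 /dev/sdX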

