XFS To Enjoy Big Scalability Boost With Linux 5.14


  • #21
    Originally posted by kiffmet View Post
    Does anyone know for which workloads XFS is especially suitable/viable in comparison to ext4?
    If you are a desktop user, both will work just fine (ext4 is somewhat faster).

    If you are a heavy server user, XFS has a number of major advantages:
    1. It can store a virtually unlimited number of files (ext4 is limited by the number of inodes created at mkfs time).
    2. It can handle really huge file systems (multiple PB) effectively.

    Ext4, on the other hand:
    1. Is faster when creating and deleting a lot of small files.
    2. Is somewhat faster in benchmarks (at least in my case).
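
    As a quick illustration of point 1 (the mount points and device name below are just placeholders): ext4 reports a fixed inode total chosen at mkfs time, while XFS allocates inodes on demand, so a df -i comparison makes the difference visible:
    Code:
    # Inode usage and limits; ext4's inode total is fixed at mkfs time,
    # while XFS grows its inode count as needed (paths are placeholders):
    df -i /mnt/ext4-volume /mnt/xfs-volume

    # On ext4 the only knob is at creation time, e.g. one inode per 16 KiB
    # of space (illustrative value, /dev/sdX is a placeholder):
    sudo mkfs.ext4 -i 16384 /dev/sdX
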
    oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
    oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
    oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
    Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.

    Comment


    • #22
      Originally posted by waxhead View Post

      I am not saying that this applies to you, but failure to boot is usually not BTRFS's fault or even GRUB's fault. It is usually caused by turning on a feature that GRUB does not (yet) support. Many (including me) have made that mistake.

      Regarding data loss, I have never (even with the nasty kernel 5.2 bug) lost data on a BTRFS filesystem, provided of course that you have two copies of your metadata and preferably your data as well. Granted, BTRFS is sensitive to data corruption if you only have a single copy of metadata, but it usually turns read-only, which, when you think about it, is often preferable to the behavior of a non-checksumming filesystem.

      It all depends on whether you are willing to let a minor corruption slide by (and let's be frank, most of the time people do not even notice) or if you (like me) are nuts about keeping your data consistent. My experience on Debian (testing) has been a smooth ride for years.

      I will, however, choose XFS if I want to run some VMs or process tons of files quickly, as BTRFS is not yet optimized for that kind of work. When BTRFS learns to distribute reads across disks in raid1c4, it might be a different story.
      That's why I mentioned Arch and riding on the edge. Sometimes shit happens.

      I knew that some of them were my fault, like with Zstd and GRUB. If I had those problems on Ubuntu, I'd feel differently...

      But, yeah, on Arch again, BTRFS root again, I feel like I'd be foolish not to worry about shit happening again. I have a bad track record in this department.

      Comment


      • #23
        Originally posted by gilboa View Post

        If you are a desktop user, both will work just fine (ext4 is somewhat faster).

        If you are a heavy server user, XFS has a number of major advantages:
        1. It can store a virtually unlimited number of files (ext4 is limited by the number of inodes created at mkfs time).
        2. It can handle really huge file systems (multiple PB) effectively.

        Ext4, on the other hand:
        1. Is faster when creating and deleting a lot of small files.
        2. Is somewhat faster in benchmarks (at least in my case).
        ext4 also has optional case-insensitivity, which can help with Wine performance. It's a non-default ext4 feature: you have to run mkfs.ext4 manually with the casefold flag enabled and then set it on a per-directory basis.
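        Roughly what that looks like on a fresh filesystem (the device and directory below are placeholders, and the directory has to be empty before flipping the flag):
        Code:
        # Create the filesystem with the casefold feature enabled:
        sudo mkfs.ext4 -O casefold /dev/sdX

        # Mark an (empty) directory as case-insensitive, e.g. for Wine prefixes:
        sudo chattr +F /mnt/games/wine-prefixes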

        Comment


        • #24
          Originally posted by skeevy420 View Post

          ext4 also has optional case-insensitivity, which can help with Wine performance. It's a non-default ext4 feature: you have to run mkfs.ext4 manually with the casefold flag enabled and then set it on a per-directory basis.
          Indeed.

          Comment


          • #25
            Originally posted by skeevy420 View Post
            And it does quotas since that's the hot feature of the day.
            Quota was hot some 15-and-beyond years ago, when systems were more tightly shared and disks were more of a constraint.
            The 20-fold price drop per gigabyte over the past 15 years more or less went together with a 20-fold increase in (lukewarm) storage: 320 GB then, 12 TB now (a 37x increase). The trash people generate in their home directory (browser cache, text notes, anything not related to project work) has not grown nearly as much (and thankfully so).

            Comment


            • #26
              Originally posted by Linuxxx View Post

              Another benefit XFS has over EXT4 is that it doesn't reserve 5% of disk space by default!



              And then you have clueless Linux users complaining that 100 GB of space is simply missing on their shiny-new 2 TB NVMe SSD...
              That's very nice! Although you can tweak it on ext4 with

              sudo tune2fs -m 1 /dev/sdX (which sets it to 1%)

              but I like that XFS doesn't do that by default! Next time I reinstall, I'll definitely choose XFS.
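
              If you want to check what the reservation is actually set to before or after tweaking it, tune2fs can dump it (again, /dev/sdX is a placeholder):
              Code:
              # Shows both the total and the reserved block counts for comparison:
              sudo tune2fs -l /dev/sdX | grep -i 'block count'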

              Comment


              • #27
                Originally posted by Vistaus View Post

                That's very nice! Although you can tweak it on ext4 with

                sudo tune2fs -m 1 /dev/sdX (which sets it to 1%)

                but I like that XFS doesn't do that by default! Next time I reinstall, I'll definitely choose XFS.
                You can even set it to 0.1%:

                Code:
                sudo tune2fs -m 0.1 /dev/sdX
                Which, by the way, is the first thing that I do when dealing with EXT4.
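
                Side note (the device and block count below are placeholder values): the same reservation can also be set at creation time, or as an absolute number of blocks rather than a percentage:
                Code:
                # Set the reserved percentage when the filesystem is created:
                sudo mkfs.ext4 -m 1 /dev/sdX

                # Or reserve an absolute block count (here 262144 x 4 KiB blocks = 1 GiB):
                sudo tune2fs -r 262144 /dev/sdX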

                Comment


                • #28
                  Originally posted by Linuxxx View Post

                  You can even set it to 0.1%:

                  Code:
                  sudo tune2fs -m 0.1 /dev/sdX
                  Which, by the way, is the first thing that I do when dealing with EXT4.
                  I read that that would cause issues if you run low on disk space. But that was a couple of years ago. Is that still a possible issue, or can I now safely set it to 0.1% without having to worry about running low on disk space?

                  Comment


                  • #29
                    Originally posted by Linuxxx View Post
                    Another benefit XFS has over EXT4 is that it doesn't reserve 5% of disk space by default!



                    And then you have clueless Linux users complaining that 100 GB of space is simply missing on their shiny-new 2 TB NVMe SSD...
                    A couple points about that...
                    1. AFAIK, the space doesn't appear to be missing, does it? I thought it just kept anyone but root from writing above 95%.
                    2. A certain amount of reserved space is needed to avoid serious fragmentation. Exactly how much depends on your filesystem and workload, but I think we actually try to keep the XFS volumes below 90% capacity on our high-turnover storage. We don't use quota to enforce this. Rather, we periodically garbage collect until we have at least 10% free.
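
                    A rough sketch of that kind of check, suitable for cron (the mount point and the cleanup script are hypothetical, just to show the shape of it):
                    Code:
                    #!/bin/sh
                    # Kick off cleanup when the volume climbs above 90% used;
                    # /srv/scratch and cleanup_old_files.sh are placeholders.
                    MOUNT=/srv/scratch
                    USED=$(df --output=pcent "$MOUNT" | tail -n 1 | tr -dc '0-9')
                    if [ "$USED" -gt 90 ]; then
                        /usr/local/bin/cleanup_old_files.sh "$MOUNT"
                    fi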

                    Comment


                    • #30
                      Originally posted by Vistaus View Post

                      I read that that would cause issues if you run low on disk space. But that was a couple of years ago. Is that still a possible issue, or can I now safely set it to 0.1% without having to worry about running low on disk space?
                      It can cause issues both on HDDs and SSDs. The reason it's set to 5% is to leave free space for the filesystem's administrative/logistics areas, which run into trouble if the drive becomes completely full. The developers have decided 5% is a reasonable default for most cases they've encountered. It's tunable because different use cases need different amounts of free space for that housekeeping. The specific ins and outs of what's going on are usually the purview of many chapters per filesystem in file forensics texts. Don't touch the reserved space tunables unless you know EXACTLY WHAT YOU ARE DOING, AKA you have RTFM'd (for XFS/ext4/NTFS/etc.) backwards and forwards. There's a second issue with SSDs: it's better to over-provision for wear leveling than it is to under-provision. There is no hard and fast rule here, because SSD drive controllers (even between OEMs using the same controller with different options) all have different characteristics in how they manage wear leveling.

                      TL;DR: Many filesystems store their logistical information inseparably alongside the data, plus journaling logs that may be anywhere in the physical filesystem layout. They may or may not have automatically reserved physical areas for housekeeping, so they rely on the filesystem not being completely full in order to store that housekeeping information along with the data.
                      Last edited by stormcrow; 19 June 2021, 03:11 AM. Reason: better clarity

                      Comment
