Btrfs Brings Some Great Performance Improvements With Linux 6.1


  • #31
    Originally posted by guglovich View Post
    Thank you, I tried it out today. I had almost 7.45TB occupied and still had a decent 20s mount time. When I enabled the new option it was 21s; after clearing the cache it was 20s again. However, when I extended the partition to the end of the disk, up to 14.5TB, the effect did show itself: the mount time stayed the same, i.e. still 20s.

    So, space_cache=v2 does not help when free space is low, but it does help when free space is plentiful. It still seems impossible to get the time itself below 20s, which is also a lot. Or perhaps the number comes out like this because the partition is not new and it would need to be recreated with this option.
    Hmm, that's interesting. Unfortunately, I can't test that myself, as I don't have a disk larger than 1.1T. Out of curiosity I tried searching and found just two mentions of such a problem, on a Debian ML and on Reddit. Neither has a link to a bug report or to a comment by an actual Btrfs dev, so 🤷.
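    For anyone who wants to reproduce this, here is a minimal sketch of how the free space tree is typically enabled on an existing filesystem and the old v1 cache cleared; the device and mount point are placeholders, and the clear step needs the filesystem unmounted:

    ```
    # Optional: remove the old v1 cache while the filesystem is unmounted.
    btrfs check --clear-space-cache v1 /dev/sdX1

    # One-time conversion: a read-write mount with this option builds the
    # free space tree, and the setting persists on later mounts.
    mount -o space_cache=v2 /dev/sdX1 /mnt
    ```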

    So, anyway, what I'd recommend in this situation is:

    1. I don't know what kernel version you're using, but try the latest stable one to see whether the issue has maybe already been resolved.
    2. Try asking about it in the btrfs IRC channel on the Libera server. From my experience of asking a few questions there over the years (well, the server was Freenode before, now it's Libera), actual devs hang out there, so they may give you a useful comment on the issue.
    3. Try reporting a bug. In my book this is a situation that well deserves a bug report, as nobody wants to wait an additional 20 seconds while booting their laptop/desktop.

    • #32
      Originally posted by blackiwid View Post
      What stops me most from using this as an ext4 replacement for SSD home/root is the SQLite performance, unless you manually exclude those files/folders from copy-on-write.
      SQLite has had a Write-Ahead Logging (WAL) mode for many years, which is much more performant on CoW filesystems. https://wiki.tnonline.net/w/Blog/SQL...mance_on_Btrfs

      • #33
        Originally posted by guglovich View Post

        Thank you, I tried it out today. I had almost 7.45TB occupied and still had a decent 20s mount time. When I enabled the new option it was 21s; after clearing the cache it was 20s again. However, when I extended the partition to the end of the disk, up to 14.5TB, the effect did show itself: the mount time stayed the same, i.e. still 20s.

        So, space_cache=v2 does not help when free space is low, but it does help when free space is plentiful. It still seems impossible to get the time itself below 20s, which is also a lot. Or perhaps the number comes out like this because the partition is not new and it would need to be recreated with this option.
        The free space tree (aka space_cache=v2) improves write performance when there is lots of free space fragmentation. Free space fragmentation can be remedied with balancing data block groups. If you have metadata fragmentation, then you can defrag it using `btrfs fi defrag /path/to/subvol`. You'd have to defrag each subvol separately.
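        As a concrete illustration of the balancing mentioned above, a sketch of compacting data block groups; /mnt and the usage threshold are placeholders, not tuned recommendations:

        ```
        # Rewrite data block groups that are at most 50% used, packing their
        # extents together and releasing the freed block groups.
        btrfs balance start -dusage=50 /mnt
        ```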

        • #34
          Originally posted by guglovich View Post

          I have the default settings for Btrfs. I could not find a manual for its fstab settings, like the ones available for other filesystems.
          It might be good to know that official documentation is provided at https://btrfs.readthedocs.io/en/latest/
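          Since the question was about fstab, a hypothetical entry for illustration; the UUID, subvolume name, and option choices are placeholders rather than recommendations from those docs:

          ```
          # <device>        <mount point>  <type>  <options>                            <dump> <pass>
          UUID=<your-uuid>  /home          btrfs   subvol=@home,compress=zstd,noatime   0      0
          ```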

          • #35
            Originally posted by S.Pam View Post

            SQLite has had a Write-Ahead Logging (WAL) mode for many years, which is much more performant on CoW filesystems. https://wiki.tnonline.net/w/Blog/SQL...mance_on_Btrfs
            The problem is you have to manually activate that. You have to know where these files are, etc. That's some maintenance effort, and it's not a one-time action: if you start using another program that keeps such an SQLite file, you have to activate it again.

            Sure, it's better than the other solution, but still. Aren't VMs also such a case, where you would need nocow for better speeds?

            • #36
              Originally posted by blackiwid View Post

              The problem is you have to manually activate that. You have to know where these files are, etc. That's some maintenance effort, and it's not a one-time action: if you start using another program that keeps such an SQLite file, you have to activate it again.

              Sure, it's better than the other solution, but still. Aren't VMs also such a case, where you would need nocow for better speeds?
              You only need to activate WAL once; after that it's used automatically. And to be fair, most modern apps should use it by default. Some don't, and then you'd benefit from converting.
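              For the ones that don't, a minimal sketch of converting an existing database from the shell; app.db is a placeholder path:

              ```
              # journal_mode is stored in the database file, so this persists
              # across future connections.
              sqlite3 app.db 'PRAGMA journal_mode=WAL;'
              ```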

              VMs are very different, because it greatly depends on how you use them. If you use btrfs inside the VM, it would be OK to use nocow or other filesystems on the host; otherwise you do risk your data. Enabling nocow also turns off data checksums, so the filesystem cannot detect corruption. In addition, nocow makes RAID much less useful, since btrfs needs those checksums to determine which mirror is correct.
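              For completeness, and bearing in mind the warning below, disabling CoW is normally done per directory before any images exist in it, since the attribute only applies to newly created files; the path is a placeholder:

              ```
              # New files created inside this directory inherit the No_COW attribute.
              mkdir -p /path/to/vm-images
              chattr +C /path/to/vm-images
              ```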

              So, unless you really know what you are doing, don't use nocow.

              • #37
                Originally posted by S.Pam View Post

                The free space tree (aka space_cache=v2) improves write performance when there is lots of free space fragmentation. Free space fragmentation can be remedied with balancing data block groups. If you have metadata fragmentation, then you can defrag it using `btrfs fi defrag /path/to/subvol`. You'd have to defrag each subvol separately.
                I don't use subvolumes.

                Originally posted by S.Pam View Post

                It might be good to know that official documentation is provided at https://btrfs.readthedocs.io/en/latest/
                Thank you. I read it, but I must have missed it or not come across it.
                Last edited by guglovich; 12 October 2022, 04:24 PM.

                • #38
                  Originally posted by guglovich View Post

                  I don't use subvolumes.



                  Thank you. I read it, but I must have missed it or not come across it.
                  There is always one subvolume. The toplevel one. This is what is mounted by default if you omit the `subvol=` and `subvolid=` mount options. In this case do `btrfs filesystem defrag /`.
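                  To see this for yourself, a couple of read-only commands; / here assumes the filesystem is mounted at the root:

                  ```
                  # Lists user-created subvolumes (empty output if there are none).
                  btrfs subvolume list /
                  # Reports which subvolume is mounted by default; ID 5 is the toplevel.
                  btrfs subvolume get-default /
                  ```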

                  • #39
                    Originally posted by S.Pam View Post

                    There is always one subvolume. The toplevel one. This is what is mounted by default if you omit the `subvol=` and `subvolid=` mount options. In this case do `btrfs filesystem defrag /`.
                    I defragmented it; it took 23 hours. I gained 6 seconds, but 14 seconds is still a lot.

                    • #40
                      Originally posted by guglovich View Post

                      I defragmented it; it took 23 hours. I gained 6 seconds, but 14 seconds is still a lot.
                      The block group tree work in 6.1 will bring it down to a few seconds. You will have to convert the filesystem to this new format, but since this just moves block group objects out of the extent tree and into their own tree, with no other significant changes, it should be relatively bug-free. Of course, I would probably wait until at least 6.1.5 before trying it out.
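                      For reference, my understanding is that the conversion is an offline one-liner once btrfs-progs ships it (I believe the 6.1 progs add the option, but treat that as an assumption); the device path is a placeholder and the filesystem must be unmounted:

                      ```
                      # Move block group items out of the extent tree into the new
                      # block group tree.
                      btrfstune --convert-to-block-group-tree /dev/sdX1
                      ```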
