
Systemd Works On More Btrfs Functionality

  • #31
    Originally posted by Ardje View Post
    So, how did you create a 25G file? I can make ext4 return immediately too, you know.
    The essence is that most 25G files are not 25G of contiguous bytes. Of course you can rebalance so that they are, but rebalancing takes a large bite out of the I/O.
    I'm not stating that btrfs is crap, just that your test might be flawed. And no SSD will help you out if you have a 25GB fragmented file: the metadata will be too big to fit in memory and btrfs will thrash.
    Still, if btrfs proves to be stable at some point I will start using it in server production. For now it just holds my Steam games on a bcache on an FCoE partition.
    Oh, and I wonder how systemd will handle that.
    I'm actually wondering how any distribution will handle rootfs on bcache on FCoE booted from PXE. As my PC is not really used except for testing, gaming and heavy video coding, I could easily test SteamOS, Ubuntu and Debian.
    Especially since afaik no sane filesystem touches data on removal, only metadata, that sounds scary. It's not as if you have anywhere near 25G metadata if you have 25G data. I'd be surprised if it was even 1G

    Comment


    • #32
      Originally posted by nanonyme View Post
      Especially since afaik no sane filesystem touches data on removal, only metadata, that sounds scary. It's not as if you have anywhere near 25G metadata if you have 25G data. I'd be surprised if it was even 1G
      I think I had a 16T partition with 250+GB of metadata and 8T of data once. And then btrfs crashed on itself.
      The essence is that btrfs sucks when it comes to metadata and memory. It's fast in countless ways, so even though losing the data hurt (you cannot fix a btrfs partition if the metadata is bigger than the total memory of the system), we continued, retried, lost, and started again and again until 3.16 or 3.17 or so, when it finally stabilized as a dirvish backup system.
      So yes, there is a big reason I won't put live production data on btrfs on a server.
      But primary backup on a dedicated system might be OK now. You can snapshot, rsync a server, snapshot again, delete old snapshots within a second, and let btrfs reclaim the space in the background.
      On that server with the 250GB of metadata, an rm -fr of a tree of hardlinked files (the backup equivalent of a btrfs snapshot) on ext4 took more than 24 hours with all other I/O suspended (no backups during that time). At that point you are mainly updating metadata/inode link counts.
      I used to have a workstation with 2GB of RAM (which is enough for a workstation), with a 5GB btrfs /home. It was stable for single-threaded usage, but it was slow, and the metadata indices take a lot of memory.
      But I will be glad when I can use raid5 on metadata and raid1 on data or something like that in a server environment.
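      The snapshot-rotate cycle described above can be sketched roughly like this (the paths, the remote host and the old snapshot name are illustrative; this assumes /backup is a btrfs mount and btrfs-progs is installed):

      ```shell
      # sync the server into the working subvolume
      rsync -aHx --delete server:/ /backup/current/
      # take a read-only snapshot named after today's date
      btrfs subvolume snapshot -r /backup/current /backup/$(date +%F)
      # deleting an old snapshot returns almost immediately; the space
      # is reclaimed by the btrfs cleaner thread in the background
      btrfs subvolume delete /backup/2015-01-01
      ```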

      Comment


      • #33
        Originally posted by Ardje View Post
        So, how did you create a 25G file? I can make ext4 return immediately too, you know.
        The essence is that most 25G files are not 25G of contiguous bytes. Of course you can rebalance so that they are, but rebalancing takes a large bite out of the I/O.
        I'm not stating that btrfs is crap, just that your test might be flawed. And no SSD will help you out if you have a 25GB fragmented file: the metadata will be too big to fit in memory and btrfs will thrash.
        Still, if btrfs proves to be stable at some point I will start using it in server production. For now it just holds my Steam games on a bcache on an FCoE partition.
        Oh, and I wonder how systemd will handle that.
        I'm actually wondering how any distribution will handle rootfs on bcache on FCoE booted from PXE. As my PC is not really used except for testing, gaming and heavy video coding, I could easily test SteamOS, Ubuntu and Debian.

        It was a copy of an old VM image. Yes, it was fragmented and yes, it was on a HDD, not SSD.
        If you have pathologically long rm times, maybe it could be because you are not using tiny extents and/or your btrfs is formatted with 4k blocks (which was default on old versions) instead of 16k.

        Comment


        • #34
          Originally posted by jacob View Post
          It was a copy of an old VM image. Yes, it was fragmented and yes, it was on a HDD, not SSD.
          If you have pathologically long rm times, maybe it could be because you are not using tiny extents and/or your btrfs is formatted with 4k blocks (which was default on old versions) instead of 16k.
          Sorry, what? I was using btrfs on my workstation. It was a nightmare to get any amount of small files to fit on almost any size of btrfs partition (the Portage tree, for example). I never found any sane option to stop it from wasting space in a grand way. With 16k blocks, a normal Portage tree would require a 60G filesystem to fit...

          Comment


          • #35
            Originally posted by haplo602 View Post
            Sorry, what? I was using btrfs on my workstation. It was a nightmare to get any amount of small files to fit on almost any size of btrfs partition (the Portage tree, for example). I never found any sane option to stop it from wasting space in a grand way. With 16k blocks, a normal Portage tree would require a 60G filesystem to fit...
            I don't know how to display the block size, but since I formatted this device with btrfs default settings a month or so ago (running Arch on it), I assume the block size is 16k. So this is the difference after unpacking a freshly downloaded portage-latest.tar.xz:
            Before:
            Used: 63.74GiB
            After:
            Used: 64.25GiB

            Far from 60GB, I would think.
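            For what it's worth (assuming a reasonably recent btrfs-progs; the device path is illustrative), the block sizes chosen at mkfs time can be displayed from the superblock:

            ```shell
            # prints, among other fields, "nodesize" (metadata block size)
            # and "sectorsize" (data block size)
            btrfs inspect-internal dump-super /dev/sdX | grep -E 'nodesize|sectorsize'
            ```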

            Comment


            • #36
              Originally posted by haplo602 View Post
              sorry what ? I was using btrfs on my workstation. it was a nightmare to get any amount of small files fit on almost any size btrfs partition (portage tree f.e.). I never found any sane option to stop it from wasting space in a grand way. with 16k blocks, a normal portage tree would require a 60g filesystem to fit ....
              That's nonsense. Btrfs uses tail packing (small files are inlined into the metadata tree), so no, it would not require 60G. Besides, there is no reason why btrfs would need to waste more space than any other FS in normal circumstances, and indeed it does not.
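              As a rough back-of-the-envelope check (the file count and average size are made-up figures, and this deliberately ignores tail packing), here is the worst-case slack if every small file had to occupy whole blocks:

              ```shell
              # Hypothetical tree: 150,000 files averaging ~2000 bytes each.
              # Without tail packing, each file rounds up to a whole block,
              # wasting (block_size - size % block_size) bytes per file
              # (for sizes that are not an exact multiple of the block size).
              files=150000; avg=2000
              for bs in 4096 16384; do
                slack=$(( files * (bs - avg % bs) ))
                echo "${bs}: $(( slack / 1024 / 1024 )) MiB wasted"
              done
              # → 4096: 299 MiB wasted
              # → 16384: 2057 MiB wasted
              ```

              Even in this no-tail-packing worst case, 16k blocks cost on the order of 2GB, not 60GB; with inlining the real overhead is far smaller.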

              Comment


              • #37
                Originally posted by geearf View Post
                1- Yes, the btrfs partition is on it; the ext4 is on an LVM volume of standard hard drives (not in RAID mode).
                2- I did not, should I?
                3- It was a copy from the ext4 partition.

                Also, when I did the test on btrfs, on my /home partition, my system was fairly unusable :/
                I'm on deadline, should I try something different?

                As for preserve-root, it is in my alias for rm, though now I probably don't need to specify it anymore.
                2. Yes; for benchmarks even the cache should be cleared, as it would be after restarting the computer (as root: echo 3 > /proc/sys/vm/drop_caches)
                btrfs is a complex filesystem with layers and stuff

                i'm also on deadline, and nodatacow
                i figured it was an alias just curious


                @nanonyme
                i do edit big files some times, and even if i didn't i don't need COW
                checksumming has nothing to do with COW

                @reub2000
                ye, it's probably in the background now
                last time i used btrfs was around... 3.16(?), i remember it was after google/oracle/whoever said it was ready for the enterprajz

                so, a rough test would be

                cp/make file
                sync
                echo 3 | sudo tee /proc/sys/vm/drop_caches
                date
                rm file
                sync
                date

                Comment


                • #38
                  Originally posted by jacob View Post
                  That's nonsense. Btrfs uses tail packing (small files are inlined into the metadata tree), so no, it would not require 60G. Besides, there is no reason why btrfs would need to waste more space than any other FS in normal circumstances, and indeed it does not.
                  Well, then that does not match my experience... Maybe the version I was using was old (the fs was created a few years ago), but compared to other filesystems the free-space reporting was off by a lot and, as I said, lots of small files ate space like popcorn...

                  Comment


                  • #39
                    Originally posted by gens View Post
                    2. Yes; for benchmarks even the cache should be cleared, as it would be after restarting the computer (as root: echo 3 > /proc/sys/vm/drop_caches)
                    btrfs is a complex filesystem with layers and stuff

                    i'm also on deadline, and nodatacow
                    i figured it was an alias just curious


                    @nanonyme
                    i do edit big files some times, and even if i didn't i don't need COW
                    checksumming has nothing to do with COW

                    @reub2000
                    ye, it's probably in the background now
                    last time i used btrfs was around... 3.16(?), i remember it was after google/oracle/whoever said it was ready for the enterprajz

                    so, a rough test would be

                    cp/make file
                    sync
                    echo 3 | sudo tee /proc/sys/vm/drop_caches
                    date
                    rm file
                    sync
                    date
                    Please read the Btrfs wiki on mount options before using it further: nodatacow implies nodatasum, so new files get no checksums.
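                    As an illustration (the UUID and mount point are placeholders), such a mount would typically be configured in /etc/fstab like this:

                    ```
                    # nodatacow disables copy-on-write for newly created files,
                    # and implies nodatasum: those files also get no checksums
                    UUID=<your-uuid>  /home  btrfs  defaults,nodatacow  0  0
                    ```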

                    Comment


                    • #40
                      Originally posted by haplo602 View Post
                      Well, then that does not match my experience... Maybe the version I was using was old (the fs was created a few years ago), but compared to other filesystems the free-space reporting was off by a lot and, as I said, lots of small files ate space like popcorn...
                      Well, there are filesystems and filesystems. We had a 500GB Ext4 at work that was migrated from Ext3. We ran out of space and there was no LVM, so we created a new Ext4 disk on top of LVM and copied all the files over. The end result was that the files took several dozen gigabytes less space on the new Ext4 than on the old one. I expect most of that came from the original's poor use of extents to store small files.

                      Comment
