Systemd Works On More Btrfs Functionality
-
Originally posted by nanonyme:
Especially since AFAIK no sane filesystem touches data on removal, only metadata, that sounds scary. It's not as if you have anywhere near 25G of metadata if you have 25G of data. I'd be surprised if it was even 1G.
The essence is that btrfs sucks when it comes to metadata and memory. It's fast in countless ways, so even though losing the data hurt (you cannot fix a btrfs partition if the metadata is bigger than the total memory of the system), we continued, retried, lost and started again and again until 3.16 or 3.17 or so, when it finally stabilized as a Dirvish backup system.
So yes, there is a big reason I won't put live production server data on btrfs.
But primary backup on a dedicated system might be OK now. You can make a snapshot, rsync a server, snapshot again. Deleting old snapshots returns within a second, and btrfs removes them in the background.
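The snapshot cycle described above could be sketched roughly like this. Everything here (paths, the server name) is hypothetical, and the script defaults to a dry run that only prints the commands, since the real ones need root and an actual btrfs mount:

```shell
#!/bin/sh
# Sketch of the backup cycle: refresh a live copy with rsync, take a
# read-only snapshot, and expire an old one. Set DRYRUN=0 to really run.
VOL=${VOL:-/srv/backup/current}        # hypothetical live copy (a subvolume)
SNAPDIR=${SNAPDIR:-/srv/backup/snaps}  # hypothetical snapshot directory
run() { if [ "${DRYRUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

stamp=$(date +%Y%m%d-%H%M%S)
run rsync -a --delete server:/data/ "$VOL/"               # refresh live copy
run btrfs subvolume snapshot -r "$VOL" "$SNAPDIR/$stamp"  # near-instant
# Deleting returns within a second; btrfs reclaims space in the background.
run btrfs subvolume delete "$SNAPDIR/oldest"
```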
On that server with the 250GB metadata, an rm -fr of a tree with hardlinked files (backup equivalent of the btrfs snapshot) on ext4 took more than 24 hours with all other I/O suspended (no backups during that time). At that moment you are mainly updating metadata/inode link counts.
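For comparison, the ext4 hardlink scheme described above works roughly like this. This is a toy run under a temp directory; the rsync step is left as a comment since the server path is hypothetical:

```shell
#!/bin/sh
# Dirvish-style hardlink rotation: each backup cycle is a hardlink copy of
# the previous one, so unchanged files share inodes, and expiring a cycle
# means unlinking every single path in it -- the slow rm -fr described above.
set -e
base=$(mktemp -d)
mkdir "$base/2015-01-01"
echo data > "$base/2015-01-01/file"
cp -al "$base/2015-01-01" "$base/2015-01-02"  # hardlink-copy the old cycle
# rsync -a --delete server:/data/ "$base/2015-01-02/"   # then refresh it
links=$(stat -c '%h' "$base/2015-01-02/file")
echo "link count after copy: $links"          # both cycles share the inode
rm -rf "$base/2015-01-01"                     # expiring the old cycle
final=$(stat -c '%h' "$base/2015-01-02/file")
echo "link count after expire: $final"
rm -rf "$base"
```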
I used to have a workstation with 2GB RAM (which is enough for a workstation), using a 5GB btrfs /home. It was stable for single-threaded usage, but it was slow, and the metadata indices take a lot of memory.
But I will be glad when I can use raid5 on metadata and raid1 on data or something like that in a server environment.
-
Originally posted by Ardje:
So how did you create a 25G file? I can make ext4 return immediately too, you know.
The essence is that most 25G files are not 25G of contiguous bytes. Of course you can rebalance so that they are, but rebalancing takes a large bite out of the I/O.
I'm not saying that btrfs is crap, just that your test might be flawed. And no SSD will help you out if you have a fragmented 25GB file: the metadata will be too big to fit in memory and btrfs will thrash.
Still, if btrfs proves to be stable at some point I will start using it in server production. For now it just holds my Steam games on a bcache on an FCoE partition.
Oh yeah, I wonder how systemd will handle that.
I'm actually wondering how any distribution will handle rootfs on bcache on FCoE booted from PXE. As my PC is not really used except for testing, gaming and heavy video encoding, I could easily test SteamOS, Ubuntu and Debian.
It was a copy of an old VM image. Yes, it was fragmented and yes, it was on a HDD, not SSD.
If you have pathologically long rm times, maybe it is because you are not using tiny extents and/or your btrfs was formatted with 4k blocks (the default in old versions) instead of 16k.
-
Originally posted by haplo602:
Sorry, what? I was using btrfs on my workstation. It was a nightmare to get any amount of small files to fit on almost any size btrfs partition (the portage tree, for example). I never found any sane option to stop it from wasting space in a grand way. With 16k blocks, a normal portage tree would require a 60G filesystem to fit...
Before: Used: 63.74GiB
After: Used: 64.25GiB
Far from 60GB, I would think.
-
Originally posted by geearf:
1- Yes, the btrfs partition is on it; the ext4 is on an LVM cluster of standard hard drives (not in RAID mode).
2- I did not, should I?
3- It was a copy from the ext4 partition.
Also, when I did the test on btrfs, on my /home partition, my system was fairly unusable :/
I'm on deadline (the I/O scheduler), should I try something different?
As for preserve-root, it is in my alias for rm, though now I probably don't need to specify it anymore.
btrfs is a complex filesystem with layers and stuff
I'm also on deadline (the I/O scheduler), and on nodatacow
I figured it was an alias, just curious
@nanonyme
I do edit big files sometimes, and even if I didn't, I don't need COW
Checksumming has nothing to do with COW
@reub2000
Yeah, it's probably done in the background now
Last time I used btrfs was around... 3.16(?); I remember it was after google/oracle/whoever said it was ready for the enterprajz
So a rough test would be:
cp/make file
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches
date
rm file
sync
date
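The outline above, wrapped into a runnable script. The file size and path are arbitrary examples, and the cache-drop step is commented out because it needs root (note that a plain `sudo echo 3 > /proc/sys/vm/drop_caches` would not work, since the redirect runs in the unprivileged shell):

```shell
#!/bin/sh
# Rough rm-timing test following the steps above. Run it on the
# filesystem you want to measure.
set -e
f=$(mktemp /tmp/rmtest.XXXXXX)
dd if=/dev/zero of="$f" bs=1M count=32 status=none   # "cp/make file"
sync                                                 # flush dirty pages
# echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null   # needs root
start=$(date +%s%N)
rm "$f"                                              # operation under test
sync
end=$(date +%s%N)
echo "rm+sync took $(( (end - start) / 1000000 )) ms"
```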
-
Originally posted by jacob:
That's nonsense. Btrfs uses tail packing, so no, it would not require 60G. Besides, there is no reason why btrfs would need to waste more space than any other FS in normal circumstances and, indeed, it does not.
-
Originally posted by gens:
2. yes, for benchmarks even the cache should be cleared, as it would be when restarting the computer (echo 3 > /proc/sys/vm/drop_caches)
-
Originally posted by haplo602:
Well, then that does not match my experience... maybe the version I was using was old (the fs was created a few years ago), but compared to other filesystems the free space reporting was off by a lot and, as I said, lots of small files ate space like popcorn...