Approved: Fedora 33 Desktop Variants Defaulting To Btrfs File-System
Originally posted by useless View Post
Actually, I was expecting Steam to pop up. Steam's distribution system is a nightmare: they do an fsync after every written file! So if apt is bad, Steam is one of the worst. Personally, I have a separate ext4-formatted disk for my Steam installation, since it won't fit my main drives anyway.
Regarding thumbnail generation: which software? I don't see how writing new files could cause IO stalls like the ones you're implying. I'm using btrfs on a lot of systems (most of them in a desktop/workstation role) with photos and videos in the thousands for any particular user, and I've never seen anything like that. But I don't use Ubuntu (their kernel backporting and patching is garbage), and I don't use GNOME.
That's a bold statement. Bad applications exist (two examples above), and btrfs does indeed suffer more in situations where an excessive number of fsync calls is issued, but calling it bad for any kind of workload isn't accurate. If you believe Phoronix's numbers, you will see this. If you don't, btrfs developers are working on sane benchmarks; synthetic benchmarks are mostly useless in most cases.
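To make the fsync point concrete, here is a minimal Python sketch (hypothetical file names and sizes, not Steam's actual code) contrasting the fsync-per-file pattern described above with writing everything first and syncing once:

```python
import os
import tempfile

def write_files(dirpath, payloads, fsync_each=False):
    """Write payloads to numbered files; optionally fsync after each one."""
    for i, data in enumerate(payloads):
        path = os.path.join(dirpath, f"file_{i}.bin")
        with open(path, "wb") as f:
            f.write(data)
            if fsync_each:
                f.flush()
                os.fsync(f.fileno())  # force this file to stable storage now

with tempfile.TemporaryDirectory() as d:
    payloads = [os.urandom(4096) for _ in range(50)]
    # fsync-per-file: every call forces a commit, which is especially
    # costly on CoW filesystems like btrfs.
    write_files(d, payloads, fsync_each=True)
    # Batched alternative: write everything, then sync once at the end.
    write_files(d, payloads, fsync_each=False)
    os.sync()
```

The second pattern lets the filesystem coalesce the writes into far fewer commits; the first forces one commit per file.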
Originally posted by CommunityMember View Post
I don't see a conflict here. RH is primarily focused on their (large) customers running (large (server)) systems that are not desktop oriented (sure, some people run EL7/8 on a desktop, but it is not an especially common case). And RH's primary consideration is for support of their customers, which care about the large system problems, for which btrfs is not currently a core requirement. This is equivalent to the ZFS discussion about BPR, which, in essence, is not entirely relevant for most large (server) customers, and is a contributing reason that Sun (and later Oracle) never prioritized the activity (the Solaris 11.4 method mostly sidesteps the hard work by (apparently) providing another level of indirection).
Originally posted by sireangelus View Post
Dolphin. I mean, I make pretty generic use of my workstation, from gaming to virtualization to office work to media work, and I can tell that it's just so much slower than when I install ext4. You can feel it. It's actually measurable.
Aside from virtualization (a known shortcoming of any CoW filesystem): would you share some numbers showing that massive slowdown you're describing?
[1] I still maintain my ext4-partitioned disk that holds my Steam library, but only because Steam was (is?) bad years ago and I'm lazy. My partner has a couple of games which are updated frequently, and she hasn't described any noticeable slowdown. I watched her cursing multiple times yesterday because Valve launched four or five updates of Dota 2 in a row after breaking something in the client, some of them hundreds of MiB in size; again, no noticeable slowdown.
Originally posted by vladpetric View Post
Not contradicting anything you're saying, but my view of RedHat is that they charge an arm and a leg to provide you with old packages, and then give you people to yell at and call it support ... Rant over.
Originally posted by kloczek View Post
And most of the filesystems are even using that to maintain some read-ahead data caches.
Sorry, but ZFS does not perform allocations using the exact size of the block.
ZFS additionally has dynamic allocation using the record size, which ranges from 1 KiB up to 16 MiB.
This is why a SLAB allocator needs to be involved.
Look at the slabtop command output on Linux, or at the content of /proc/slabinfo, and you will find that within each slab the allocation size is fixed, and it differs between slabs.
It looks like you still don't know what the SLAB allocator is.
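For readers unfamiliar with the idea: a slab allocator serves each request from a cache of fixed-size chunks, rounding the request up to a size class. The toy sketch below (invented size classes and chunk counts, not the kernel implementation) shows the principle visible in /proc/slabinfo, where every cache has one fixed object size:

```python
import bisect

# Hypothetical size classes in bytes; real slab caches have many more.
SIZE_CLASSES = [32, 64, 128, 256, 512, 1024]

class Slab:
    """A slab serving chunks of exactly one size."""
    def __init__(self, chunk_size, chunks_per_slab=64):
        self.chunk_size = chunk_size
        self.free = list(range(chunks_per_slab))  # free chunk indices

    def alloc(self):
        return self.free.pop() if self.free else None

class SlabAllocator:
    def __init__(self):
        # One slab per size class, as in slabtop's per-cache rows.
        self.slabs = {size: Slab(size) for size in SIZE_CLASSES}

    def alloc(self, nbytes):
        # Round the request up to the smallest size class that fits.
        idx = bisect.bisect_left(SIZE_CLASSES, nbytes)
        if idx == len(SIZE_CLASSES):
            raise ValueError("request too large for any size class")
        size = SIZE_CLASSES[idx]
        chunk = self.slabs[size].alloc()
        return size, chunk

a = SlabAllocator()
print(a.alloc(100)[0])  # 128 -- a 100-byte request comes from the 128-byte slab
```

Because every slab hands out identically sized chunks, freed space is always reusable by the next request of that class, which is why fragmentation stays bounded.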
So you want to say that you don't know that CoW transforms small random write IOs into sequential ones?
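As a loose illustration of that CoW property, here is a toy model (a sketch only, not any real filesystem's on-disk layout) where logically scattered block updates land sequentially on the "disk" and a block map tracks the newest copy:

```python
class CowDevice:
    """Toy copy-on-write device: all writes are sequential appends."""
    def __init__(self):
        self.physical = []    # append-only "disk"
        self.block_map = {}   # logical block -> physical index

    def write(self, logical_block, data):
        # Regardless of which logical block is written, the physical
        # location is always the next sequential slot.
        self.physical.append(data)
        self.block_map[logical_block] = len(self.physical) - 1

    def read(self, logical_block):
        return self.physical[self.block_map[logical_block]]

dev = CowDevice()
for lb in [907, 12, 443, 12, 3]:   # scattered logical blocks, one rewrite
    dev.write(lb, f"data@{lb}")

print(len(dev.physical))  # 5 -- five sequential physical writes
print(dev.read(12))       # data@12 -- the map points at the newest copy
```

Real CoW filesystems add checksums, trees, and transaction groups on top, but the core effect is the same: random logical writes become a sequential physical stream.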
None of the current filesystems has a 1:1 relation between read IOs at the VFS layer and at the block layer.
I have no idea how it is with NTFS, and NTFS is not the subject here.
If you need Solaris, you can use it for free.
What you pay for is support.
It is like this with Solaris and all OpenSolaris derivatives.
To download and use regular Solaris, you don't need to pay anything.
The best thing you can do is just not try to max out your storage. And the fragmentation isn't that bad of a problem, because the ARC helps performance.
You can also ghetto-defragment it by moving data to a different dataset and then back, but that's cumbersome. Or you can add a mirror and remove the original.
Will the ZFS team add a defragmenter? Probably not. Personally, I feel the fragmentation is low (for the reason you said), and I'd much rather have some of the cool features like reflow and better dedup. It's a lot of work for little payoff. People fixate on this because they remember the old days of hard disks thrashing on Windows; that really isn't an issue we need to worry about.
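On the ARC point: a read cache means hot data is served from memory, so its on-disk layout rarely matters. The sketch below uses a plain LRU as a stand-in (the real ARC adaptively balances recency and frequency lists; this is only an illustration of a cache absorbing repeated reads):

```python
from collections import OrderedDict

class LruCache:
    """Plain LRU cache -- a simplified stand-in for ZFS's ARC."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()
        self.hits = self.misses = 0

    def get(self, key, load):
        if key in self.entries:
            self.entries.move_to_end(key)   # refresh recency on a hit
            self.hits += 1
            return self.entries[key]
        self.misses += 1
        value = load(key)                   # "slow" fragmented-disk read
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        return value

cache = LruCache(capacity=4)
workload = [1, 2, 1, 3, 1, 2, 4, 1, 2]       # a few hot blocks, reread often
for block in workload:
    cache.get(block, load=lambda b: f"block-{b}")
print(cache.hits, cache.misses)  # 5 4 -- most reads never touch the disk
```

Only the misses ever see the fragmented on-disk layout; the hot working set is served from memory, which is why fragmentation matters less in practice than the raw layout would suggest.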
On free Solaris: Solaris died to me the day they closed the source code. Try the Illumos distro Omni OS; it looks very promising. Friends don't let friends use Oracle software.
Last edited by k1e0x; 17 July 2020, 01:29 PM.
Originally posted by k1e0x View Post
I don't want to burst your bubble here, man, but ZFS will fragment. Pretty much every filesystem will at some point. ZFS is fragmentation-free until around 80-90% pool usage; after that, it's going to fragment.
The SLAB allocator keeps the impact of that issue at a completely acceptable, usually negligible level.
Most of the people using ZFS do not even monitor that metric (it is available in zdb output; IIRC kstat provides it as well).
The best thing you can do is just not try to max out your storage. And the fragmentation isn't that bad of a problem, because the ARC helps performance.
and ZFS is no exception here.
Will the ZFS team add a defragmenter? Probably not. Personally, the fragmentation is low (for the reason you said), and I'd much rather have some of the cool features like reflow and better dedup. It's a lot of work for little payoff.
On free Solaris: Solaris died to me the day they closed the source code. Try the Illumos distro Omni OS; it looks very promising.
Originally posted by kloczek View Post
Who said that it is not fragmenting at all?
The SLAB allocator keeps the impact of that issue at a completely acceptable, usually negligible level.
Most of the people using ZFS do not even monitor that metric (it is available in zdb output; IIRC kstat provides it as well).
That scenario badly affects all filesystems, not only ZFS,
and ZFS is no exception here.
And that is all there is to the gossip about ZFS and defragmentation.
Solaris still provides many improvements which OpenSolaris derivatives will never have, because of the cost of development.
Look at the short list on Omni OS.
KVM support, LX-Zones, Bhyve, PF, Crossbow, Zones, OpenZFS... That is a powerful OS, and it took a hell of a lot of development to do.
Last edited by k1e0x; 17 July 2020, 02:54 PM.