EXT4 Lands A Nice Performance Improvement For Appending To Delalloc Files
Originally posted by skeevy420:
When I first came here, this was all swamp. Everyone said I was daft to build an extent-based file system on a swamp, but I built it all the same, just to show them. It sank into the swamp. So I built a second one. That sank into the swamp. So I built a third. That burned down, fell over, then sank into the swamp. But the fourth one stayed up. And that's what you're going to get, Lad: the strongest extent-based file system in all of England.
Originally posted by coder:
How do you use de-duplication? The only way I'm aware of it being used is in snapshots.
As I've mentioned before, I view checksums as the main value that I derive from BTRFS, on my personal machines.
Originally posted by geerge:
If FS-level compression/dedupe/redundancy…
As I've mentioned before, I view checksums as the main value that I derive from BTRFS, on my personal machines.
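Those checksums only pay off when the data actually gets read back and verified, e.g. by a periodic scrub. A minimal sketch of kicking one off and checking the result (the mount point is just an example):

```python
#!/usr/bin/env python3
# Minimal sketch: run a foreground btrfs scrub so that the filesystem's
# checksums are actually read back and verified against the data on disk.
# The mount point below is only an example.
import subprocess

MOUNTPOINT = "/mnt/data"  # example btrfs mount point

# "-B": stay in the foreground and print statistics when finished.
subprocess.run(["btrfs", "scrub", "start", "-B", MOUNTPOINT], check=True)

# Show the summary (corrected / uncorrectable errors) of the last scrub.
subprocess.run(["btrfs", "scrub", "status", MOUNTPOINT], check=True)
```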
There are multiple other options, but I see no reason to try another file system: ext4, and ext2/ext3 before it, have all been very reliable and fast on my workstations. Compiling the kernel, for example, is never limited by the file system, as CPU load is always close to 100%. I don't think Abaqus or GIMP would be any faster on another file system either. It is good to see that improvements are still being made to such mature software.
Originally posted by digitaltrails:
Another aspect to consider is recoverability. I can't track down the reference, but I recall reading that XFS was best used with a UPS, whereas ext4 was generally OK without one (or at least remained recoverable via fsck). That was years ago, so it may well be XFS has improved since then.
That ext4 is mature and quite recoverable was further backed up by papers such as Shehbaz Jaffer, Stathis Maneas, Andy Hwang, and Bianca Schroeder, Evaluating File System Reliability on Solid State Drives (https://www.usenix.org/conference/at...ntation/jaffer). Their paper only considered btrfs, ext4 and F2FS, but the recoverability of ext4 turned out to be quite robust.
I believe the main reason we use XFS at my job is to combat filesystem fragmentation. If you leave about 5% to 10% of free space, that's plenty for XFS to keep fragmentation from getting too bad. On mechanical hard disks, fragmentation poses a serious performance issue.
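For what it's worth, xfs_db can report how fragmented an XFS filesystem actually is. A minimal sketch (the device path is only an example; xfs_db needs root):

```python
#!/usr/bin/env python3
# Minimal sketch: print XFS file fragmentation using xfs_db's "frag"
# command in read-only mode ("-r"). The device path is only an example.
import subprocess

DEVICE = "/dev/sdb1"  # example XFS block device

# Prints something like: "actual N, ideal M, fragmentation factor X.XX%"
subprocess.run(["xfs_db", "-c", "frag", "-r", DEVICE], check=True)
```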
Originally posted by digitaltrails:
This evening, while searching for the above reference, I also found this paper: Anthony Rebello, Yuvraj Patel, Ramnatthan Alagappan, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau, Can Applications Recover from fsync Failures? (https://www.usenix.org/conference/at...tation/rebello). I've only briefly scanned it, but it seems quite interesting.
Originally posted by digitaltrails:
My interest is purely from the perspective of a desktop user with about 2 TB for /home on a single SSD, no UPS, and root on a separate smaller SSD, with rotating media used for online and offline backups only. A lot may have changed since I last examined the issue, but I've stuck with ext4 because of reliability, not speed. By reliability, I mean an fsck has always managed to return me to a mountable file system, at which point it's easy to make comparisons with the various backups.
At my job, we make use of BTRFS' ability to do atomic snapshots: when making backups, we first take a snapshot and then back up the snapshot (rough sketch below). This has two advantages:
- minimizing the chance of inconsistencies due to the backup process reading different files at different times;
- letting you know exactly what was backed up, which in turn lets you verify the backup.
For relatively low-turnover data, snapshots also serve as a type of backup that protects mostly against user error (e.g. "whoops, deleted the wrong file/directory!"). We use hourly snapshots on a departmental fileserver for this purpose.
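The sketch below shows the general shape of that snapshot-then-backup step, not our actual script; the subvolume path and the rsync destination are made up for illustration.

```python
#!/usr/bin/env python3
# Rough sketch of a snapshot-then-backup step. The paths and the rsync
# destination are invented for illustration; only the btrfs/rsync
# invocations themselves are standard.
import subprocess
from datetime import datetime

SOURCE = "/srv/data"                                # example btrfs subvolume
SNAPSHOT = f"/srv/.snapshots/backup-{datetime.now():%Y%m%d-%H%M%S}"
DEST = "backuphost:/backups/data/"                  # hypothetical rsync target

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

# 1. Take a read-only, atomic snapshot of the live subvolume.
run("btrfs", "subvolume", "snapshot", "-r", SOURCE, SNAPSHOT)
try:
    # 2. Back up the snapshot rather than the live tree, so every file
    #    comes from the same instant and the backed-up state is known.
    run("rsync", "-aHAX", "--delete", SNAPSHOT + "/", DEST)
finally:
    # 3. Remove the temporary snapshot once the backup has finished.
    run("btrfs", "subvolume", "delete", SNAPSHOT)
```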
Originally posted by digitaltrails:
it may well be XFS has improved since then.
I can say from experience that XFS in RHEL 8 does not have this particular flaw where any system reset causes data loss. I haven't bothered investigating the writeback data-loss scenarios, since I gave up on XFS completely and would never use it, no matter how much FUD Red Hat continues to peddle about EXT4.
Originally posted by digitaltrails:
I've stuck with ext4 because of reliability, not speed.
Originally posted by Malsabku:
Can someone explain to me the reason to use EXT4, when XFS is faster and has more features?
Another aspect to consider is recoverability. I can't track down the reference, but I recall reading that XFS was best used with a UPS, whereas ext4 was generally OK without one (or at least remained recoverable via fsck). That was years ago, so it may well be XFS has improved since then.
That ext4 is mature and quite recoverable was further backed up by papers such as Shehbaz Jaffer, Stathis Maneas, Andy Hwang, and Bianca Schroeder, Evaluating File System Reliability on Solid State Drives (https://www.usenix.org/conference/at...ntation/jaffer). Their paper only considered btrfs, ext4 and F2FS, but the recoverability of ext4 turned out to be quite robust.
This evening, while searching for the above reference, I also found this paper: Anthony Rebello, Yuvraj Patel, Ramnatthan Alagappan, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau, Can Applications Recover from fsync Failures? (https://www.usenix.org/conference/at...tation/rebello). I've only briefly scanned it, but it seems quite interesting.
My interest is purely from the perspective of a desktop user with about 2 TB for /home on a single SSD, no UPS, and root on a separate smaller SSD, with rotating media used for online and offline backups only. A lot may have changed since I last examined the issue, but I've stuck with ext4 because of reliability, not speed. By reliability, I mean an fsck has always managed to return me to a mountable file system, at which point it's easy to make comparisons with the various backups.