Linux 5.14 SSD Benchmarks With Btrfs vs. EXT4 vs. F2FS vs. XFS

  • brucethemoose
    replied
    Originally posted by Sakuretsu View Post
After seeing this, F2FS is really piquing my interest now.
    Originally posted by Azrael5 View Post
It looks like F2FS might be the best solution for SSDs, and I assume for USB drives as well.
I have been using it for over a year, and it rocks: btrfs-like features (checksumming, compression, casefolding, native encryption, and so on, albeit no snapshots or RAID) with basically no speed compromises and none of btrfs's write amplification. It handles torturous workloads (like writing a million PNGs) better than ext4 as well.


My biggest issue is that its default flags are suboptimal, and it's missing clearer documentation. For example, I had to dig through the source code to learn that there's a 16-extension limit for the compression whitelist/blacklist, and I *still* can't figure out why compress_cache refuses to work.
    Last edited by brucethemoose; 16 November 2021, 03:23 PM.
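For reference, a minimal sketch of the kind of F2FS compression setup being discussed. The device path and extensions are placeholders; the options come from the kernel's f2fs documentation:

```shell
# Compression must be enabled at format time for the compress_*
# mount options to work; /dev/sdX is a placeholder device.
mkfs.f2fs -O extra_attr,inode_checksum,sb_checksum,compression /dev/sdX

# Mount with zstd compression, whitelisting a couple of extensions.
# The extension whitelist/blacklist is capped at 16 entries each.
mount -t f2fs \
  -o compress_algorithm=zstd:6,compress_extension=txt,compress_extension=log \
  /dev/sdX /mnt

# Compression can also be requested per file via the 'c' attribute:
chattr +c /mnt/somefile
```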




  • fkoehler
    replied
    Originally posted by curfew View Post
Taking a snapshot doesn't copy any data. The copying happens on demand the next time you change the file. So the copying always happens while you are actively using the computer, and that is when you take the performance hit.

Taking a single snapshot on a daily basis is "frequent" in the same sense as darkbasic used it. Frequent would be a few times each day, or even hourly, similar to Apple's Time Machine.
I kind of see your point, but have you actually measured whether this affects you in any meaningful way? Don't most file formats do a full copy on save anyway? Video, JPEG and co., all the office formats... So for Joe Average doing actual work, there would be very little difference. The only productivity use case I can personally remember, from my thesis, is HyperChem molecular dynamics trajectories, which had a fixed-size binary format that could be (ab)used as a pseudo "structure database". Very niche.

Then there are databases, which most people will run with nodatacow or have good reasons to keep slow and safe.

The only thing I can see that could actually matter would be humongously large log files that really are written in O_APPEND mode, and those would only be slow for the first write after each snapshot. So if you snapshot in 10-minute intervals and have about 10 transactions per second that each cause a log write, less than 0.1% of your transactions get slowed down.

    Premature optimization is the root ....
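As a concrete illustration of the nodatacow-for-databases point made above: on btrfs this is usually done with the `C` file attribute on an empty directory, which new files inherit. Paths here are illustrative:

```shell
# The No_COW attribute must be set while the directory is empty;
# files that already exist keep their old CoW behavior.
mkdir -p /var/lib/mysql
chattr +C /var/lib/mysql

# Files created inside now inherit the No_COW flag:
touch /var/lib/mysql/ibdata1
lsattr /var/lib/mysql/ibdata1   # the 'C' attribute should be listed
```

Note that nodatacow also disables btrfs checksumming for those files, which is why the thread's point about databases carrying their own checksums matters.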



  • coder
    replied
    Originally posted by fkoehler View Post
    Well, you can do snapshots during the night hours and weekends,
    Snapshots aren't the problem. They're basically free. We use snapperd to make hourly snapshots on our departmental fileserver, which makes them actually useful!

The overhead that comes with snapshots shows up when they're deleted. I don't know whether snapperd has added any options to schedule that for off-hours, but that's the bit you'd want to schedule.

    Originally posted by fkoehler View Post
Also, correct me if I'm wrong, but while theoretically nodatacow files should not get deduplicated during a snapshot, in practice files that deserve "nodatacow" are heavily used and pretty much always change between snapshots, so there should be no difference in disk-space requirements ...
    Depends on how often you do snapshots.

    Anyway, my solution is to create a subvolume with snapshots disabled, and try to get all the high-turnover stuff located there. The thing about high-turnover data is that it also tends to be low value. So, there's less benefit in snapshotting it, anyway.

    An exception to this might be databases, although they each have their own backup mechanism.
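A sketch of the layout described above: snapshots do not descend into child subvolumes, so high-turnover data placed in its own subvolume is effectively excluded. The off-hours scheduling of snapper's cleanup can be done by overriding its systemd timer, assuming a systemd-based setup; paths and the 3 AM time are illustrative:

```shell
# Scratch space as its own subvolume; snapshots of the parent
# subvolume will not include its contents.
btrfs subvolume create /scratch

# snapper-cleanup.timer drives the periodic snapshot deletion
# (where the overhead occurs); move it to a fixed off-hours time.
mkdir -p /etc/systemd/system/snapper-cleanup.timer.d
cat > /etc/systemd/system/snapper-cleanup.timer.d/override.conf <<'EOF'
[Timer]
# Clear the default relative triggers, then run daily at 3 AM.
OnBootSec=
OnUnitActiveSec=
OnCalendar=*-*-* 03:00:00
EOF
systemctl daemon-reload
```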



  • pracedru
    replied
    Originally posted by bug77 View Post

    Not much "wow!" there when you realize it's 12yo.
    Making great software takes time.
    Linux also took its time.
Windows wasn't really useful for anything before version 3.11, and even then it wasn't really any good.
Mac OS didn't have real multitasking before OS X.
The ext file system wasn't really mainstream before ext3.

    But I am just glad that BTRFS is where it is now.
    Last edited by pracedru; 05 September 2021, 05:03 AM.



  • curfew
    replied
    Originally posted by pal666 View Post
No, the context was that nobody should enable CoW for databases
    Or that enabling COW for databases must be a conscious decision and at that point the relative performance to non-COW filesystems becomes meaningless.



  • pal666
    replied
    Originally posted by S.Pam View Post
Yes, you don't get self-healing if the DB doesn't provide it. But you don't get self-healing on ext4 either, and on btrfs it requires RAID1, i.e. it doesn't apply to the current benchmark anyway.



  • pal666
    replied
    Originally posted by F.Ultra View Post
No, the context was that databases carry internal checksums and can catch data errors
No, the context was that nobody should enable CoW for databases.



  • sinepgib
    replied
    Originally posted by curfew View Post
Taking a snapshot doesn't copy any data. The copying happens on demand the next time you change the file. So the copying always happens while you are actively using the computer, and that is when you take the performance hit.

Taking a single snapshot on a daily basis is "frequent" in the same sense as darkbasic used it. Frequent would be a few times each day, or even hourly, similar to Apple's Time Machine.
I could probably look it up myself, but you seem to know these things: I'd expect a changed file to keep sharing its unchanged blocks with the snapshot and only allocate new blocks for the changed parts, as long as nothing shifts. Is that how it's implemented?



  • S.Pam
    replied
    Originally posted by pal666 View Post
    nodatacow doesn't disable snapshots. you only enable it for database files because they do cow themselves. most of your files are not databases and btrfs is reasonably fast with them
    See what happens. https://www.phoronix.com/forums/foru...90#post1276490

