Bcachefs Prepares Last Minute Fixes For Linux 6.7

  • #31
    Originally posted by AndyChow
    ZFS can't do write cache (not in a real way), and that's a major issue.
    Well, even though several (either loosely or completely unrelated) things keep me from using ZFS instead of Btrfs (on Linux; I don't use Solaris or FreeBSD anyway), I personally find the new(-ish) "persistent L2ARC" much more promising than the simple write(-through/around) caching of bcachefs.

    I want the filesystem to keep a "smart" cache, preferably with user-tunable parameters that control:

    1: the weight of metadata over data (the option to reserve a fairly big chunk of the cache space for filesystem metadata, even if it hasn't been touched for a long time, and even then only relatively rarely)

    2: rather than "fresh in, old out" or "fresh and mainly random in, old and mainly random/whatever out", I want some kind of database-like accounting of what's accessed not only often but with "importance" (I mean, if accessing something from the SSD cache instead of the HDD backing device makes the application run much faster, then don't drop it from the cache too eagerly just to store something fresh that might never really benefit from being cached).

    Yes, I am aware that #1 and #2 technically have not only a fair amount of overlap but also some conflict (that's why I want a tunable metadata/data split of the cache space), and the last part of #2 is probably asking for a bit too much, but that's relatively close to my current understanding of how L2ARC works in a nutshell (although I never tested it extensively myself, especially not the persistent L2ARC, which didn't even exist back then). A rough mapping of #1 onto ZFS's existing knobs is sketched below.
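    For what it's worth, here is a minimal sketch of how the existing OpenZFS knobs roughly cover #1 and the persistent L2ARC I mentioned: the secondarycache property can restrict a dataset's L2ARC use to metadata, a cache vdev provides the SSD tier, and the l2arc_rebuild_enabled module parameter lets the cache survive reboots. The pool name, dataset and device path are made-up placeholders, and nothing here is a tuning recommendation:

        # Sketch only: map wish #1 and persistent L2ARC onto existing OpenZFS knobs.
        # "tank", "tank/media" and the device path are hypothetical placeholders.
        import subprocess

        def run(cmd):
            print("would run:", " ".join(cmd))
            # subprocess.run(cmd, check=True)  # uncomment to actually execute (needs root)

        # Keep only metadata in the L2ARC for this dataset (a coarse version of #1).
        run(["zfs", "set", "secondarycache=metadata", "tank/media"])

        # Attach a small SSD/Optane device as an L2ARC cache vdev.
        run(["zpool", "add", "tank", "cache", "/dev/disk/by-id/nvme-example"])

        # Persistent L2ARC (OpenZFS 2.0+): the cache contents are rebuilt after a
        # reboot while this module parameter is enabled (it defaults to 1 nowadays).
        run(["sh", "-c", "echo 1 > /sys/module/zfs/parameters/l2arc_rebuild_enabled"])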
    Even though my peak sequential speeds took a deep dive due to eventual fragmentation [yes, I know I could have tried working around that issue, but I didn't want to fight with my filesystem], even working with high-bandwidth video files felt smoother (like seeking through the timeline) with ZFS compared to Btrfs (despite the significantly higher peak throughput after a regular nightly defragmentation cronjob on Btrfs). But I guess that's partly due to ZFS's I/O scheduler, not just its ARC caching, and the general Linux in-tree schedulers have also improved since then. (But hey, why is it so hard to try and re-implement something like ZFS's scheduler and ARC in mainline Linux...?)

    However, I have this crazy idea... Since mainline Linux already has tunables to prefer filesystem metadata over filesystem data in the RAM cache !AND! tunables to control swappiness !AND! swapfiles/partitions are mostly on permanent storage (HDD/SSD/Optane/etc.), why can't we just have a filesystem-agnostic "ZFS persistent L2ARC" kind of thing in the Linux kernel in general (for any and all compatible filesystems)? (Make it a separate file/partition/drive for all I care, probably even better, so I can use a relatively small and cheap Optane drive for that "fsswap" even if I don't wish to use a regular swap. Hell, make it RAID-0-like, so I can just attach two dirt-cheap Optane M10 drives in cheap USB enclosures... = lightspeed read-only metadata access to HDD-backed filesystems for me...)
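    (For clarity, these are the two existing knobs I mean, in a minimal sketch with purely illustrative values, not recommendations: a lower vfs_cache_pressure makes the kernel hang on to dentry/inode metadata caches longer, while swappiness controls how eagerly anonymous pages get swapped out.)

        # Sketch of the two existing VM tunables referenced above; the values are
        # examples only, and writing them requires root.
        from pathlib import Path

        TUNABLES = {
            # Below the default of 100, the kernel is more reluctant to reclaim
            # dentry/inode (filesystem metadata) caches relative to data pages.
            "vm/vfs_cache_pressure": 50,
            # How aggressively anonymous memory is swapped out to the swap device.
            "vm/swappiness": 10,
        }

        for name, value in TUNABLES.items():
            node = Path("/proc/sys") / name
            print(f"{name}: current = {node.read_text().strip()}, example = {value}")
            # node.write_text(f"{value}\n")  # uncomment to apply (needs root)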
    Last edited by janos666; 07 January 2024, 06:46 PM.
