
OpenZFS 2.2-rc3 Released With Linux 6.4 Support

  • #31
    Oh, and if anyone is actually curious about this -- setting up a persistent L2ARC massively decreased my loading times for Death Stranding Director's Cut and sped up how fast both it and Epic Games start. If I had to guess, loading the game went from around a 10 Mississippi count to a 3 Mississippi. Starting Epic went from a 4 Mississippi to practically instantaneous.

    I used "tar -cv /path/to/DeathStrandingDC 2>/tmp/tt |pv >/dev/null" as a way to try to fill the cache with the game. That managed to cache 51.3 of the game's 75GB. I should probably do that on my Wine prefix, too.

    /etc/modprobe.d/zfs.conf
    Code:
    options zfs zfs_max_recordsize=16777216
    options zfs l2arc_headroom=0
    options zfs l2arc_write_max=4194304000
    options zfs l2arc_write_boost=4194304000
    options zfs l2arc_rebuild_enabled=1
    options zfs l2arc_noprefetch=0
    headroom=0 and rebuild_enabled=1 enable the persistent L2ARC
    noprefetch=0 lets the L2ARC also cache prefetched (streaming) reads instead of skipping them
    write_max/boost raise the default L2ARC write speed of 8 MB/s to 4000 MB/s because it's a damn 4500 MB/s NVMe.
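
    For anyone who wants to test these before touching modprobe.d, the same knobs should be settable at runtime through the module parameters under /sys (the values are just mine from above; they don't survive a reboot):
    Code:
    # as root; takes effect immediately
    echo 0          > /sys/module/zfs/parameters/l2arc_headroom
    echo 4194304000 > /sys/module/zfs/parameters/l2arc_write_max
    echo 4194304000 > /sys/module/zfs/parameters/l2arc_write_boost
    echo 1          > /sys/module/zfs/parameters/l2arc_rebuild_enabled
    echo 0          > /sys/module/zfs/parameters/l2arc_noprefetch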

    Ignore max_recordsize. That has nothing to do with L2ARC. I have it set but all my pools use 1M, not 16M, like that would allow. That does nothing beyond allowing someone to set recordsize above 1M (up to 16M) at dataset creation time, which I'll probably do for future pools to store games and RAW photos. F'ing read up on that setting before you use it. Don't just blindly copy/paste my stuff.

    Those are also set in /etc/kernel/cmdline for UKIs.
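
    For reference, the kernel command line form of a module option is just zfs.<parameter>=<value>, so the UKI cmdline entries would look roughly like this (appended to the existing options):
    Code:
    zfs.l2arc_headroom=0 zfs.l2arc_write_max=4194304000 zfs.l2arc_write_boost=4194304000 zfs.l2arc_rebuild_enabled=1 zfs.l2arc_noprefetch=0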



    • #32
      Originally posted by skeevy420 View Post
      Persistent L2ARC is great, but the L2ARC use case in general depends very heavily on everyone's setup (system).
      Same goes for cache/log devices.
      Only a special vdev would actually be a general recommendation, but not with a 1M recordsize, more in the range of a 256K recordsize (sketched below).

      I would say that if people have enough memory, it's actually faster without L2ARC; if not, then well, yeah, L2ARC is great.
      For log/cache devices, the issue is simply that normal NVMe drives will wear out very quickly (tbh, if they die as a log device, that's not an issue).
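
      For the special vdev route, a minimal sketch would be something like this -- pool name and device paths are placeholders, and the special vdev should be mirrored because losing it means losing the pool:
      Code:
      zpool add tank special mirror /dev/disk/by-id/nvme-DISK1 /dev/disk/by-id/nvme-DISK2
      zfs set recordsize=256K tank
      zfs set special_small_blocks=64K tank   # optional: route small blocks to the special vdev too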

      But tbh, if people use ZFS on their personal computers at all, it really only makes sense with HDDs (because of the memory caching, L2ARC, or even a cache device).

      With SSDs, any mdadm/LVM/ext4 setup will outperform ZFS by a factor of 2 in read/write IOPS and throughput.

      But if you use SSDs and really have files that you don't touch for an eternity, plus you need a CoW filesystem with snapshots and all that... sure, ZFS is great then.
      But that's only in regard to a PC.

      For a server, ZFS surely makes a lot of sense, if only for Samba VSS; and if you have a cluster, migration between nodes via zfs send/receive is a must-have (sketched below).
      Add snapshots, deduplication, and compression, mainly compression for a performance gain, plus you usually have a lot of memory anyway and you rarely reboot a server...
      That's a perfect match.
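
      The migration itself is basically just snapshot plus send/receive, roughly like this (dataset and host names are only examples):
      Code:
      zfs snapshot -r tank/vm-101@migrate
      zfs send -R tank/vm-101@migrate | ssh node2 zfs receive -F tank/vm-101
      # later, ship only the changes as an incremental stream
      zfs snapshot -r tank/vm-101@migrate2
      zfs send -R -i @migrate tank/vm-101@migrate2 | ssh node2 zfs receive -F tank/vm-101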

      Still, it's good to see that people love/enjoy ZFS on personal computers; it's a great way to learn something and worth spending time with.

      Cheers :-)



      • #33
        Originally posted by intelfx View Post
        This is all nice and well and a flashy slogan, but I'm not an enterprise with a dedicated person in charge of the planning. I want the tech to work for me, not the other way around, and ZFS does not let me do that for no reason other than a set of technical decisions that prioritise enterprise use cases at the expense of non-enterprise use cases.
        Enough with the hyperbole already. There's a native Linux filesystem designed to let you add any size single disk willy-nilly. It's called Btrfs, and the result of that design was absolute garbage "RAID" 5/6/10. Pick your poison.

        Your constant assertion that ZFS is only useful for a large enterprise is ridiculous. You can start with a single disk, add a second as a mirror, then start adding pairs of disks as new mirrors into your pool (RAID10); see the sketch below. I'd hardly consider buying 2 disks at a time something only a multi-million/billion dollar company with a dedicated storage admin can handle.

        It's not hard to make the argument that ZFS is actually the preferred choice for consumer data hoarders or home lab geeks. Whether that means rolling your own, using FreeNAS / TrueNAS, or something like Proxmox, there's a reason for the focus on ZFS. Btrfs is fine for single disk root, but I and many others (including distro creators, I'd wager) would drop it in a heartbeat and use ZFS there too if it weren't for the license / out-of-tree problem. Maybe bcachefs will finally give us an in-tree COW filesystem where RAID levels for bulk storage actually work and are performant.
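
        That growth path is literally three commands; a rough sketch with placeholder device names:
        Code:
        zpool create tank /dev/sda                # start with a single disk
        zpool attach tank /dev/sda /dev/sdb       # turn it into a two-way mirror
        zpool add tank mirror /dev/sdc /dev/sdd   # later: add another mirror pair, striped across (RAID10-style)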
        Last edited by pWe00Iri3e7Z9lHOX2Qx; 30 July 2023, 02:35 PM.



        • #34
          Originally posted by Ramalama View Post

          I totally agree with you.

          My use case and configuration are specific enough that a persistent L2ARC makes sense over in-RAM caching and/or more RAM. My gaming data is WORM: Write Once, Read Many. I want the games I'm currently playing to stay on a storage medium faster than my pool, and I'd like them to stay there between boots so I don't have to repopulate the cache at boot or before playing.

          Some games can be upwards of 120GB in size and expect SSD-or-better read speeds that my zpool just couldn't provide with 3x 7200 RPM HDDs or, currently, 2 of those HDDs and an SSD. In those instances, not being in ARC means going between the HDDs and RAM. While more RAM might help by allowing more to stay in ARC, it wouldn't fix the underlying issue of slow pool-to-RAM speeds when I'm loading up a new area. L2ARC changes that to going between an NVMe and RAM; much faster, and it fixes the underlying issue of slow pool read speeds. Persistent L2ARC means I don't have to run an L2ARC population script at boot (something like the sketch below).
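
          The population script I mean is nothing fancy -- a sketch along these lines, with the paths being whatever I'm currently playing:
          Code:
          #!/bin/sh
          # warm the L2ARC by streaming the current games through the ARC
          for dir in "/path/to/DeathStrandingDC" "$HOME/.wine"; do
              tar -cf - "$dir" 2>/dev/null | pv > /dev/null
          done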

          Whenever the day comes that I finally replace all my HDDs with SSDs, the L2ARC probably won't be necessary since my underlying storage should be fast enough. It's more of an interim solution and a fun experiment, since $50 for a 1TB NVMe caching drive is a lot cheaper than the best-case price of $150 per disk for 2 more 4TB SSDs or the $120 to $200 I'd have to spend on more RAM.



          • #35
            I just set up a new workstation with Debian installed on an SSD boot drive using ext4 and mounted a ZFS NVMe drive for data as my home. So far so good; we'll see how this works out in the long run. Hoping to be able to send snapshots directly to my TrueNAS this way.
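
            In case it helps anyone doing the same, the rough shape of it -- pool, dataset, and host names are only placeholders:
            Code:
            zpool create -o ashift=12 data /dev/nvme0n1
            zfs create -o mountpoint=/home -o compression=lz4 data/home
            snap=data/home@$(date +%F)
            zfs snapshot "$snap"
            zfs send "$snap" | ssh truenas zfs receive -u tank/backups/home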

