An Exciting Btrfs Update With Encoded I/O, Fsync Performance Improvements

  • skeevy420
    replied
    Originally posted by Developer12 View Post
    It's going to be real fun to see the BTRFS reactions when OpenZFS lands the patchset for adding disks to a RAID-5/6 later this year.

    Not only is RAID-5/6 terminally broken on BTRFS, but expansion has probably been the only real feature BTRFS has had over ZFS.

    At this rate somebody'll evolve RedoxFS into the next ZFS replacement and merge that into the kernel before BTRFS's RAID is usable. In other words: the end of the universe.
    If you're crazy like me, you can do ZFS expansion now without the loss of space that the upcoming method will have.

  • skeevy420
    replied
    Originally posted by reza View Post

    Do you use EXT4 for your root and ZFS for other partitions? Can you explain a bit more and share what your hard disk structure is? Thanks!
    Sorry, I've been away for a few days and I have over 2,000 notifications in my inbox here.

    Yes. I'm currently using the standard Manjaro ext4 setup, and systemd mounts my ZFS stuff under /zeta/blah/yada. Unless I'm trying some esoteric setup out, my standard install method is to just go with the distribution defaults and install a zfs-dkms package. Because I use the dkms packages, if I'm on a distribution like Fedora or Arch that updates the kernel faster than OpenZFS has releases, I'll also install a linux-lts package or compile my own so I don't upgrade and lose access to my non-root data (I did the same thing a decade ago when I had my last Nvidia GPU -- I consider it good practice to have a backup kernel when I use and rely on out-of-tree modules).
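    (For illustration, a setup like that on an Arch-based distribution might look roughly like the following; the exact package names and the yay AUR helper are assumptions, so adjust for your distribution.)

    Code:
    sudo pacman -S linux-lts linux-lts-headers  # backup kernel in case mainline updates outrun OpenZFS releases
    yay -S zfs-dkms zfs-utils                   # DKMS rebuilds the module against every installed kernel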

    My current disk structure is Linux on a 480GB SSD, a ZFS raidz using three 4TB HDDs, and a 1TB NVMe with Windows that hasn't been booted in a month or so. I've been considering wiping Windows and using that disk to hack together SteamOS on a ZFS root.

    The way I see it: Valve has the January Arch ISO on their mirror. I reckon I could just use that ISO, change the Arch repos to Valve's, add some keyrings, do a standard Arch install, add some SteamOS packages, and BAM: SteamOS 3.

    Here's my actual ZFS mountpoints to give you an idea of how I use it for my desktop stuff:

    Code:
    NAME                         PROPERTY     VALUE                   SOURCE
    zeta                         mountpoint   /zeta                   default
    zeta                         compression  lz4                     local
    zeta/layer                   mountpoint   /zeta/layer             default
    zeta/layer                   compression  lz4                     inherited from zeta
    zeta/layer/documents         mountpoint   /zeta/documents         local
    zeta/layer/documents         compression  zstd-19                 local
    zeta/layer/games             mountpoint   /zeta/games             local
    zeta/layer/games             compression  lz4                     inherited from zeta
    zeta/layer/games/emulation   mountpoint   /zeta/games/emulation   local
    zeta/layer/games/emulation   compression  zstd-19                 local
    zeta/layer/games/pc          mountpoint   /zeta/games/pc          local
    zeta/layer/games/pc          compression  lz4                     inherited from zeta
    zeta/layer/games/pc/windows  mountpoint   /zeta/games/pc/windows  local
    zeta/layer/games/pc/windows  compression  lz4                     inherited from zeta
    zeta/layer/music             mountpoint   /zeta/music             local
    zeta/layer/music             compression  zstd-19                 local
    zeta/layer/pictures          mountpoint   /zeta/pictures          local
    zeta/layer/pictures          compression  zstd-19                 local
    zeta/layer/programs          mountpoint   /zeta/programs          local
    zeta/layer/programs          compression  lz4                     inherited from zeta
    zeta/layer/programs/linux    mountpoint   /zeta/programs/linux    local
    zeta/layer/programs/linux    compression  lz4                     inherited from zeta
    zeta/layer/programs/storage  mountpoint   /zeta/programs/storage  local
    zeta/layer/programs/storage  compression  zstd-19                 local
    zeta/layer/programs/windows  mountpoint   /zeta/programs/windows  local
    zeta/layer/programs/windows  compression  lz4                     inherited from zeta
    zeta/layer/projects          mountpoint   /zeta/projects          local
    zeta/layer/projects          compression  lz4                     inherited from zeta
    zeta/layer/videos            mountpoint   /zeta/videos            local
    zeta/layer/videos            compression  zstd-19                 local
    As you can see, I alternate between speedy LZ4 and Jesus Christ Why ZSTD-19. Everything at 19 is stuff that will only ever be written the one time, so I give it the one-time overkill pass to compress as much as possible. All the LZ4 stuff is read/write data. Both LZ4 and ZSTD open files extremely fast, so I consider them the best choices for read/write and write-once data.
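    (For reference, a listing like the one above can be pulled with plain zfs get, and the per-dataset compression choices are just ordinary property settings; the dataset names below mirror the layout above.)

    Code:
    zfs get -r mountpoint,compression zeta                   # prints a listing like the one above
    zfs create -o compression=zstd-19 zeta/layer/documents   # write-once data: squeeze it hard
    zfs create -o compression=lz4 zeta/layer/projects        # read/write data: keep it fast
    zfs set compression=zstd-19 zeta/layer/music             # or change the property on an existing dataset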
    Last edited by skeevy420; 26 March 2022, 08:11 AM.

  • intelfx
    replied
    Originally posted by rleigh View Post

    Loop devices are not even close to ZFS zvols in terms of the features offered. They aren't just exposing a file as a device node. They are based on the ZFS DSL layer just like ZFS datasets, supporting nearly all of the properties you can set on datasets such as copies=n, compression, encryption, logbias etc. And being based on the DSL they support copy-on-write transactions just like dataset writes, so you can snapshot them, clone them, send/recv them etc., just like datasets. You can use them as the backing storage of virtual machines and then continuously and transparently snapshot them and offload the VM state while it's running, for example. Or clone it and fire up a new VM based on the old one, all while the old one is running. You can't do that with loopback devices.
    More buzzwords.

    In btrfs, you just create the backing file on a separate subvol. Bam, problem solved. No buzzwords needed.
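    (A minimal sketch of that approach, with made-up paths and sizes:)

    Code:
    btrfs subvolume create /var/lib/vms                          # dedicated subvol for VM backing files
    truncate -s 32G /var/lib/vms/disk0.raw                       # sparse raw image the VM uses directly
    btrfs subvolume snapshot -r /var/lib/vms /var/lib/vms.snap   # read-only snapshot of just that subvol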
    Last edited by intelfx; 24 March 2022, 10:26 PM.

  • NobodyXu
    replied
    Originally posted by rleigh View Post

    Loop devices are not even close to ZFS zvols in terms of the features offered. They aren't just exposing a file as a device node. They are based on the ZFS DSL layer just like ZFS datasets, supporting nearly all of the properties you can set on datasets such as copies=n, compression, encryption, logbias etc. And being based on the DSL they support copy-on-write transactions just like dataset writes, so you can snapshot them, clone them, send/recv them etc., just like datasets. You can use them as the backing storage of virtual machines and then continuously and transparently snapshot them and offload the VM state while it's running, for example. Or clone it and fire up a new VM based on the old one, all while the old one is running. You can't do that with loopback devices.
    Doesn't a loopback device just act like a regular file?

    Thus, compression is applied as usual, and the same goes for CoW.

    As for file cloning, you can do that using cp --reflink.
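    (For example, with made-up file names:)

    Code:
    cp --reflink=always disk0.img disk0-clone.img   # CoW clone; extents are shared until either copy is written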

  • rleigh
    replied
    Originally posted by intelfx View Post

    There absolutely is, it just doesn't have a fancy overhyped name. Linux has had support for creating loop devices for ages.
    Loop devices are not even close to ZFS zvols in terms of the features offered. They aren't just exposing a file as a device node. They are based on the ZFS DSL layer just like ZFS datasets, supporting nearly all of the properties you can set on datasets such as copies=n, compression, encryption, logbias etc. And being based on the DSL they support copy-on-write transactions just like dataset writes, so you can snapshot them, clone them, send/recv them etc., just like datasets. You can use them as the backing storage of virtual machines and then continuously and transparently snapshot them and offload the VM state while it's running, for example. Or clone it and fire up a new VM based on the old one, all while the old one is running. You can't do that with loopback devices.
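    (To make that concrete, a rough sketch of the kind of zvol workflow described above; the pool, dataset, and host names are invented, but the subcommands are standard zfs(8) ones.)

    Code:
    zfs create -V 32G -o compression=lz4 -o copies=2 tank/vms/disk0         # zvol with dataset-style properties
    zfs snapshot tank/vms/disk0@pre-upgrade                                 # instant snapshot while the VM runs
    zfs clone tank/vms/disk0@pre-upgrade tank/vms/disk1                     # writable clone for a second VM
    zfs send tank/vms/disk0@pre-upgrade | ssh backup zfs recv tank2/disk0   # offload the state to another box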

  • mether
    replied
    Originally posted by pWe00Iri3e7Z9lHOX2Qx View Post
    At the end of the day, ZFS is used for huge storage arrays of mission critical data in production. Nobody in their right mind would do that with BTRFS.
    This is an incorrect assertion. https://btrfs.wiki.kernel.org/index....oduction_Users

  • darkbasic
    replied
    "The Btrfs VFS code now allows reflinks and deduplication from two different mounts of the same file-system"

    I can't believe it finally happened!

  • intelfx
    replied
    Originally posted by portablenuke View Post

    Things ZFS can do that BTRFS can't: export space as a block device, do parity RAID, convert a folder into a dataset, and have nice tools.

    ZFS has the ability to expose space in the pool as a block device. Creating an 8GB ZFS volume is analogous to creating an 8GB logical volume with LVM. BTRFS doesn't have this ability.
    There absolutely is, it just doesn't have a fancy overhyped name. Linux has had support for creating loop devices for ages.
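    (Side by side, the approaches being compared look roughly like this, with illustrative names and sizes:)

    Code:
    zfs create -V 8G tank/vm0                             # ZFS zvol: shows up as /dev/zvol/tank/vm0
    lvcreate -L 8G -n vm0 vg0                             # LVM logical volume: shows up as /dev/vg0/vm0
    truncate -s 8G vm0.img && losetup -f --show vm0.img   # plain file on any fs, exposed as /dev/loopN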

  • Paradigm Shifter
    replied
    Originally posted by reza View Post
    skeevy420 and others: I'd like to know your setup for partitioning etc... You mentioned your root is EXT4 and you use ZFS for others. I'd appreciate it if people could elaborate. Thanks!
    I'm a relative newcomer to ZFS (generally preferring more mature filesystems like ext4 or XFS), so I am still finding my way, but on the systems currently running ZFS (all two of them...) I use ext4 for /root and /home, ZFS with RAID-Z1 on three SATA SSDs for "fast" data, and RAID-Z1 or Z2 across either four or six high-capacity HDDs for "slow" data. It's been well behaved enough that I will probably expand ZFS to a few other systems as time permits, and get more adventurous with what I do. It was incredibly easy to set up and so far has coped gracefully with the one time I managed to run out of RAM (on a 1.5TB system... oops...) (read: I didn't lose any data).
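    (Pools along those lines can be created roughly like this; the device and pool names are made up, and a real setup would normally use /dev/disk/by-id paths:)

    Code:
    zpool create fast raidz1 /dev/sda /dev/sdb /dev/sdc                              # three SATA SSDs, "fast" data
    zpool create slow raidz2 /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi   # six HDDs, "slow" data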

  • R41N3R
    replied
    Imagine a btrfs news post without someone telling you about zfs ;-)
