
ZFS On Linux Lands TRIM Support Ahead Of ZOL 0.8


  • #21
    Originally posted by jrch2k8 View Post

    Wow, you are so sapient on file systems I should hire you like right now; damn, Google should have hired you like yesterday. Wow

    1.) ZFS doesn't and will never need something as crude as fsck, because it's COW + self-healing (i.e., it fscks itself); the closest thing would be resilvering (mostly for physical disk replacement)

    2.) ZFS doesn't require defrag at all since it's COW, mate, so it never slows down over time; plus, unlike toy filesystems, ZFS does dedup and all sorts of RAID combos that make this even more of a non-issue

    3.) AHH??? Doesn't WHAT??? AHH??? ZFS encrypts, boy, and it encrypts hard:

    -- wanna encrypt a whole pool with a shiny RAID 60 across 20 drives with M.2 caches? done, this is not even an issue
    -- wanna encrypt only a volume or set of volumes in that pool? done
    -- wanna encrypt only certain handpicked snapshots of that volume or set of volumes? sure, are you even trying to make this hard?
    -- wanna encrypt a deduped and compressed large-dnode set of snapshots of those volumes, but in a pool that uses iSCSI hard drives from a JBOD? easy as cake, boy
    -- wanna use those for booting the OS as well, with systemd-boot? lol, it's like you're not even trying

    CAVEAT for 3): for NATIVE ACCELERATED ENCRYPTION you need the 0.8 release and a recent enough kernel.
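    For the record, the dataset-level native encryption described above looks roughly like this on ZoL 0.8; the pool and dataset names ("tank", "tank/secure", "backup/secure", "@snap") are placeholders, not anything from a real setup:

    ```shell
    # Create an encrypted dataset on an existing pool (ZoL 0.8+).
    zfs create -o encryption=aes-256-gcm \
               -o keyformat=passphrase \
               tank/secure

    # Child datasets inherit the encryption settings automatically.
    zfs get encryption,keystatus tank/secure

    # Snapshots can be replicated without ever decrypting them on the wire:
    zfs send --raw tank/secure@snap | zfs recv backup/secure
    ```

    The raw send is what makes the "encrypted backups to an untrusted box" use case work: the receiving side stores the blocks still encrypted, without needing the key.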

    General CAVEAT: ZoL is not Illumos ZFS, since that codebase was basically killed by Oracle, which is why FreeBSD is moving to ZoL now
    Wait a sec, FreeBSD is moving to ZoL? Shouldn't they be moving to ZoB?


    • #22
      Originally posted by g3wcm2V8uqwR View Post
      Given that it only took 7.5 years to implement TRIM, I'm kinda giddy to start using ZFS now
      Well, honestly, I have been using SSDs with ZFS for a long time, and due to the way ZFS handles blocks I've never noticed any slowdowns or shortened life cycles like you do with regular filesystems, so it took that long basically because it's not as big a deal as it can be with other filesystems.

      I tried the git build on my SSD RAID 1 boot pool already, and honestly there is no performance variation at all, or any other perceivable change on the pool, even after running the tool manually; so I guess this mostly hands those blocks to the drive's own discard machinery instead of just checking and reusing them like before.

      I'll try later with a client RAIDZ2 system; maybe that system has enough I/O pressure to show a difference after a manual trim.
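      For anyone wanting to try the same thing, the TRIM commands in the 0.8 branch go roughly like this; "tank" is a placeholder pool name:

      ```shell
      # Kick off a one-shot manual TRIM of every vdev in the pool:
      zpool trim tank

      # Watch per-vdev TRIM progress:
      zpool status -t tank

      # Or have the pool discard freed blocks continuously:
      zpool set autotrim=on tank
      ```

      Manual trims run in the background and can be paused or cancelled, so running one on a live pool shouldn't be disruptive.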


      • #23
        I use ZFS in Proxmox and Ubuntu 18.04 LTS, and it runs really great


        • #24
          Well, time to buy a ton of SSDs.


          • #25
            phoronix Is there a reason my last post was unapproved?


            • #26
              Originally posted by Vistaus View Post

              Wait a sec, FreeBSD is moving to ZoL? Shouldn't they be moving to ZoB?
              Yeah, you heard that right.
              They are moving to become part of the ZoL codebase, because apparently some of the big names behind ZFS on Illumos (aka the "common Unix thing", which is supposedly an upstream for ZoL too, but in practice not that much) have announced they are migrating to Linux and ZoL for their own products.


              • #27
                Originally posted by skeevy420 View Post
                phoronix Is there a reason my last post was unapproved?
                It's random.

                vBulletin's antispam triggered on something you posted.


                • #28
                  Originally posted by Vistaus View Post

                  Very true. But I think he meant ZFS On Linux. AFAIK, that's not being used much in enterprise, if ever.
                  There are commercial products offering that, though, like Proxmox.


                  • #29
                    Originally posted by eydee View Post
                    [...] so welcome to the party of sitting 10 minutes at a black screen just to get a chance to log in [...]
                    Wow! I truly envy you.
                    I've never been able to sit in front of a blank screen for 10 minutes. I'm an average desktop user (no programming), and if 2 minutes have passed and the computer hasn't booted, I usually think "OK, something funny software-wise must have happened" or "there must be too much dust on the graphics card, it must be a random error" (it's a 2013 Dell laptop I bought on eBay) and do a hard shutdown.
                    I wonder why distros haven't found a way to let the user choose when trimming happens, or at least tell him/her: listen, mate, the next boot is going to take a bit longer because of this and that.
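                    To be fair, on conventional filesystems (ext4, XFS, etc.) most distros already schedule this with a systemd timer instead of doing it at boot, so it is inspectable and controllable; a quick sketch (this applies to the standard util-linux fstrim setup, not to ZFS pools, which use zpool trim instead):

                    ```shell
                    # See when the periodic TRIM job last ran and when it fires next:
                    systemctl list-timers fstrim.timer

                    # Run it by hand instead of waiting for the schedule:
                    sudo fstrim -av

                    # Or disable the automatic schedule entirely and trim manually:
                    sudo systemctl disable --now fstrim.timer
                    ```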
                    Last edited by horizonbrave; 30 March 2019, 08:58 PM.


                    • #30
                      Originally posted by hreindl View Post
                      defrag is 1980s tech
                      encryption belongs to the LUKS layer
                      zfs has checksumming

                      you have no clue
                      I am sorry, but if you think fragmentation is not a problem with ZFS then you have no clue. Every write operation causes fragmentation in a COW filesystem, and the tendency is for them to become heavily fragmented. That may not be too much of a problem with SSDs, but it is still at least a bit of one, because SSDs also perform better with bigger sequential transactions than with random 4K I/O. I consider the online defragmentation in Btrfs a major feature, and it's too bad it's missing in ZFS*.
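                      For comparison, the Btrfs online defrag mentioned above is basically a one-liner; the mount point is a placeholder, and note the usual caveat that defragmenting can unshare reflinked or snapshotted extents and cost disk space:

                      ```shell
                      # Recursively defragment a subvolume, rewriting extents
                      # with zstd compression as it goes:
                      btrfs filesystem defragment -r -v -czstd /mnt/data
                      ```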

                      In any case, I strongly disagree with the "<feature> belongs to the <whatever> layer" argument. Btrfs and ZFS are the actual proof to the contrary: there is a lot to gain if you merge the layers judiciously. And I'd much prefer native encryption to delegating it to a lower level; at the very least, I save the CPU time for encrypting redundancy data.

                      Having said that, I am a huge fan of ZFS because I cannot trust the higher RAID levels in Btrfs. Also the SSD caching in ZFS (SLOG and L2ARC) is better than any generic lower-level caching (adding to the previous point). But it's OK to acknowledge the shortcomings of a system. Much better than to be blind about them.
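                      For reference, attaching those cache devices is straightforward; "tank" and the /dev/nvme* paths are placeholders:

                      ```shell
                      # Add a mirrored SLOG (separate intent log) to absorb
                      # synchronous writes:
                      zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1

                      # Add an L2ARC device to extend the read cache beyond RAM:
                      zpool add tank cache /dev/nvme2n1

                      # Confirm how the devices are attached:
                      zpool status tank
                      ```

                      Mirroring the SLOG is the usual precaution, since losing an unmirrored log device at the wrong moment can drop recent synchronous writes; losing an L2ARC device, by contrast, is harmless.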

                      * I read somewhere that defragmentation and filesystem shrinking were considered outside of the scope of an enterprise filesystem because if you value your data you'll have a backup and you can always recreate your filesystem with the size and disk configuration that you want and in an unfragmented state. I don't buy this kind of reasoning - Btrfs can do both and I've used both. It's amazing what Btrfs can do. And it has saved my data at least once. It's just not as good as ZFS for my current use case.