ZFS On Linux Lands TRIM Support Ahead Of ZOL 0.8


  • Chugworth
    replied
    Originally posted by kobblestown View Post
    I am sorry, but if you think fragmentation is not a problem with ZFS then you have no clue. Every write operation causes fragmentation in a COW filesystem and the tendency is for them to become heavily fragmented. Now, that may not be too much of a problem with SSDs but it is still at least a bit of a problem because SSDs also perform better with bigger sequential transactions, not random 4k IO. I consider the online defragmentation in Btrfs a major feature and it's too bad it's missing in ZFS*.
    I don't think a defragmentation tool is much of a solution for a COW filesystem. If you have several snapshots of a file, each with different modifications, then there's no way to store the data for that file sequentially. The very nature of a COW filesystem means that every change you make is going to be fragmented. I know in Btrfs you can set a file as NOCOW, but as soon as you take a snapshot of that file the flag is rendered useless. I think ZFS took the right approach here: trying to keep a COW filesystem defragmented would be a fool's errand. You have to work with the expectation that everything is fragmented and figure out ways to speed up access to the data anyway. One of ZFS's answers to that is the ARC, its block-level cache, which really speeds up access to the most frequently used data (which is also the data that's likely to be the most fragmented). Of course, it does require a lot of memory to be effective. ZFS has its advantages and disadvantages: I think it's wonderful for data storage, but it might not be the best choice for the OS partition of a desktop computer.
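    The "favor frequently-read blocks" idea behind the ARC is easy to picture with a toy sketch. This is emphatically not ZFS's real ARC (which also keeps "ghost" lists to adapt the split between its two queues); it only illustrates balancing recency against frequency at the block level:

    ```python
    from collections import OrderedDict

    class ToyFrequencyCache:
        """Toy block cache that, ARC-like, favors blocks seen more than once."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.recent = OrderedDict()    # blocks seen once (recency queue)
            self.frequent = OrderedDict()  # blocks seen 2+ times (frequency queue)

        def get(self, block_id):
            if block_id in self.frequent:
                self.frequent.move_to_end(block_id)  # refresh in frequency queue
                return self.frequent[block_id]
            if block_id in self.recent:
                data = self.recent.pop(block_id)     # second access: promote
                self.frequent[block_id] = data
                self._evict()
                return data
            return None                              # miss: caller reads the disk

        def put(self, block_id, data):
            if block_id in self.recent or block_id in self.frequent:
                self.get(block_id)                   # already cached: just touch it
                return
            self.recent[block_id] = data
            self._evict()

        def _evict(self):
            while len(self.recent) + len(self.frequent) > self.capacity:
                victim = self.recent if self.recent else self.frequent
                victim.popitem(last=False)           # drop least-recently-used
    ```

    The useful property: a one-pass scan over lots of cold blocks (say, a backup job) only churns the recency queue, so it cannot flush the hot, repeatedly-read blocks out of the frequency queue.
    
    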



  • itoffshore
    replied
    Originally posted by Rallos Zek View Post

    LOL, not even! ZFS doesn't even have a fsck, defrag or encryption.
    I've been running a natively encrypted ZFS mirror for data on linux-hardened since 0.8-rc1 without a single issue. If you don't need deduplication you may get better performance from the GCM cipher suite, e.g. (the pool/dataset name here is just a placeholder):
    Code:
    zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/data
    It seems sensible to run the actual system itself on LUKS-encrypted Btrfs.



  • jrch2k8
    replied
    Originally posted by kobblestown View Post
    I am sorry, but if you think fragmentation is not a problem with ZFS then you have no clue. Every write operation causes fragmentation in a COW filesystem and the tendency is for them to become heavily fragmented. Now, that may not be too much of a problem with SSDs but it is still at least a bit of a problem because SSDs also perform better with bigger sequential transactions, not random 4k IO. I consider the online defragmentation in Btrfs a major feature and it's too bad it's missing in ZFS*.
    I agree with you 50%. It is a fact that CoW filesystems do fragment data, and a case can be made that they actually do so on a bigger scale than your regular table/journal-based filesystems. (This is the 50% we agree on.)

    BUT fragmentation is not an actual problem for ZFS (note: I've used ZFS since the shiny Solaris 10 days), because ZFS's allocation algorithms assume as a fact that data, metadata and checksums are always fragmented (this is where Btrfs is inferior to ZFS in my eyes and does need defrag), even if the underlying disk is actually laid out completely sequentially in practice.

    Why? Because ZFS was never designed to run on single disks but on multiple disks across a myriad of buses, and even if you buy identical hard drives it is extremely hard to get sequential allocation, because in practice all drives have different bad blocks (i.e. in practice no two drives are 100% equal).

    Also, ZFS is extremely efficient at reusing blocks instead of just filling empty ones, and it is really efficient at locating data in parallel on RAID (if your data blocks a and b are too far apart on drive 0, ZFS will simply read block a from drive 0 while reading block b from drive N). You also have several tiers of caches that can emulate sequential access for hot data, plus dedup and compression.

    Also remember that ZFS can switch block and dnode sizes on the fly, independently per pool/volume/snapshot, which would make any algorithm that is not fragmentation-aware go haywire and make your I/O crawl and beg forgiveness.

    Sure, I don't deny that in some niche cases fragmentation could be measurable on ZFS, but for most practical cases fragmentation has no real effect on ZFS, at least none that I have noticed over the years.

    Caveat: the one case where I could be wrong, and fragmentation could in fact hurt, is single-disk pools with copies=1, since there are no alternative drives/copies to parallelize anything, and depending on how old your drive is, it could be nasty.
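    The "block a from drive 0 while block b comes from drive N" point can be sketched in a few lines. This is a pure illustration, not ZFS code: the "disks" are just dicts standing in for identical mirror legs, and the seek cost is imaginary.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    # Two legs of a mirror hold identical data, so any leg can serve any block.
    disk0 = {"a": b"block-a", "b": b"block-b"}
    disk1 = dict(disk0)

    def read_block(disk, block_id):
        # A real vdev would seek and read here; fragmentation's cost is the seek.
        return disk[block_id]

    # Instead of seeking drive 0 from block "a" all the way over to block "b",
    # issue the two reads to different mirror legs in parallel.
    with ThreadPoolExecutor(max_workers=2) as pool:
        fut_a = pool.submit(read_block, disk0, "a")
        fut_b = pool.submit(read_block, disk1, "b")
        data = fut_a.result() + fut_b.result()
    ```

    With copies=1 on a single disk there is no second leg to hand the far-away block to, which is exactly the caveat above.
    
    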



  • skeevy420
    replied
    Originally posted by ypnos View Post

    The person I quoted was trying to smear systemd with this "issue". The package you are referring to in your quote needs the user/admin to explicitly enable the functionality (like all Arch packages). If other distros like Ubuntu ship this and enable it by default, it is an issue of Ubuntu, and Ubuntu alone, not systemd. This is what I tried to communicate.

    Personally, I enabled the fstrim timer on my system and it has never interfered with my system's performance.
    Trust me, I know that. My first reply in this argument was "don't blame systemd for something that would be done with any init system".

    You know, I can't say I'd blame Ubuntu for having a hardcore setup by default, if that actually is the case here. I'd rather have a noob-level distro enable safeguards by default than rely on the end user even knowing about said safeguards in order to enable them.



  • ypnos
    replied
    Originally posted by skeevy420 View Post

    systemd doesn't, but some packages do. Have that set up wrong and I can see it being an issue.
    The person I quoted was trying to smear systemd with this "issue". The package you are referring to in your quote needs the user/admin to explicitly enable the functionality (like all Arch packages). If other distros like Ubuntu ship this and enable it by default, it is an issue of Ubuntu, and Ubuntu alone, not systemd. This is what I tried to communicate.

    Personally, I enabled the fstrim timer on my system and it has never interfered with my system's performance.
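    For reference, the timer unit that util-linux ships (and that Arch packages but leaves disabled, so `systemctl enable --now fstrim.timer` turns it on) looks roughly like this:

    ```ini
    [Unit]
    Description=Discard unused blocks once a week
    Documentation=man:fstrim

    [Timer]
    OnCalendar=weekly
    AccuracySec=1h
    Persistent=true

    [Install]
    WantedBy=timers.target
    ```

    Note that `Persistent=true` is what makes a missed weekly run fire at the next boot, which is where the "slow boot after trim" complaints in this thread would come from, not from systemd itself.
    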



  • skeevy420
    replied
    Originally posted by ypnos View Post

    No, it isn't. Systemd does not come with an FSTRIM service file. Also, what eydee wrote is pretty much bullshit.
    systemd doesn't, but some packages do. Have that set up wrong and I can see it being an issue.
    Last edited by skeevy420; 01 April 2019, 08:43 AM. Reason: quote fail



  • ypnos
    replied
    Originally posted by Raka555 View Post

    Systemd's claim to fame was that it will make bootup faster ...
    No, it wasn't. It was always considered a nice-to-have byproduct.

    Originally posted by Raka555 View Post
    Now doing another thing it should not be doing.
    No, it isn't. Systemd does not come with an FSTRIM service file. Also, what eydee wrote is pretty much bullshit.
    Last edited by ypnos; 01 April 2019, 09:01 AM. Reason: quote fail



  • Vistaus
    replied
    Originally posted by starshipeleven View Post
    Yeah you heard that right. https://www.phoronix.com/scan.php?pa...D-ZFS-On-Linux
    They are moving to become part of the ZoL codebase, because apparently some of the big names behind ZFS on Illumos (aka the "common Unix thing", which is supposedly an upstream for ZoL too, but in practice isn't that much) have announced they are migrating to Linux and ZoL for their own products.
    LOL, you took my post too seriously. I was referring to the name (ZoL = ZFS on Linux, while I suggested ZoB = ZFS on BSD).



  • kobblestown
    replied
    Originally posted by hreindl View Post
    defrag is 1980s tech
    encryption belongs to the LUKS layer
    zfs has checksumming

    you have no clue
    I am sorry, but if you think fragmentation is not a problem with ZFS then you have no clue. Every write operation causes fragmentation in a COW filesystem and the tendency is for them to become heavily fragmented. Now, that may not be too much of a problem with SSDs but it is still at least a bit of a problem because SSDs also perform better with bigger sequential transactions, not random 4k IO. I consider the online defragmentation in Btrfs a major feature and it's too bad it's missing in ZFS*.

    In any case, I strongly disagree with the "<feature> belongs to the <whatever> layer" argument. Btrfs and ZFS are the actual proof to the contrary: there is a lot to gain if you merge the layers judiciously. And I'd much prefer native encryption to delegating it to a lower layer. At the very least, I save the CPU time of encrypting the redundancy data.

    Having said that, I am a huge fan of ZFS because I cannot trust the higher RAID levels in Btrfs. Also, the SSD caching in ZFS (SLOG and L2ARC) is better than any generic lower-level caching (adding to the previous point). But it's OK to acknowledge the shortcomings of a system. Much better than being blind to them.

    * I read somewhere that defragmentation and filesystem shrinking were considered outside the scope of an enterprise filesystem, because if you value your data you'll have a backup, and you can always recreate your filesystem with the size and disk configuration that you want, in an unfragmented state. I don't buy that kind of reasoning: Btrfs can do both, and I've used both. It's amazing what Btrfs can do, and it has saved my data at least once. It's just not as good as ZFS for my current use case.
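    The "save the CPU time of encrypting the redundancy data" point in numbers: with a LUKS device under each leg of a mirror, every replica is encrypted separately, whereas native encryption encrypts once and then replicates ciphertext. A back-of-the-envelope sketch (pure illustration, no real crypto, and the 10 GiB figure is made up):

    ```python
    def bytes_encrypted(payload_bytes, replicas, native):
        """How many bytes pass through the cipher for one logical write.

        native=True  : encrypt once, then replicate the ciphertext (ZFS-style).
        native=False : each mirror leg sits on its own LUKS device, so every
                       replica is encrypted independently.
        """
        return payload_bytes if native else payload_bytes * replicas

    GiB = 1024 ** 3
    print(bytes_encrypted(10 * GiB, replicas=2, native=True) // GiB)   # 10
    print(bytes_encrypted(10 * GiB, replicas=2, native=False) // GiB)  # 20
    ```

    So on a 2-way mirror the block-layer approach does twice the cipher work per logical write, and it only gets worse with more redundancy.
    
    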



  • horizonbrave
    replied
    Originally posted by eydee View Post
    [...] so welcome to the party of sitting 10 minutes at a black screen just to get a chance to log in [...]
    Wow! I truly envy you.
    I've never been able to sit in front of a blank screen for 10 minutes. I'm an average desktop user (no programming), and if 2 minutes have passed and the computer hasn't booted, I usually think "OK, something funny must have happened software-wise" or "there must be too much dust on the graphics card, it must be a random error" (it's a 2013 Dell laptop I bought on eBay) and do a hard shutdown.
    I wonder why distros haven't found a way to let the user choose when trimming is due, or at least tell him/her: listen mate, the next boot is going to take a bit longer because of this and that.
    Last edited by horizonbrave; 30 March 2019, 08:58 PM.

