ZFS is old, and that is a feature. No company will put a brand-new filesystem into production, and a hip implementation language will not persuade any admins to risk company data. ZFS, XFS, even ext4 are all well aged and solid. This is almost the only argument that matters with filesystems; everything else is more or less decoration. Although the ZFS toolset is unmatched.
TFS File-System Still Aiming To Compete With ZFS, Written In Rust
-
I just wanted to chime in to back up kebabbert and say that "ZFS needs loads of RAM" is a myth, or as us Brits like to say, utter bollocks.
IME you can use all of ZFS's features comfortably with 1GB RAM EXCEPT deduplication, which does require silly amounts of RAM and, in most cases, just isn't worth it anyway. 512MB RAM would be possible too, but I wouldn't want to try using such a machine for any real work, not even personal use.
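For a rough sense of why dedup is the exception, here is a back-of-envelope sketch. The ~320 bytes per dedup-table entry is a commonly quoted ballpark for ZFS, not an exact figure, and the function name is mine:

```rust
// Rough estimate of in-core dedup table (DDT) size: one entry per
// unique block. Entry size of ~320 bytes is a commonly quoted ballpark.
fn ddt_ram_bytes(pool_bytes: u64, block_bytes: u64, entry_bytes: u64) -> u64 {
    (pool_bytes / block_bytes) * entry_bytes
}

fn main() {
    let tib = 1u64 << 40;
    // 1 TiB of unique 128 KiB records at ~320 B/entry: ~2.5 GiB of RAM.
    assert_eq!(ddt_ram_bytes(tib, 128 * 1024, 320), 2_684_354_560);
    // Smaller records inflate this fast: 8 KiB records need ~40 GiB.
    assert_eq!(ddt_ram_bytes(tib, 8 * 1024, 320), 42_949_672_960);
}
```

That is why dedup alone blows past the "1GB is plenty" experience while everything else fits.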
I'm looking forward to trying TFS when it's ready but it won't be replacing ZFS on my NAS in the next decade.
-
Originally posted by Nille View Post
I hope they don't make the same decision as the ZFS people, where I can't add more drives to an array (e.g. 3 disks in a RAID5: if I want to add more disks, I have to create another array or rebuild everything).
The difference is in the guarantees it makes. RAIDZ is always updated as a contiguous unit: the data and the parity are written as a single transaction and are never individually modified. So a 128KB write is split across the data drives and then the parity is written. With RAID5 you would write N different chunks to the data disks, then calculate the overall parity. RAID5 allows you to update just 1 of the N blocks and then recalculate the parity, introducing the chance that you'll update the block but lose power before you get a chance to update the parity. This is the "RAID5 write hole", and ZFS solves it, but at a cost.
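To make the write hole concrete, here is a minimal sketch of XOR parity. This is illustrative only (real RAID5 rotates parity across disks and works on full sectors), but it shows how a data write that lands without its parity update silently corrupts reconstruction:

```rust
// XOR parity over a set of equal-sized data blocks.
fn parity(blocks: &[Vec<u8>]) -> Vec<u8> {
    let mut p = vec![0u8; blocks[0].len()];
    for b in blocks {
        for (i, byte) in b.iter().enumerate() {
            p[i] ^= *byte;
        }
    }
    p
}

// Rebuild a missing block from the surviving blocks plus parity.
fn reconstruct(survivors: &[Vec<u8>], p: &[u8]) -> Vec<u8> {
    let mut out = p.to_vec();
    for b in survivors {
        for (i, byte) in b.iter().enumerate() {
            out[i] ^= *byte;
        }
    }
    out
}

fn main() {
    let mut blocks = vec![vec![1u8, 2, 3], vec![4, 5, 6], vec![7, 8, 9]];
    let p = parity(&blocks);

    // Healthy case: lose block 1, rebuild it from the others plus parity.
    let survivors = vec![blocks[0].clone(), blocks[2].clone()];
    assert_eq!(reconstruct(&survivors, &p), blocks[1]);

    // Write hole: block 0 is rewritten, but power fails before the parity
    // update. Rebuilding block 1 now silently returns garbage.
    blocks[0] = vec![9u8, 9, 9];
    let survivors = vec![blocks[0].clone(), blocks[2].clone()];
    assert_ne!(reconstruct(&survivors, &p), blocks[1]);
}
```

RAIDZ avoids this state entirely by never letting data and parity be updated separately.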
ZFS also has a fixed layout for the blocks. The metadata just says 'this data block is stored at offset ####### of this vdev', and that points to the place on the RAIDZ volume. If you were to add another disk, it would make all of those offsets incorrect. And you couldn't possibly rewrite every data block on the entire system as a single transaction to update everything consistently.
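A hedged illustration of why the offsets break (simple modular arithmetic, not ZFS's actual vdev layout): if block addresses map to a (disk, stripe) position via the number of data disks, then changing that number remaps every address at once:

```rust
// Map a logical block number to (disk index, stripe index) across
// n_disks data disks. Illustrative layout, not ZFS's real one.
fn locate(block: u64, n_disks: u64) -> (u64, u64) {
    (block % n_disks, block / n_disks)
}

fn main() {
    // The same block number lands somewhere else after "adding a drive":
    assert_eq!(locate(7, 3), (1, 2)); // 3 data disks
    assert_eq!(locate(7, 4), (3, 1)); // 4 data disks
    // Every stored pointer would now resolve to the wrong place unless
    // all of them were rewritten atomically.
}
```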
It was a conscious choice during the design of ZFS, not an oversight. If you are building a storage server, you only have X drive slots in the chassis. You know this ahead of time and plan accordingly. You can expand the number of slots via a JBOD or disk shelf, and that makes it easy to add another whole array.
If for your home server you want the flexibility of just adding a small number of drives at a time, use mirrors. The total storage is a bit less, but you get the flexibility you are looking for, and you also get better IOPS. With the forthcoming 'device evacuation' feature, you'll also be able to remove mirror devices.
-
Originally posted by profoundWHALE View Post
You'll *only* need more RAM in ZFS if you want high performance/more features. There are a lot of really cool features you can take advantage of with ZFS when you have tons of RAM, like compression and cache optimizations.
-
Originally posted by starshipeleven View Post
ZFS was developed under time pressure (relatively speaking) so they had to take some shortcuts, like assuming plentiful ECC RAM and other stuff like that.
The reason was that they wanted to get it in production in a reasonable timescale, while btrfs whose goal is "doing it right" is taking a long while to get there.
ZFS was designed for the enterprise, and specifically to scale for the future. There is no requirement for ECC RAM, but it is assumed that serious storage machines will be built with ECC RAM if they want to have long uptimes, yes.
The ZFS design philosophy is simple: Design a durable, reliable file system, where you can solve any performance problem by throwing enough money at it.
The file system is safe first, performance be damned; then you can solve the performance problem, if you actually have one.
Losing data is always a problem.
-
I would be much more interested in TFS if it implemented ZFS in an on-disk-compatible way, so it could import my existing zpool with a Rust codebase. As for a different ZFS-like file system, I am not hopeful that any of them will ever compare to ZFS.
When ZFS went open source, it already represented over 100 engineer-years of effort, and its development has continued since then. ZFS is very active, and there are major new features in the pipeline sponsored by an interesting mix of open source projects like illumos, Linux, FreeBSD, and OpenSFS; companies like Delphix, Nexenta, Datto, and even Intel (which is contracted to build a new supercomputer based on ZFS); plus government agencies like LLNL (US) and FAIR (EU).
The advantage that ZFS had in the early days was the QA team at Sun. They ensured that the hard part of development, the testing, actually got done, because they got paid to do it.
With a head start of over 100 engineer-years, I just don't see how btrfs can ever catch up. Even if Oracle put 100 engineers on it full time, it would take 3 years to catch up to that 100-engineer-year figure. And Oracle is just not putting in that level of effort. btrfs was just an attempt to have an answer to ZFS. That answer was wrong.
-
Originally posted by danboid View Post
I'm looking forward to trying TFS when it's ready but it won't be replacing ZFS on my NAS in the next decade.
-
Originally posted by mmstick View Post
TFS should be ready by the end of summer. That's the goal. It should take much less effort to get TFS production-ready compared to other filesystems in existence though. At most, maybe a year after the first stable release before it's 100% solid and fully verified.
But maybe some company sees the potential in TFS and hires the author.
-
Originally posted by mmstick View Post
TFS should be ready by the end of summer. That's the goal. It should take much less effort to get TFS production-ready compared to other filesystems in existence though. At most, maybe a year after the first stable release before it's 100% solid and fully verified.
In my experience, getting things working is the straightforward part. Proper error handling ends up as two-thirds of the code, if you're doing it properly anyway. File systems have to handle data transfers that end halfway through, disks that run out of space, disks that lie about data commit, data that isn't there, data that is sometimes there. RAM errors, those are fun. RAID recovery with errors on multiple disks...
Being able to mount in degraded mode without making things worse.
File systems. So much fun.
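A minimal sketch of that point in Rust (all names here are illustrative, not from TFS or ZFS): even writing a single block properly is mostly error paths, handling short writes, interrupted calls, and full devices:

```rust
use std::io::{self, Write};

// Write an entire buffer, handling short writes and interrupted calls,
// then flush so commit failures surface here rather than later.
fn write_block<W: Write>(dev: &mut W, buf: &[u8]) -> io::Result<()> {
    let mut written = 0;
    while written < buf.len() {
        match dev.write(&buf[written..]) {
            Ok(0) => {
                // Device accepted no bytes: out of space or gone.
                return Err(io::Error::new(
                    io::ErrorKind::WriteZero,
                    "device accepted no bytes",
                ));
            }
            Ok(n) => written += n,
            Err(e) if e.kind() == io::ErrorKind::Interrupted => continue,
            Err(e) => return Err(e), // half-finished transfer: caller must recover
        }
    }
    dev.flush()
}

fn main() {
    // Stand-in for a block device; a real one also lies about commits.
    let mut fake_dev: Vec<u8> = Vec::new();
    write_block(&mut fake_dev, b"metadata+data+checksum").unwrap();
    assert_eq!(fake_dev, b"metadata+data+checksum");
}
```

And that is the easy case: it doesn't even touch torn writes, checksum verification, or degraded-mode recovery.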
-
Originally posted by Zan Lynx View Post
I'll believe it when I see it.
In my experience, getting things working is the straightforward part. Proper error handling ends up as two-thirds of the code, if you're doing it properly anyway. File systems have to handle data transfers that end halfway through, disks that run out of space, disks that lie about data commit, data that isn't there, data that is sometimes there. RAM errors, those are fun. RAID recovery with errors on multiple disks...
Being able to mount in degraded mode without making things worse.
File systems. So much fun.
On the other hand, if Rust makes writing a whole file system that easy, it would be a huge win for Rust.