A New Linux File-System Aims For Speed While Having ZFS/Btrfs-Like Features


  • jacob
    replied
    Originally posted by dibal
    I'd like to see a storage system where you set attributes (fast, super fast, normal, redundant, lazy, super redundant, etc.) on a file or directory. And if this beast needs more physical storage, it should drop me a message. It should also give some power and noise control options by spinning down physical media.

    Just dreaming.......
    Not dreaming. It already works that way in btrfs: CoW/no-CoW, compression algorithm, etc., per file or directory. I've read that they even plan per-file RAID settings, but I don't know if or when that will be supported.
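    For the curious, the per-file no-CoW bit is just the generic inode-flags ioctl that chattr +C drives under the hood. A rough C sketch (the file name is made up, and on btrfs the flag only takes effect on empty or freshly created files):

        /* build: gcc nocow.c -o nocow */
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <unistd.h>
        #include <linux/fs.h>   /* FS_IOC_GETFLAGS, FS_IOC_SETFLAGS, FS_NOCOW_FL */

        int main(void)
        {
            /* Hypothetical file; must still be empty for the flag to stick on btrfs. */
            int fd = open("vm-disk.img", O_CREAT | O_RDWR, 0644);
            if (fd < 0) { perror("open"); return 1; }

            int flags = 0;
            if (ioctl(fd, FS_IOC_GETFLAGS, &flags) < 0) { perror("GETFLAGS"); return 1; }

            flags |= FS_NOCOW_FL;            /* same effect as `chattr +C vm-disk.img` */
            if (ioctl(fd, FS_IOC_SETFLAGS, &flags) < 0) { perror("SETFLAGS"); return 1; }

            close(fd);
            return 0;
        }

    Per-file or per-directory compression is set in a similarly mundane way, e.g. "btrfs property set <path> compression lzo", so dibal's wish list is closer to reality than it sounds.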



  • dibal
    replied
    I'd like to see a storage system where you set attributes (fast, super fast, normal, redundant, lazy, super redundant, etc.) on a file or directory. And if this beast needs more physical storage, it should drop me a message. It should also give some power and noise control options by spinning down physical media.

    Just dreaming.......



  • SystemCrasher
    replied
    Zlib transparent compression
    If they are serious about SPEED, they should ditch this zlib crap ASAP and use LZO or LZ4 instead. Zlib is about everything but speed. OTOH, LZ4 or LZO can even exceed drive read or write throughput, making things actually FASTER than the drive could manage on its own.
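    For a rough sense of the numbers, here is a quick single-thread throughput check against the liblz4 C API (assumes a reasonably recent liblz4; the 64 MiB synthetic buffer is purely illustrative, and results vary with CPU and data):

        /* build: gcc lz4bench.c -o lz4bench -llz4 */
        #define _POSIX_C_SOURCE 199309L
        #include <lz4.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        int main(void)
        {
            const int src_size = 64 * 1024 * 1024;              /* 64 MiB of test input */
            char *src = malloc(src_size);
            char *dst = malloc(LZ4_compressBound(src_size));
            if (!src || !dst) return 1;

            /* Mildly compressible, non-constant input. */
            for (int i = 0; i < src_size; i++)
                src[i] = (char)("bcachefs "[i % 9] ^ (i >> 12));

            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            int out = LZ4_compress_default(src, dst, src_size, LZ4_compressBound(src_size));
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            printf("%d -> %d bytes in %.3f s (%.0f MiB/s)\n",
                   src_size, out, secs, src_size / (1024.0 * 1024.0) / secs);

            free(src);
            free(dst);
            return 0;
        }

    On typical hardware this lands in the hundreds of MiB/s or more per core, which is the point: the compressor is unlikely to be the bottleneck in front of a spinning disk.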



  • jacob
    replied
    Originally posted by kebabbert
    "...match ext4 and xfs on performance and reliability, but with the features of btrfs/zfs..."

    This is retarded. The only point of using ZFS is its reliability; nothing else comes close. There is research on ZFS showing that it is the most reliable data-protecting filesystem out there today. Everything else, snapshots, scalability to petabyte RAIDs, performance, etc., is just not important. The main point of a filesystem is to be reliable. If it cannot protect your data against bit rot and other forms of data corruption, does it matter if it is very fast? What do you choose, a fast and unreliable filesystem or a slow but reliable one? I don't care how fast a filesystem is, I want my data protected. There is research on ext4 and xfs showing they are unreliable, and there is research showing that ZFS is reliable. The author got it backwards: ZFS is the only filesystem proven reliable by researchers; ext4 and xfs are not:
    https://en.wikipedia.org/wiki/ZFS#Data_integrity
    That's nonsense. Designing an uber-reliable filesystem is not any harder than designing a super-fast filesystem. The challenge is to design a filesystem that is reliable AND fast. Obviously that always requires a delicate compromise, but such is the Holy Grail. XFS, ext4, UFS & co have served us very well for years, so dismissing them is silly, but yes, now that storing terabytes of data has become common, we need something more reliable. The answer is obviously *not* ZFS, because in many scenarios its performance is indeed far less than acceptable. As for btrfs, I won't risk any opinion, but I'm glad that there are new developments in this area.

    PS: I guess the article was just poorly worded; the author presumably didn't suggest that ext4 or xfs are more reliable by design than zfs or btrfs (they are not), but simply that the new filesystem would become as ubiquitous and well tried as ext4 or xfs, so we could expect it to be just as free from bugs.



  • zamadatix
    replied
    Originally posted by kebabbert
    "...match ext4 and xfs on performance and reliability, but with the features of btrfs/zfs..."

    This is retarded. The only point of using ZFS is its reliability; nothing else comes close. There is research on ZFS showing that it ... ZFS is the only filesystem proven reliable by researchers; ext4 and xfs are not:
    https://en.wikipedia.org/wiki/ZFS#Data_integrity

    You can't compare two things if you only know one. Also, I'm not sure why you don't consider reliability a feature, but it would help with your confusion if you did. Bcachefs was just announced and is still in an early, testing-level release phase, so of course you're not going to find it in much filesystem reliability research, if any at all. But if you had actually read about it before complaining that the author is wrong, you'd find Bcachefs either already does, or plans to support, all of the things you linked as the reasons ZFS is so reliable. Its goal is just to do them faster.



  • stiiixy
    replied
    Originally posted by kebabbert
    "...match ext4 and xfs on performance and reliability, but with the features of btrfs/zfs..."

    This is retarded. The only point of using ZFS is its reliability; nothing else comes close. There is research on ZFS showing that it ... on ext4 and xfs showing they are unreliable, and there is research showing that ZFS is reliable. The author got it backwards: ZFS is the only filesystem proven reliable by researchers; ext4 and xfs are not:
    https://en.wikipedia.org/wiki/ZFS#Data_integrity
    Horses for courses, mate. I have several scenarios where data loss is acceptable, and I choose my filesystems accordingly. For example, on my main machine, where I pound the crap out of every resource simultaneously, I want quick response and delivery of data; my working data is backed up and I can afford the downtime should something occur. Same goes for gaming: I want the FASTEST POSSIBLE READ SPEEDS, and I couldn't care less if a file gets corrupted. I can simply copy it back from the archives, which, funnily enough, are backed up on servers running other filesystems designed for the archival/storage task.

    This Bcachefs seems to provide a bunch of features naturally through the progression of an actively developed tool which, funnily enough, turned out to be pretty much a filesystem anyway (the point of this entire article, which I think you missed). If minimal resources go into making a new FS that can progress alongside its parent tool, i.e. bcache, and if the new FS shows promise, why would you not consider using it? Theoretically, you could bin a lot of old filesystems not up to the tasks the new one is, and put devs to better use elsewhere. Should they want to, of course.

    It's an interesting concept, not just on technical merits, but from an organisational and common-sense (which is all too uncommon!) perspective as well. Rather than forking, or NIHS, these devs are co-developing two tools and it's not costing them two projects' worth of resources.



  • kebabbert
    replied
    "...match ext4 and xfs on performance and reliability, but with the features of btrfs/zfs..."

    This is retarded. The only point of using ZFS is its reliability; nothing else comes close. There is research on ZFS showing that it is the most reliable data-protecting filesystem out there today. Everything else, snapshots, scalability to petabyte RAIDs, performance, etc., is just not important. The main point of a filesystem is to be reliable. If it cannot protect your data against bit rot and other forms of data corruption, does it matter if it is very fast? What do you choose, a fast and unreliable filesystem or a slow but reliable one? I don't care how fast a filesystem is, I want my data protected. There is research on ext4 and xfs showing they are unreliable, and there is research showing that ZFS is reliable. The author got it backwards: ZFS is the only filesystem proven reliable by researchers; ext4 and xfs are not:
    https://en.wikipedia.org/wiki/ZFS#Data_integrity



  • profoundWHALE
    replied
    The difference in this case is that even if no one uses this as a normal filesystem, it still wouldn't have lost its purpose.



  • waxhead
    replied
    I actually like that there is yet another ZFS/btrfs-style filesystem out there. For me, the most important things are reliability, robustness and redundancy. I hope bcachefs will consider adding the 6x-parity work posted for btrfs a few years ago by the SnapRAID author. I think some competition from another filesystem can be good for btrfs too.



  • TumultuousUnicorn
    replied
    Aims for speed but very slow...
    F2FS aims for speed on SSDs and is faster than other filesystems there.

