Apple Designs New File-System To Succeed HFS+


  • name99
    replied
    Originally posted by peace View Post
    My theory is that Apple wants better support for distributing/syncing data in their FS. They added in iOS 10 the ability to view files across many devices. Maybe they want this support at the FS level.

    distributing/syncing data across network-connected devices isn't something supported in btrfs or ZFS, if I remember correctly.
    Exactly. This is just one example of the sort of thing they probably want to add.
    Right now they synchronize files through APIs (using things like NSFileCoordinator) because the UNIX file-locking primitives are so utterly broken and useless. I expect that, along with everything else they're doing (as described in my comment above), they're adding some sort of file/range locking facility at the OS level that actually WORKS, even if you do have to augment the standard UNIX system calls to access it.

    Leave a comment:


  • name99
    replied
    Originally posted by carewolf View Post
    Reinventing the wheel, again..
    Did you say that when IBM created JFS? When SGI created XFS? When Sun created ZFS? When Samsung created F2FS?

    Did it ever occur to you that the file system is the foundation of everything an OS does, and that the ability to UNILATERALLY change the file system is thus extraordinarily important to an OS vendor? Consider how many changes Apple has introduced over the last 15 years that have been more or less possible because they controlled the entire file stack:
    Adding journaling. Spotlight and general indexing. Various metadata, including security-relevant metadata and ACLs. Time Machine. File versioning. Compressed files. Fusion. And so on.

    To insist that Apple use an "open" file system, and that that file system will absolutely support their needs for the next 25+ years, is to claim that no changes of similar magnitude can be expected, changes which will require at least some monkeying around with the file system.

    A file system is not just the disk layout (the part that is [mostly] immutable, though even that can have new features added as long as older OS code is blind to them, and JHFS+ has done that), it is also the implementation code. And that implementation code may occasionally need to change substantially (for features like Fusion). There's very little advantage to Apple in tying themselves to either an existing disk layout OR to an existing source base that they cannot rapidly and unilaterally change.

    Beyond all this, there's an element of mass stupidity to these supposedly technical complaints.
    What has been the biggest change in storage over the past ten years? Easy --- the use of SSDs. But SSDs can be used acceptably as fast hard drives, with hard-drive optimized file systems, because the gap in speed between RAM and flash is still so large.
    Now what's going to be the biggest change in storage over the NEXT ten years? Also easy --- the use of persistent RAM (technologies like Optane). But persistent RAM REQUIRES
    (a) new usage models that enforce storage ordering via CPU cache-control instructions (as opposed to how storage ordering was done with SSDs and HDs)
    (b) new models for both how the file system is laid out on "disk", and how it is accessed --- basically you want something that looks like, and with an API like, an in-memory database.

    Don't you think Apple is perfectly well aware of these issues? Hell, the newest version of the ARMv8 ISA has instructions added for precisely this purpose, and Apple is no doubt implementing them on its next CPU or two right now. So WTF would Apple switch to ZFS or ext4 or btrfs or some other file system optimized for storage circa 2000, or even F2FS or something optimized for storage circa 2010, when they have the chance to introduce something optimized for storage circa 2020?

    Leave a comment:


  • starshipeleven
    replied
    Originally posted by pal666 View Post
    brainfuck is also feature-complete; that does not make it usable.
    brainfuck is designed for research or sado/masochism purposes; if you are not using it for research or to inflict pain on people, you are using it wrong.
    Is a hammer usable as a screwdriver? No. You are using it wrong.

    f2fs has no raid at all, so obviously raid is more usable in btrfs than in f2fs. Plus assorted hiccups.
    is there something stopping me from using mdadm and LVM (or even hardware raid, for that matter) on f2fs?

    Since f2fs does not have checksumming in any shape or form, there is no logical reason for it to implement RAID or volume management at the filesystem level, and it can live happily with letting the usual suspects deal with that.

    btrfs (and also ZFS for that matter) must implement RAID and volume management at the filesystem level because it is checksumming things, and letting mdadm and LVM work on it would screw up checksumming.

    Leave a comment:


  • pal666
    replied
    Originally posted by alien View Post
    btrfs is only usable if you have plenty of space to keep enough of it empty.
    still more usable than f2fs. for example i use it on 3 drives; f2fs's raid1 support is unusable

    Leave a comment:


  • pal666
    replied
    Originally posted by starshipeleven View Post
    btrfs has the free space issue (due to it being a CoW filesystem; they still have not even placed some arbitrary "reserved space" limits like there are, for other reasons, on ext2/3/4) and the fact that RAID modes beyond RAID1 aren't working (both big issues). Plus assorted hiccups.

    f2fs is mostly feature-complete and relatively stable afaik.
    brainfuck is also feature-complete; that does not make it usable. f2fs has no raid at all, so obviously raid is more usable in btrfs than in f2fs. Plus assorted hiccups.

    Leave a comment:


  • pal666
    replied
    Originally posted by liam View Post
    I'd say that apple, more often than not, deliver well designed products that are pretty forward looking.
    i'd say that apple, more often than not, just copies features from android

    Leave a comment:


  • balperi
    replied
    good news

    Leave a comment:


  • starshipeleven
    replied
    Originally posted by liam View Post
    I'm not certain about this. You might be right, but I'm genuinely not sure.
    IMHO, the massive layer-violating nature of btrfs means that it has to tailor itself to linux-isms, rather than to well-defined (even if not unchanging) system-internal interfaces.
    ZFS, aiui, was a very special case. It was written with a compatibility layer (http://open-zfs.org/wiki/Platform_co...0on.C2.A0Linux) for, at least, a large amount of its code. I THINK this is why zfs, on linux, doesn't have a reclaimable page cache.
    Well, in the page you linked they talked of the modifications they made to ZFS to make it portable. "They" is "the people of the open-zfs project", not the original devs nor Oracle.

    You can also see how they said their ZFS on OSX is mostly a copy-paste of the linux port with a bunch of minor wrappers and things, so what works on linux can be adapted relatively easily.

    Then again it's a wiki so it's probably 87% wrong and lacks 271% of information.

    A better example would be to look at the recent, partial btrfs port to the Windows environment. I haven't looked at that at all, but how they managed it would be very demonstrative of the difficulties.
    If their readme is telling the truth, it's a complete rewrite (https://github.com/maharmstone/btrfs), so I don't think it will help much.

    I also agree with your assessment of our cloud-based future. That's largely why I think that btrfs is simply here too late. The future, imho, is these clustering filesystems (ceph, iirc, has even expressed scalability concerns with the vfs/io interface) which store/distribute/checksum our data in the cloud and scale across datacenters.
    well, I think there is, and will remain, quite a bit of grey area between single-disk consumer devices <---> bigass SANs.

    Apart from the usual server use where a SAN is beyond overkill, the home NAS market, for example, has simply exploded in recent years, and btrfs is VERY nice there. Netgear already uses it in their NAS lines. Most commercial NAS devices offer easy setups similar to the usual cloud providers'.

    I just read your post above. I'm surprised they only started this in 2014, but zfs, with its far larger feature set, took only a year or two longer.
    Let me remind you that designing something that isn't doing anything particularly new (there are already implementations of all its features in various other filesystems, including open-source ones) is easier than dashing into the unknown.

    These devs had the luxury of doing "lessons-learned" design, seeing what other projects did and what went wrong. Not saying they copied, saying they learned from others' mistakes.

    Also, as I said, making a filesystem that tackles neither RAID nor checksumming is very much easy mode.


    Leave a comment:


  • liam
    replied
    Originally posted by fuzz View Post

    I haven't done much reading on it, but btrfs appears to be the recommended filesystem for Ceph clusters. Don't they go hand-in-hand?
    Depends on the deployment.
    From what I recall, CERN deploys ceph with xfs.
    The ceph devs currently rely on btrfs for certain features, but still suggest xfs.
    Cephfs, which presents a fs interface to the ceph store, still isn't production-ready.

    Leave a comment:


  • fuzz
    replied
    Originally posted by liam View Post
    I also agree with your assessment of our cloud-based future. That's largely why I think that btrfs is simply here too late. The future, imho, is these clustering filesystems (ceph, iirc, has even expressed scalability concerns with the vfs/io interface) which store/distribute/checksum our data in the cloud and scale across datacenters.
    I haven't done much reading on it, but btrfs appears to be the recommended filesystem for Ceph clusters. Don't they go hand-in-hand?

    Leave a comment:
