TFS File-System Still Aiming To Compete With ZFS, Written In Rust


  • #21
    Originally posted by jacob View Post

    Yes, Multics for example was in PL/I etc. None of these are what I would call mainstream OSes, though. Windows uses C++ in large parts of its userland, but the NT kernel is AFAIK pure C (plus assembly, obviously).
    VMS was originally written in a language called BLISS ( http://www.cs.tufts.edu/~nr/cs257/ar...nder/bliss.pdf ) which was a mainstream OS for its time ( https://en.wikipedia.org/wiki/VAX ). Now if we define mainstream to mean the PC market... well then I suppose you're right :P



    • #22
      Originally posted by Luke_Wolf View Post

      VMS was originally written in a language called BLISS ( http://www.cs.tufts.edu/~nr/cs257/ar...nder/bliss.pdf ) which was a mainstream OS for its time ( https://en.wikipedia.org/wiki/VAX ). Now if we define mainstream to mean the PC market... well then I suppose you're right :P
      Interesting, for some reason I always thought that VMS was also written in C. But now that you mention it, there was also AmigaOS, which was based on TripOS and was originally written in BCPL.



      • #23
        I never imagined how much bullshit I would read about ZFS and programming in general from Rust apologists. It surprises me every time.



        • #24
          Ext4 is the best file system: it's stable and does its job perfectly. Next-gen filesystems are too complicated, try to do the job of other layers (like RAID), and as a result are unstable. Stability is the main goal and requirement for a filesystem; without it, a filesystem is useless.



          • #25
            Originally posted by jacob View Post

            Interesting, for some reason I always thought that VMS was also written in C. But now that you mention it, there was also AmigaOS, which was based on TripOS and was originally written in BCPL.
            I love learning about these little niggly bits of computing history; there's so much greater context that everyone misses out on when they just buy into the standard Apple-Microsoft-Linux exclusive view of history, which frankly is just plain wrong. To address the most recent poster, LEW21: ext4 is, interestingly enough, an incompatible reimplementation of the UFS filesystem that all of the Unixes used and had mutually incompatible versions of. More pedantically, ext is based on the Minix File System, which is a simplified version of UFS, so ext effectively re-extended it back out. An interesting question I've yet to find the answer to is why this was done rather than just importing BSD's UFS and using that to make their own incompatible fork like everyone else. Seems kind of odd to me.



            • #26
              Am I the only one wondering how one writes a file system in a programming language? You can use a programming language to write drivers for a file system, but the file system itself is just a way of organizing data on a disk.



              • #27
                Originally posted by Nille View Post
                I hope they don't make the same decision as the ZFS people, where I can't add more drives to an array (e.g. 3 disks in a RAID 5: if I want to add more disks, I have to create another array or rebuild everything).
                This is not correct. A ZFS zpool is built up from one or more vdevs. Each vdev is a group of disks (or files, etc.) and corresponds to RAID 5, RAID 6, etc. So if you have, say, 4 disks in your zpool configured as RAID 5, you can add another RAID 5 vdev to the zpool. Now you have two RAID 5 groups in your storage zpool, with 8 disks in total. Large hardware RAIDs are in fact built up the same way: several small RAID 6 groups combined. You want to avoid having one large RAID 6 with 50 disks; instead you should have 5 RAID 6 groups, each consisting of 10 disks. This is also how ZFS works.
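                A toy model of that structure (hypothetical helper names, not actual ZFS code) shows why you grow a pool by adding whole vdevs rather than enlarging one:

```python
# Conceptual sketch: a zpool modeled as a list of vdevs. Capacity grows
# by appending a whole new vdev, not by adding disks to an existing one.

def raidz1_capacity(disks, disk_tb):
    """Usable TB of a raidz1 (RAID-5-like) vdev: one disk's worth goes to parity."""
    return (disks - 1) * disk_tb

pool = [raidz1_capacity(4, 2)]      # one 4x2TB raidz1 vdev -> 6 TB usable
pool.append(raidz1_capacity(4, 2))  # add a second 4-disk raidz1 vdev
print(sum(pool))                    # 12 (TB usable across the whole zpool)
```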


                Originally posted by starshipeleven View Post
                ZFS was developed under time pressure (relatively speaking) so they had to take some shortcuts like assuming plentiful ECC RAM and other stuff like that.

                The reason was that they wanted to get it in production in a reasonable timescale, while btrfs whose goal is "doing it right" is taking a long while to get there.
                The claim about time pressure on ZFS is not correct. Where did you read that? There are no links anywhere about time pressure forcing the ZFS devs to take shortcuts. I followed ZFS before it was released and I know a lot about it; no ZFS dev ever talked about time pressure forcing them to take shortcuts. That is bogus, false information. Wikipedia says that ZFS development started in 2001 and that the first code was incorporated into Solaris in 2005.

                If you read the ZFS history as told by the devs, you would know that the ZFS developers wanted to be future proof in their design decisions. As they aimed for large servers, they thought about RAM usage, but ultimately settled on "when we ship this, all servers will have at least 1 GB of RAM". All Sun servers also use ECC RAM. And because they did not have to optimize for very small RAM, they could produce much cleaner and more stable code, they said.

                Also, the devs considered whether to use a 64-bit or a 128-bit filesystem. As they said: today (in 2001) we max out at something like 1 petabyte RAIDs, and 64-bit filesystems max out at 32(?) petabytes, which is just a couple of doublings away. Say we double our storage capacity every second year (Moore's law); then in another 16 years or so we will have 32-petabyte RAIDs, which means 64-bit filesystems will not be enough. A 128-bit filesystem means that no one can ever fill up a zpool. In fact, filling up one 128-bit zpool is impossible: if you use 30 TB hard disks, you need more hard disks than the entire Earth weighs, something like 2-3 Earths. And you would need to move so many electrons to store that much data in a 128-bit filesystem that humankind has never produced that much energy. (I don't remember the exact numbers, but the correct calculations are something like this; maybe it is only 2.5 Earths?)
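                The doubling argument can be sanity-checked with quick arithmetic on raw byte-address limits (2^64 vs 2^128; the poster's exact petabyte figures may be based on a different unit, such as block counts):

```python
# Capacity limits for 64-bit vs 128-bit byte addressing.
EB = 10**18  # one exabyte in bytes

limit_64 = 2**64    # max bytes addressable with 64-bit offsets
limit_128 = 2**128

print(limit_64 // EB)  # 18 -> roughly 18.4 EB for a 64-bit limit

# Start at 1 PB and double every second year (the Moore's-law argument):
capacity, years = 10**15, 0
while capacity < limit_64:
    capacity *= 2
    years += 2
print(years)  # 30 -> the 64-bit ceiling is only a few decades out
```

Whatever the exact unit, the shape of the argument holds: a 64-bit limit is a small, fixed number of doublings away from 2001-era capacities, while 2^128 is the square of 2^64 and out of reach of any physical deployment.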

                BTRFS is 64-bit; the BTRFS devs did not think this through as well as the ZFS devs did when they copied ZFS. They copied ZFS without understanding why ZFS made its unusual design decisions. For instance, ZFS is monolithic, which means ZFS can repair all data with checksums (this is impossible with a layered approach: filesystem talking to a RAID layer talking to a volume manager talking to...), so Linux developers mocked ZFS and called it a "rampant layering violation". Linux kernel devs never understood why ZFS is one monolithic layer, and they thought ZFS was bad design:
                https://arstechnica.com/staff/2007/0...ring-syndrome/
                The funny thing is, Linux devs think ZFS is badly designed, yet BTRFS is also monolithic. Why is BTRFS monolithic when the Linux devs had the chance to design their ZFS copy "correctly"? Answer: they did not know why ZFS is designed the way it is (monolithic, 128-bit, ECC, etc.) and just blindly copied it. Now BTRFS has problems because they did not understand the ZFS design. In the future, when everybody has 32-petabyte RAIDs, BTRFS will need to be redesigned again. Changing 64-bit to 128-bit is a major undertaking; you cannot just recompile BTRFS "with 128 bit", you need to change all the data structures that are laid out as 64 bits. In effect, you change the entire filesystem. The BTRFS devs do not understand ZFS.

                Read some more ZFS history as told by the main architects and you will understand why ZFS looks the way it does, and why the BTRFS devs have not understood any of it; BTRFS has major limitations. That is why history shows us that BTRFS has lots of problems: it is not stable, it is slow, and it does not scale to many petabytes. The IBM Sequoia supercomputer has a 30-ish(?) petabyte ZFS storage solution with Lustre. Or is it 50-ish petabytes? BTRFS could never handle that. When you copy something advanced without understanding it, you get problems. BTRFS still has problems today, after 10 years or so of development. Stop copying others and do it right from the beginning.

                And again, please stop spreading false information about ZFS (as you have done in the past) by saying that ZFS requires large amounts of RAM. I have told you several times that ZFS runs on servers with 1 GB of RAM and even less. I have also posted links where people run ZFS on 512 MB computers. Maybe this time you will accept that your understanding of ZFS is wrong? In fact, in one discussion thread, someone compiled ZFS for a 32 MB RAM computer and ran it fine.

                ZFS on raspberry pi, 512 MB RAM:
                https://news.ycombinator.com/item?id=11077752



                • #28
                  Originally posted by bug77 View Post
                  Am I the only one wondering how one writes a file system in a programming language? You can use a programming language to write drivers for a file system, but the file system itself is just a way of organizing data on a disk.
                  And a linked list is just a way of organizing data in memory?

                  Are you seriously trying to argue that unambiguously describing how data is organized in any given data structure and how it is transferred to and from its underlying storage media (including bookkeeping etc.) is not done via code?

                  Or am I just misunderstanding you? If I am, I'm actually quite curious about what you really mean so feel free to enlighten me.



                  • #29
                    Originally posted by ermo View Post

                    And a linked list is just a way of organizing data in memory?

                    Are you seriously trying to argue that unambiguously describing how data is organized in any given data structure and how it is transferred to and from its underlying storage media (including bookkeeping etc.) is not done via code?

                    Or am I just misunderstanding you? If I am, I'm actually quite curious about what you really mean so feel free to enlighten me.
                    Well no, it's not done via code; it's done via a specification. The implementation of that specification is where the code steps in.
                    I.e. I could describe a linked list to a student without ever showing a single line of code.
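                    The distinction can be made concrete with the linked-list example from the posts above: the specification (each node holds a value and a reference to the next node) is language-independent, while any implementation of it is code. A minimal Python sketch:

```python
class Node:
    """One node of a singly linked list: a value plus a link to the next node."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

# Implementing the spec: build the list 1 -> 2 -> 3.
head = Node(1, Node(2, Node(3)))

# Walking the list relies only on what the spec guarantees.
values = []
node = head
while node is not None:
    values.append(node.value)
    node = node.next
print(values)  # [1, 2, 3]
```

The same split applies to filesystems: the on-disk layout is the specification, and drivers (such as a kernel module for that layout) are implementations of it.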



                    • #30
                      You'll *only* need more RAM with ZFS if you want higher performance or more features. There are a lot of really cool features you can take advantage of in ZFS when you have tons of RAM, like compression and cache optimizations.

