Reiser4 & ZFS Get Updated For The Linux 4.4 Kernel


  • #11
    Originally posted by cjcox View Post
While true, both are the workings of the genius madman Hans Reiser. So you could say Reiser4 was an attempt to do things better.

With that said, why are there so many "killer" code writers or Linux advocates? Sad. Life for them has no worth. Time to pray for Linux kernel devs, application devs, and Linux leaders and aficionados.
Although I doubt anyone has taken a census, Hans seems to have been an outlier. How many others are there? Are you including suicides and/or victims of homicide like Ian Murdock? His case is somewhat special in that no one is really sure which it was, which is remarkably similar to Schrödinger's cat being alive or dead, except that here the question is the precise nature of the cause of death. The only thing that seems to be confirmed is that he was declared dead.



    • #12
      Originally posted by cjcox View Post

While true, both are the workings of the genius madman Hans Reiser. So you could say Reiser4 was an attempt to do things better.

With that said, why are there so many "killer" code writers or Linux advocates? Sad. Life for them has no worth. Time to pray for Linux kernel devs, application devs, and Linux leaders and aficionados.
I wouldn't call him a madman. In any case, I think there are far fewer criminals among that lot than in the general population, but perhaps we can fix that by introducing some diversity?



      • #13
It could be interesting to see a ZFS vs Btrfs battle, as well as other things for reference & comparison.

ryao from a user's standpoint, mainlining is good because:
1) It works out of the box. What could be worse than a filesystem not being immediately accessible when booting from a typical Linux live stick, etc.?
2) It is properly integrated with the rest of the kernel subsystems. Even btrfs caused some uproar over certain things, but finally made it in; it does some things in really custom ways, and for good reason. That is not the case for ZFS.
3) There are plenty of capable people around it. Btrfs has seen quite a lot of use, and quite a few bugs have been reported and ironed out. Over time it has become quite a good thing. Let's see if the ZFS team can do better than that.
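The "works out of the box" point can be illustrated with a small shell sketch. On Linux, /proc/filesystems lists the filesystem types the running kernel already knows; the zfs-specific messages below are illustrative assumptions, not output from any real tool:

```shell
# Minimal sketch, assuming a Linux system with /proc mounted.
# A filesystem listed in /proc/filesystems is usable immediately;
# anything else (e.g. an out-of-tree DKMS build) must be loaded first.
fs_known() {
    grep -qw "$1" /proc/filesystems
}

if fs_known zfs; then
    echo "zfs is already registered with the running kernel"
else
    echo "zfs needs its out-of-tree module loaded (e.g. via DKMS) first"
fi
```

On a stock live stick the else branch is the typical outcome for ZFS, which is exactly the accessibility problem point 1 complains about.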

        Being mainline just is not desirable to any kernel filesystem driver project that
...that does not give a fsck about widespread usage, usability, and other silly crap. So it is going to be a second-class citizen, like Reiser4. I.e. those who really want it will use it, sure, but it will make little sense for anyone else. And this talk about the lack of support for older kernel versions means ZoL is also going to be a nightmare in real-world production environments. When you run things in production, you do not want a brand-new uber-mega kernel or anything like that. Things should not fall apart in the first place, and if they have to, it must happen in a well-defined timeframe, when staff can handle it in an adequate manner without interrupting service. That is how the policy of sticking to frozen versions and only fixing absolutely necessary things appears: it causes less breakage. You see, I've seen a couple of cases where a new kernel version simply refused to boot on some machines, not to mention smaller, "less critical" fallouts. This was attributed to brand-new breaking changes in new kernels, and it shows why backports can make sense in production. Yep, it sucks to spend time on old code that is headed for the trash bin. But if your critical machine fails to reboot after an upgrade, it sucks even more. And what you've told us basically reads as "ZFS is doomed to be troublesome in production". Thanks for the information; it will really help me in planning quite a few deployments.
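The frozen-versions policy described here is often implemented with a package pin; a minimal sketch for Debian/Ubuntu-style systems (the file name and version pattern are hypothetical examples, not from the post):

```
# /etc/apt/preferences.d/hold-kernel  (hypothetical file name)
# Keep the kernel image series fixed until a planned maintenance
# window, while the rest of the system upgrades normally.
Package: linux-image-*
Pin: version 4.2.*
Pin-Priority: 1001
```

A priority above 1000 keeps the pinned series installed even if newer versions appear in the repositories, so kernel changes only happen when an admin deliberately edits the pin.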



        • #14
          Originally posted by SystemCrasher View Post
It could be interesting to see a ZFS vs Btrfs battle, as well as other things for reference & comparison.

ryao from a user's standpoint, mainlining is good because:
1) It works out of the box. What could be worse than a filesystem not being immediately accessible when booting from a typical Linux live stick, etc.?
2) It is properly integrated with the rest of the kernel subsystems. Even btrfs caused some uproar over certain things, but finally made it in; it does some things in really custom ways, and for good reason. That is not the case for ZFS.
3) There are plenty of capable people around it. Btrfs has seen quite a lot of use, and quite a few bugs have been reported and ironed out. Over time it has become quite a good thing. Let's see if the ZFS team can do better than that.
1. If the distribution ships it, it will already be in the box. The mainline kernel need not be involved.
          2. It already is well integrated. Being mainline would not improve much upon that.
          3. There are numerous capable people around ZFS and they are on more platforms than just Linux. Many bugs have been ironed out of the Linux port because of it.

          Originally posted by SystemCrasher View Post
...that does not give a fsck about widespread usage, usability, and other silly crap. So it is going to be a second-class citizen, like Reiser4. I.e. those who really want it will use it, sure, but it will make little sense for anyone else. And this talk about the lack of support for older kernel versions means ZoL is also going to be a nightmare in real-world production environments. When you run things in production, you do not want a brand-new uber-mega kernel or anything like that. Things should not fall apart in the first place, and if they have to, it must happen in a well-defined timeframe, when staff can handle it in an adequate manner without interrupting service. That is how the policy of sticking to frozen versions and only fixing absolutely necessary things appears: it causes less breakage. You see, I've seen a couple of cases where a new kernel version simply refused to boot on some machines, not to mention smaller, "less critical" fallouts. This was attributed to brand-new breaking changes in new kernels, and it shows why backports can make sense in production. Yep, it sucks to spend time on old code that is headed for the trash bin. But if your critical machine fails to reboot after an upgrade, it sucks even more.
ZFS is fairly widespread already and its user base is growing steadily on Linux. I agree about not running a "brand-new uber-mega kernel or something" in production environments, and I think being in-tree would require exactly that to get the latest fixes. There is no way that a third-party team is going to do a better job of backporting fixes than the main ZoL developers. We are starting to maintain stable branches. If you want to minimize changes, run one of them. Do not expect a distribution team to do a better job of backporting fixes.

          Originally posted by SystemCrasher View Post
And what you've told us basically reads as "ZFS is doomed to be troublesome in production". Thanks for the information; it will really help me in planning quite a few deployments.
What I am reading is that you would rather run filesystems like btrfs, whose developers say you should always run the latest kernel because of critical bugs that risk data integrity in older releases, while at the same time you think you should not run the latest kernel, even though the developers of in-tree filesystems (particularly btrfs; I have not checked the others) do not take charge of getting bugs fixed for users after they reach mainline. That makes no sense to me.

          If you are dead set on running old versions of code that have not necessarily been patched with the latest runtime fixes, feel free to run an old ZoL release branch and never get updates for that branch. Backporting to release branches is a thing we are starting to do now. At present, old ZoL bugs are typically problems for uptime, not data integrity. If you want less than ideal uptime, go for it.

          If my admission that going mainline would hurt code quality upsets you enough that you would sacrifice both your system's data integrity and your ability to get updates backported by the filesystem's own developers, that is your prerogative. I am not going to push for something harmful for the userbase to keep one guy from running a filesystem that does not do as good a job of protecting data.
          Last edited by ryao; 16 January 2016, 02:09 PM. Reason: Clarify that ZoL is starting to do its old backports to its own release branches.



          • #15
            Originally posted by ryao View Post
* If the distribution ships it, it will already be in the box. The mainline kernel need not be involved.
I've seen how that performs in *buntus with proprietary GPU drivers. You upgrade the distro version. Reboot. Ka-booooooom! Black screen. Twice as much fun if you lose the filesystem, eh?

            * It already is well integrated. Being mainline would not improve much upon that.
I have a different experience here, sorry. When a mainline kernel is released, it undergoes at least some testing with all major parts involved. That means one nice thing: if I upgrade the whole OS, the kernel version is upgraded by the package manager, and that kernel, its built-in features, and their interactions are going to be "correct". The same cannot be taken for granted for out-of-tree things.

            There are numerous capable people around ZFS and they are on more platforms than just Linux. Many bugs have been ironed out of the Linux port because of it.
From a realistic standpoint, I'm going to use Linux in these cases, and it is not as if I need to transfer multi-disk pools across OSes. So it is "good to have", but hardly a major advantage. Multi-disk pools aren't meant to be tossed between OSes.

ZFS is fairly widespread already and its user base is growing steadily on Linux. I agree about not running a "brand-new uber-mega kernel or something" in production environments, and I think being in-tree would require exactly that to get the latest fixes. There is no way that a third-party team is going to do a better job of backporting fixes than the main ZoL developers. We are starting to maintain stable branches. If you want to minimize changes, run one of them. Do not expect a distribution team to do a better job of backporting fixes.
Ideally, I want to run the distro of my choice, to reuse my knowledge and keep the upgrade and maintenance burden minimal. Fiddling with third-party resources is generally the last thing I want to do.

What I am reading is that you would rather run filesystems like btrfs, whose developers say you should always run the latest kernel because of critical bugs that risk data integrity in older releases, while at the same time you think you should not run the latest kernel, even though the developers of in-tree filesystems (particularly btrfs; I have not checked the others) do not take charge of getting bugs fixed for users after they reach mainline. That makes no sense to me.
If we take a look at the commits, each and every Linux kernel release fixes bugs like that, both in various filesystems and in many other places. Btrfs isn't special here. So upgrading is generally a good idea. But there are other considerations as well, e.g. the possibility of fallouts.

            If you are dead set on running old versions of code that have not necessarily been patched with the latest runtime fixes, feel free to run an old ZoL release branch and never get updates for that branch. Backporting to release branches is a thing we are starting to do now. At present, old ZoL bugs are typically problems for uptime, not data integrity. If you want less than ideal uptime, go for it.
I'm not dead set on old code. I want new code, since it fixes known bugs, optimizes various things, brings cool new features, etc. Yet a CONTROLLED flight is much better than chaotic fallouts. So I want timings and estimates. The points where breakage can happen should be known and shouldn't be too frequent, to keep system management overhead down to a minimum. Even better if breakage does not happen at those points either. That is what Debian, Ubuntu, RH, SuSE, and other serious distros do, by the way.

It means there should be backports, to get rid of the most pressing bugs and vulnerabilities in "old" ("frozen") program versions, kernel included. Blindly upgrading to the latest versions can cause even more harm by breaking production environments in unexpected ways and requiring considerable time to investigate and fix. As one recent example, the kernel 4.4 RCs gave me a major fallout on ARM boards. No production system was harmed, but I had a nice excursion into memory management to learn why it failed. It happened due to a new "security" feature that looks reasonable but, in my case, false-alarmed for some reason. It took me about two days to figure out why the kernel was hitting errors. Incidentally, I configure ARM devices to reboot on major kernel errors, to prevent runaway conditions or unstable operation.
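The reboot-on-kernel-error behavior mentioned here is commonly configured through sysctl; a minimal sketch, assuming a Linux system (the file name and the timings are illustrative, not taken from the post):

```
# /etc/sysctl.d/90-panic.conf  (hypothetical file name)
kernel.panic = 10          # reboot 10 seconds after a kernel panic
kernel.panic_on_oops = 1   # escalate an oops to a full panic
```

With these two knobs, a fatal kernel error turns into a short outage and a clean reboot instead of a hung board that needs a manual power cycle.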

Hopefully this example gives an idea of why I do not want to randomly land a new kernel on production systems and prefer to keep the flight controlled. Most of the time, maintainers are supposed to be "goalkeepers" between me and vanilla software. Sometimes I do it myself, but that is the exception, not the norm; it only happens when I need something special and feel I can handle it better than others.

            If my admission that going mainline would hurt code quality upsets you enough that you would sacrifice both your system's data integrity and your ability to get updates backported by the filesystem's own developers, that is your prerogative. I am not going to push for something harmful for the userbase to keep one guy from running a filesystem that does not do as good a job of protecting data.
TBH, I have yet to see an example where going mainline made quality worse. There are opposite examples, though, even if kernel devs aren't fond of citing them in this context. But it sometimes happens.



            • #16
              Originally posted by pegasus View Post
              Comparison? Sure ... since reiserfs is still the only filesystem capable of handling certain io loads. Just ask the fastmail.fm guys.
Interesting. There are supercomputer installations out there running Lustre on ZFS, with 55 petabytes of storage and 1 TB/s of bandwidth. There are large ZFS servers out there with thousands of disks, or even populated with flash or SSD drives, reaching millions of IOPS. I find the fastmail.fm guys' conclusion that ZFS cannot handle large workloads strange.



              • #17
Lustre works nicely if you have BIG files. Handling lots of small files slows it down to the point of uselessness. Its weakness is the single metadata server that handles the whole filesystem. One solution is to use ZFS to carve out subvolumes and assign a separate metadata server to each. This is the direction Lustre development is now taking.

A mail spool usually looks like millions of small files spread over thousands of directories. In such use cases reiserfs still wins. ZFS slows down, forgets to deallocate disk space on delete, and then takes 36 hours to come up after a reboot. First-hand experience ...
