ZFS On Linux Runs Into A Snag With Linux 5.0


  • #31
    Couldn't care less. Invest your time in btrfs instead of this outdated piece of crap ZFS.



    • #32
      Originally posted by cygn View Post

      As oiaohm explained above, those symbols have been deprecated since 2008, which is roughly when ZoL started, by the way. So no, this one is not on the kernel devs.
      I accept the explanation as plausible. Still, I'd ask: what's the probability that the ZoL devs simply did not notice the deprecation 10 years ago? Does all deprecated stuff get re-announced version after version, or what? That codebase is quite a handful in itself, without adding the Linux kernel on top of it.
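
      For what it's worth, a deprecated C symbol does get "re-announced" on every single build once it's tagged, because every caller trips a compile-time warning. A minimal userspace sketch of the mechanism (function names made up for illustration):

      Code:
      /* build with: gcc -Wall demo.c */
      #include <stdio.h>

      __attribute__((deprecated("use new_api() instead")))
      static void old_api(void) { puts("old_api"); }

      static void new_api(void) { puts("new_api"); }

      int main(void)
      {
          old_api();  /* warning: 'old_api' is deprecated: use new_api() instead */
          new_api();
          return 0;
      }

      The kernel has a similar __deprecated attribute, though whether these particular symbols were ever tagged with it in the headers is another question.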



      • #33
        Originally posted by lichtenstein View Post

        Ditto (well, less than 5 but still). I have it on my mini-server mirroring two external USB 4TB drives. I explicitly chose to run it like that instead of going for an external case that mirrors the drives itself. With its checksumming, btrfs provides bitrot protection, which is why I use it. It's been very stable and fast and I've had no issues with it.
        BTW, I have experience with ZFS (on a separate FreeBSD machine), so I'm aware it provides similar protection, but it's a mini-server and ZFS doesn't like "mini" - it needs (lots of) RAM to perform well. I could use it on my desktop (ext4 atm), but in order to reap the benefits I really would need more than one drive - otherwise zfs/btrfs can report but not correct errors.
        Only certain ZFS functionality makes it memory hungry. The general whining about ZFS RAM requirements is mostly FUD. For dead simple large-file storage the RAM requirements are minuscule; you can get by with 768 MB. Seriously.
        What requires lotsa RAM? Deduplication and L2ARC. That's it.
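
        If you want to check that claim on your own box, ZoL exposes the ARC counters in procfs, so a trivial reader shows the actual footprint and the configured ceiling. Quick sketch in C (the /proc/spl/kstat/zfs/arcstats path is standard on ZoL; error handling kept minimal):

        Code:
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            /* "size" = current ARC footprint in bytes, "c_max" = its ceiling */
            FILE *f = fopen("/proc/spl/kstat/zfs/arcstats", "r");
            char line[256];

            if (!f) {
                perror("arcstats");
                return 1;
            }
            while (fgets(line, sizeof(line), f)) {
                if (strncmp(line, "size ", 5) == 0 ||
                    strncmp(line, "c_max ", 6) == 0)
                    fputs(line, stdout);
            }
            fclose(f);
            return 0;
        }

        And if you really are on 768 MB, you can pin the ceiling down yourself with the zfs_arc_max module parameter (an "options zfs zfs_arc_max=..." line in /etc/modprobe.d/zfs.conf).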



        • #34
          aht0, thanks for clarifying. I've never used dedup.

          In any case, I'm saying that btrfs is perfectly fine for a single disk and for my use case, which is a simple mirror. I've had no experience with raid5, etc. Another reason I went with it was/is that it's Linux-native, so I expected it to be better supported and maintained (on Linux) than ZFS. Looking at the current issue (and, according to previous posters, similar issues in the past), it looks like it was the right decision.



          • #35
            Originally posted by lichtenstein View Post
            aht0, thanks for clarifying. I've never used dedup.

            In any case, I'm saying that btrfs is perfectly fine for a single disk and for my use case, which is a simple mirror. I've had no experience with raid5, etc. Another reason I went with it was/is that it's Linux-native, so I expected it to be better supported and maintained (on Linux) than ZFS. Looking at the current issue (and, according to previous posters, similar issues in the past), it looks like it was the right decision.
            All I have at hand right now is a laptop (at work). Same setup, a mirror like your btrfs machine: an encrypted ZFS mirror (AES-XTS, 256-bit). Small disks though, and not even equally sized; the extra unused space on the larger drive is partly occupied by a swap file. It's a Dell Inspiron 17 N7110 laptop (yeah, Optimus graphics can work), with the 2nd drive sitting in the DVD bay. I've tried multi-terabyte arrays in my desktop machine, and as long as I stay away from the fancy features, RAM is not an issue I bother to think about.

            [Screenshot attachment: memory usage while running Plasma 5.]


            Not a lot to be afraid of, RAM-wise, especially in casual usage.
            Last edited by aht0; 11 January 2019, 09:20 AM.



            • #36
              Originally posted by aht0 View Post
              I think of it as a case of upstream breaking the internal APIs yet again, not caring in the least how it would affect downstream. Happens all the time. Shit breaks because an upstream dev thinks it's good to make some minor random change, and like a "butterfly effect" a bunch of stuff suddenly breaks downstream.
              Does Mr. or Ms. "upstream dev" care? Not in the least.
              Do upstream developers actually need to care about changing an INTERNAL API that has ALWAYS been declared unstable to begin with?

              Their pact with downstream has always been clear about what is stable and what is not. This is not a breach of that pact.

              This thing was removed once the last in-kernel user was removed: https://marc.info/?l=linux-kernel&m=154689892914091

              Mr "2.nd in command after Linus" seems to be guided here by his own preconceptions rather than anything else - I checked follow-up mails and reached that conclusion. Biggest problem for him seems to be that ZFS originated from Solaris (NIH).
              Ah come on, is that how you read "Sun explicitly did not want their code to work on Linux, so why would we do extra work to get their code to work properly?"
              and the follow-up "Sorry, no, we do not keep symbols exported for no in-kernel users."?

              Because while I personally don't like this specific case (I'm no ZFS hater), I kind of understand that they have to be inflexible on basic rules and can't ignore license incompatibility.

              If they start making special cases based on personal sympathy then it all falls apart pretty quick.

              Ignoring license incompatibility is also a very bad thing to do, especially since Oracle is the copyright holder and can go do some old-school legal trolling if they feel like there is profit in doing so.

              Not that I actually particularly care, more power to FreeBSD.
              This is a very minor thing that can be fixed on the ZoL side.
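
              Roughly, the ZoL-side fix is a compatibility layer: use the old exports where they still exist, compile a scalar fallback where they don't. A sketch of the idea (the local_fpu_* macro names are made up; on pre-5.0 x86 kernels the declarations live in asm/fpu/api.h):

              Code:
              /* Compat wrapper an out-of-tree module might carry.
               * Pre-5.0 x86 kernels exported __kernel_fpu_begin()/__kernel_fpu_end()
               * to all modules; 5.0 removed them, and the remaining kernel_fpu_*
               * exports are GPL-only, unusable from a CDDL module. */
              #include <linux/version.h>

              #if defined(CONFIG_X86) && LINUX_VERSION_CODE < KERNEL_VERSION(5, 0, 0)
              #include <asm/fpu/api.h>
              #define local_fpu_begin()  __kernel_fpu_begin()
              #define local_fpu_end()    __kernel_fpu_end()
              #define HAVE_KERNEL_SIMD   1
              #else
              /* No usable export: make the wrappers no-ops and select the
               * scalar (non-SIMD) checksum/parity implementations instead. */
              #define local_fpu_begin()  do { } while (0)
              #define local_fpu_end()    do { } while (0)
              #define HAVE_KERNEL_SIMD   0
              #endif

              A plain version check is the crude form of this; detecting the symbols at build time is more robust, since distro kernels backport patches across version numbers.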



              • #37
                Originally posted by LaeMing View Post
                where are their efforts?
                (Whining that others aren't doing what you want without interest or recompense doesn't count as effort, shocking as it may seem!)
                just a quick shortlist...
                LLNL has contributed plenty to Linux; EDAC, Lustre, and ext4 are three examples.
                Sun gave you NFS.
                Oracle contributes quite a fair bit to Linux, even if all they seem interested in is their UEK. They started btrfs.
                I believe Proxmox contributes to Debian or Ubuntu.
                Last edited by some_canuck; 11 January 2019, 09:51 AM.



                • #38
                  Originally posted by GruenSein View Post
                  The question "So why would we do extra work to get their code to work properly?" can be answered quite easily. It is because many people have wanted to use ZFS for a long time and it is one of the most advanced FSs for its purpose.
                  That's not how Linux, or open source in general, works. Open source works by "you want it, you maintain it".

                  People often think open source means "I work for free for you", but it does not. What is given away free is the finished product; if it does not work with your stuff, that is your own problem.

                  It is not like the Linux kernel crew is asked to do ZOL a favor out of the goodness of their hearts.
                  Actually, yeah, it is. There is no reason to make any concession to ZOL, as it is not following the Linux kernel's rules (it's an out-of-tree module).

                  Also, I don't get why they don't simply restore the symbols that ZOL uses and mark them "deprecated" or whatever until the software depending on it can adapt (assuming there is an actual reason to remove them at all).
                  Because removing them makes it very clear that the software depending on them must adapt ASAP, not eventually, five years down the road, when there's some spare time. See above about "you want it, you maintain it".

                  Still, this is blown a bit out of proportion. Changes to the kernel's internal APIs aren't a new thing. People using ZOL or any other out-of-tree module don't usually expect it to support a new Linux kernel version on release day, as there WILL BE breakage, and the overwhelming majority of users will be on a distro with an LTS kernel, where ZOL will keep working fine for years.
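
                  And "adapting" is fairly mechanical: out-of-tree modules like ZOL probe the target kernel at build time instead of trusting version numbers. A sketch of such a probe (hypothetical conftest file; the assumption is that the configure step tries to build it against the kernel headers and keys off success or failure):

                  Code:
                  /* fpu-probe.c - does this kernel still offer the old FPU exports? */
                  #include <linux/init.h>
                  #include <linux/module.h>
                  #include <asm/fpu/api.h>

                  static int __init fpu_probe_init(void)
                  {
                      __kernel_fpu_begin();
                      __kernel_fpu_end();
                      return 0;
                  }

                  static void __exit fpu_probe_exit(void)
                  {
                  }

                  module_init(fpu_probe_init);
                  module_exit(fpu_probe_exit);

                  /* Non-GPL tag on purpose: modpost refuses to link GPL-only
                   * symbols to it, so the probe fails exactly where a CDDL
                   * module would fail. */
                  MODULE_LICENSE("CDDL");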



                  • #39
                    Originally posted by aht0 View Post
                    Only certain ZFS functionality makes it memory hungry. The general whining about ZFS RAM requirements is mostly FUD. For dead simple large-file storage the RAM requirements are minuscule; you can get by with 768 MB. Seriously.
                    What requires lotsa RAM? Deduplication and L2ARC. That's it.
                    The "high amount of RAM" needed is for work-like loads, mostly for caching (L2ARC) and stuff which is necessary when you want to compete with hardware RAID cards on database servers or something.

                    For basic home server/NAS use you can get away just fine without a large RAM cache.

                    I have a strong suspicion that most FreeNAS people are heavily overspeccing their systems the same way that gamers add RGB lights and fans.



                    • #40
                      "My tolerance for ZFS is pretty non-existant. Sun explicitly did not want their code to work on Linux, so why would we do extra work to get their code to work properly?"

                      That's what someone who doesn't have to deal with real users and real-world workloads might say. For at least a couple of my projects, ZFS is by far the best solution. I'm disappointed to hear stuff like this uttered by high-level kernel developers.

