ZFS On Linux Runs Into A Snag With Linux 5.0


  • Originally posted by itoffshore View Post
    Use the right tools for each use case
    But my hammer is the perfect tool!



    • Originally posted by k1e0x View Post

      Speculation. The creators of ZFS are human beings like Matt Ahrens, who clearly does want it to run on Linux.. but.. if the Linux kernel devs don't want it.. I'm sure FreeBSD will have no problem stealing Linux's market share with it.
      This is part of why I moved to FreeBSD in the first place. FreeBSD's kernel team isn't led by a dude who really doesn't give two hoots about anyone but himself, or by a culture he created in his own likeness. The FreeBSD team tends to be pretty open to working with just about anyone to port software to their platform. They're really a great bunch. On top of that, FreeBSD is far better designed from the ground up and puts a lot more power in the hands of the administrator.

      On top of that (and here's a major benefit), ZFS is a first-class citizen. ZoL will probably always have to chase Linus and his band of zombies around as they make willy-nilly changes to the kernel API and exported symbols. ZFS on FreeBSD will always be there.



      • Originally posted by aht0 View Post
        Only using certain ZFS functionality makes it memory hungry. The general whining about ZFS RAM requirements is just FUD. For dead simple large-file storage the RAM requirements are minuscule; you can get by with 768 MB. Seriously.
        ...which is more RAM than the total available on the Raspberry Pi 1 B+ rev 1.2 that was happily serving files from BTRFS as my tiny home server (until I upgraded to a 3 B+ a couple of months ago).

        Originally posted by pgoetz View Post
        If HBA, yes, you can use md, but this is inadequate for enterprise or even work group scale issues where data integrity is absolutely critical. mdadm will happily report that a RAID 5/6 is "healthy" when even a short smartctl test indicates disk errors. Been there, done that, and was barely able to recover the data from the RAID before replacing the (RAID-certified) disks that had developed unreadable sectors.
        This is why you also run periodic SMART tests (using smartd): short tests more frequently, long tests less frequently (e.g. nightly and weekly/monthly), so you can spot anomalies before a hard disk dies.
        Then you can safely use an md RAID 6 layer on top (which still gives you redundancy while you replace a dead drive).
        Ideally, put a checksumming file system on top of that as well, with weekly or monthly scrubs. (BTRFS, for the anti-ZFS trolling factor :-D )

        At workgroup scale, that is enough.
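
        A minimal sketch of that layering, assuming smartmontools' smartd (the device names and schedule below are only illustrative):
        Code:
        # /etc/smartd.conf -- short self-test nightly at 02:00, long self-test Saturdays at 03:00, mail on trouble
        /dev/sda -a -s (S/../.././02|L/../../6/03) -m root
        /dev/sdb -a -s (S/../.././02|L/../../6/03) -m root
        Plus a monthly cron job running btrfs scrub start -B on the filesystem sitting on top of the md array.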

        Originally posted by pal666 View Post
        well, linux is the os for which most apps are developed (lookup android)
        Well, to nitpick: yes, Linux is the kernel currently running the most applications, but you'll have to concede that the Android "I can't believe it's not Java(tm)!" userspace hardly looks like the GNU userland (nor like the busybox userland running on the router that all of the above use to get network access).

        Originally posted by skeevy420 View Post
        Like the BSD kernel that Sony uses with their Playstation products? Sony is only the #1 in the world when it comes to game consoles and they use a non-gpl open source kernel.
        And Apple also uses a BSD layer above their Mach microkernel as part of Mac OS X and iOS, so again some extra BSD presence.

        But still, to get network access all of them are going to be plugged into a router that runs Linux+Busybox
        (not to mention that if the PlayStation is plugged into a recent TV, the "smart" functionality is most likely provided by a Linux kernel too).

        So even in a world composed exclusively of Sony PS4s and Apple Macs, they still would be outnumbered by the pieces of hardware running Linux.

        Originally posted by aht0 View Post
        Server space, excepting web servers (intranet), is dominated by Windows servers (~60%+ - I am extrapolating it from analytical data for the Netherlands, which is your average free democratic country, "as good as any other"),
        Depends on your field of work. Research and academia are strongly dominated by Linux, and they are the ones running extremely large clusters.
        The cloud is mostly Linux too (except for a few non-conformists running BSD VMs out of spite, and a couple of instances on Azure).

        So if you count on a per-machine (rather than per-business) basis, non-Linux servers are basically a joke.

        A bad joke.

        Originally posted by aht0 View Post
        The smart phone business is dominated by Android.. we'll see whether Google's Fuchsia eats it out of the market in the future or not..
        Given all the know-how embedded chipset/board/phone manufacturers have invested in making drivers for the Linux kernel, it's going to be an uphill battle to persuade them to switch to making chipsets and drivers for a different OS.
        With all its war chest, Microsoft wasn't all that successful at getting them to provide Windows Mobile powered hardware.

        Originally posted by pgoetz View Post
        and are using this in a professional situation? I'd be happy to switch to Btrfs, but can't find any storage admin willing to endorse such a thing. Everyone (but you) views Btrfs as too unstable for serious production use
        *I* am using BTRFS in professional situations too, though on slightly beefier configurations than the above-mentioned Raspberry Pis.

        BTRFS is stable; RAID5/6 is about the only optional feature that isn't yet. As long as you don't rely on it (use RAID 0, RAID 1, dup, etc., or stack it above md), you're good to go.
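
        For the record, a rough sketch of what I mean by "don't rely on it" (the device names are made up):
        Code:
        # checksumming plus redundancy using the stable profiles: btrfs RAID1 for data and metadata
        mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
        # or keep parity RAID in md and put btrfs (single data, dup metadata) on top of it
        mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
        mkfs.btrfs -d single -m dup /dev/md0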

        Originally posted by SystemCrasher View Post
        Btrfs also got very decent testing coverage when Facebook deployed it on their servers. Sure, not all features, just a subset of them. I strongly doubt ZoL gets anywhere near comparable testing coverage.
        Actually, ZFS *is* deployed on some Linux HPCs I know of.

        Originally posted by itoffshore View Post
        is better suited to running vm images than BTRFS.
        ...unless you turn off CoW on the virtual disk file. Then it's as well suited as ext4 or anything else.
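
        For example (a sketch; the directory is just the usual libvirt location, use whatever yours is; +C only affects files created after the flag is set):
        Code:
        # make newly created VM images in this directory NOCOW (which also disables checksumming for them)
        mkdir -p /var/lib/libvirt/images
        chattr +C /var/lib/libvirt/images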



        • Originally posted by aht0 View Post
          (1) First of all, get yourself a decent laptop
          Cool, so I'm supposed to change laptops and pay quite some money for.... what exactly? Any measurable gains worth all these tantrums, money spent, OS reinstalls, etc.? For me a laptop isn't the center of the universe, just one of the machines I use. I'm not okay when some tech throws such bizarre demands at me like that.

          (2) Ehm, do you really-really think ZFS is limited here? zfs set copies=2 dataset, OR use 2 mirrored ZFS partitions on a single drive, OR combine the two. Or increase the number of copies. OR do everything mentioned and have an absurd number of copies.
          Yes, I do think failing to support a busload of existing configurations, including those I do care about, is a limitation. Furthermore, such replies from ZFS zealots are what makes me not really fond of this thing. Look, I like btrfs because it has proven to be convenient, doesn't throw unreasonable demands at me, and takes existing real-world configurations into account. Somehow I think things should happen this way.

          BTRfs is actually the more vulnerable of the two, because it's using crc32c checksums and makes, AFAIK, only 2 copies of metadata.
          I'm not looking for super-solutions to all of humankind's problems. While CRC32 surely isn't perfect and just 2 copies could be a problem if one needs ULTIMATE reliability, that's surely not the case on a laptop. At the end of the day, keeping a dozen copies of metadata on a laptop is a dumb thing to do for many other reasons. Look, just some funny DC-DC converter failure inside, and all your dozen copies of metadata are TOAST, as well as most of the electronic components around them. Same crap if you just spill water (coffee, tea, whatever). And these failure modes are more likely than getting unreadable metadata at exactly the same offsets or corruption slipping through CRC32; both imply I neglected storage failures for a while, and if I had been that ignorant, I would hardly be using any filesystem reasonably anyway, especially ZFS, which throws such weird demands at me and makes inconvenient assumptions.

          ZFS uses Merkle hash trees and spreads its metadata around. When you go fully paranoid you can configure it to be resistant to hundreds of bad sectors. Your drive would probably die long before you get to worry about it, and when your drive suddenly tucks its tail under its head and says "good night", no file system can help you.
          Except that it just does not work on a single-drive configuration out of the box by easy means. And if it comes to complicated means - damn, it would be easier to just unroll an OS template and backups should the drive fail that badly and need total replacement or something like that. And no, I'm not going to turn my laptop into a data warehouse just to keep ZFS happy.

          Anyway, bullshit again. I've used ZFS on an Asus eeePC netbook, turning off fancy features, and I am using ZFS on a 2-drive mirror in my current Dell laptop.
          I would agree, it's bullshit to start your message with a demand to buy a "decent laptop" and then mumble something about an EEE. Hilarious.

          And in my i7 gaming PC. And it does not have issues of space congestion or fragmentation.
          Uhm yeah, except for one little problem: CoW inherently implies fragments. Furthermore, VM CoW disks vs. FS CoW, DBs, just downloading some torrents and so on may not play well together. And somehow I prefer a technical solution over loud marketing BS. So btrfs got it, ranging from deduplication/thin provisioning that doesn't hog resources (e.g. reflinks) to defrag should the mentioned assumption fail to work. Something ZFS wasn't able to afford.
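
          For the record, a quick sketch of what I mean (the paths are made up):
          Code:
          # thin clone of a VM image via reflink: shares extents, costs next to nothing until blocks diverge
          cp --reflink=always base.img clone.img
          # and if CoW-induced fragmentation does become a problem, defrag after the fact
          # (note that defragmenting can break reflink/snapshot sharing)
          btrfs filesystem defragment -r -t 32M /var/lib/libvirt/images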

          Sure, if you can't access your filesystem at all, as the topic implies, that could be called a stable condition, but I think there is some catch.
          Last edited by SystemCrasher; 01-17-2019, 06:55 AM.



          • Originally posted by DrYak View Post
            Actually, ZFS *is* deployed on some Linux HPCs I know of.
            I can remember it being used by LLNL or something like that. However, those are very specialized use cases: they can afford customized distros, plenty of staff to maintain all that, a very specific approach to managing configurations, uncommon HW setups and so on. I don't think I need or want something like that. Btrfs managed to fit my existing use cases with minimal changes and effort - and I really like it when things work this way. ZFS isn't like that. Look, aht0 already offered me the idea of replacing my laptop. Sure, I'm so excited about solutions like that...

            I do think a billion Facebook users still gives better overall testing coverage than what a few HPC installations could ever afford.

            As for Android and Java... hum, well, gamedevs aren't big fans of Java for some reason. So there is the NDK. Though it's an odd kind of Linux, sure.
            Last edited by SystemCrasher; 01-17-2019, 07:01 AM.



            • Originally posted by SystemCrasher View Post
              Except that it just does not work on a single-drive configuration out of the box by easy means.
              Have you ever tried to use ZFS?



              • Originally posted by DrYak View Post
                .....
                ----------------------------------------------------------------------------------------------------
                Not unexpected at all to see another Linux supremacist beating his e-dick.
                Inferiority-superiority complex at its finest.



                • Originally posted by GrayShade View Post

                  Have you ever tried to use ZFS?
                  Only as a short-term experiment in a VM, so my practical experience is verrrry limited and rather negative. Why? I disliked my experience compared to btrfs, so I gave up easily and early. Look, btrfs will use the "dup" storage scheme for metadata (DUP stores metadata twice on the same device) BY DEFAULT if it spots that the filesystem consists of a single drive. It can do the same trick for data if you ask. I have to admit that's a really sane and friendly default behavior, btw. And it's okay to change your mind and ask for it later; it can convert the storage scheme on the fly if desired. It is transparent and not set in stone. Somehow all the guides and use cases around ZFS revolve around high-profile enterprise, and that just does not look like my configurations. Nor do I want to put in as much effort as enterprise admins would.
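
                  For illustration (a sketch; /dev/sdX and the mount point are placeholders):
                  Code:
                  # single-device btrfs: metadata gets the DUP profile by default, data stays "single"
                  mkfs.btrfs /dev/sdX
                  mount /dev/sdX /mnt
                  # changed your mind later? convert the data profile to DUP on the fly
                  btrfs balance start -dconvert=dup /mnt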

                  There is also a certain chance that btrfs just maps better onto my ways of thinking and using computers. Somehow I do understand what this thing does and why, and I find it very logical most of the time. I like this feeling.



                  • Originally posted by SystemCrasher View Post
                    Look, btrfs will use the "dup" storage scheme for metadata (DUP stores metadata twice on the same device) BY DEFAULT if it spots that the filesystem consists of a single drive. It can do the same trick for data if you ask.
                    ZFS always stores multiple copies (I don't know how many) of the metadata since it deems them to be too important. If you have more than one drive, it will split them across the drives. Regardless of that, you can also configure it to store multiple copies of the data. It doesn't by default, but it's a single command away.

                    As for using a single disk, you can do that just fine when you set up your pool. You can start with one and add a mirror later if you feel like it.
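
                    Roughly (a sketch; pool and device names are placeholders):
                    Code:
                    # single-disk pool, with extra redundancy for the data on that one disk
                    zpool create tank /dev/sda2
                    zfs set copies=2 tank
                    # later: turn it into a two-way mirror by attaching a second device
                    zpool attach tank /dev/sda2 /dev/sdb2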

                    What ZFS isn't great at is reconfiguring the pool after it's created. You can expand it, and you can add mirrors, but you can't e.g. switch from a mirror to a RAIDZ pool. You also can't remove drives from the pool, although I believe that's being worked on.

                    As for familiarity, the ZFS commands and terminology are a little strange, but -- having used md, I'd take ZFS over it any day. You'll also find a lot of FUD wrt. ZFS (like the oft-repeated advice of having 1 GB RAM / 1 TB of data).

                    I haven't tried btrfs, but AFAIK it doesn't have a stellar history of not eating your data (regardless of your personal experience with it). When I built my NAS I wanted it to do a good job at not losing data. In exchange, I paid the cost in learning about it, and using an out-of-tree module. I don't regret that trade-off at all.



                    • Originally posted by SystemCrasher View Post
                      Cool, so I'm supposed to change laptops and pay quite some money for.... what exactly? Any measurable gains worth all these tantrums, money spent, OS reinstalls, etc.? For me a laptop isn't the center of the universe, just one of the machines I use. I'm not okay when some tech throws such bizarre demands at me like that.
                      Then why the earlier whining about having only a single disk? When you are buying hardware for yourself, unless you are a woman or gay - you don't buy tech based on how it looks but based on what's in it. Non-Windows users especially so, because Linux hardware support has not always been good. Roll back 15 years and getting a Linux graphical desktop working was a major pain in the ass. Of course, it could be that you are just a kid pretending to be an adult, your laptop was a gift from your parents, and you had no say in the process.

                      Originally posted by SystemCrasher View Post
                      Yes, I do think failing to support a busload of existing configurations, including those I do care about, is a limitation. Furthermore, such replies from ZFS zealots are what makes me not really fond of this thing. Look, I like btrfs because it has proven to be convenient, doesn't throw unreasonable demands at me, and takes existing real-world configurations into account. Somehow I think things should happen this way.
                      Originally posted by SystemCrasher View Post
                      I'm not looking for super-solutions to all of humankind's problems. While CRC32 surely isn't perfect and just 2 copies could be a problem if one needs ULTIMATE reliability, that's surely not the case on a laptop. At the end of the day, keeping a dozen copies of metadata on a laptop is a dumb thing to do for many other reasons. Look, just some funny DC-DC converter failure inside, and all your dozen copies of metadata are TOAST, as well as most of the electronic components around them. Same crap if you just spill water (coffee, tea, whatever). And these failure modes are more likely than getting unreadable metadata at exactly the same offsets or corruption slipping through CRC32; both imply I neglected storage failures for a while, and if I had been that ignorant, I would hardly be using any filesystem reasonably anyway, especially ZFS, which throws such weird demands at me and makes inconvenient assumptions.

                      Except that it just does not work on a single-drive configuration out of the box by easy means. And if it comes to complicated means - damn, it would be easier to just unroll an OS template and backups should the drive fail that badly and need total replacement or something like that. And no, I'm not going to turn my laptop into a data warehouse just to keep ZFS happy.
                      Actually, your case is revulsion caused by ideological NIH syndrome combined with a total lack of experience.
                      ZFS single-drive install on a laptop, the easiest way:
                      You'd run the installer, reboot the machine upon completion, open a shell, get root and type
                      Code:
                      zfs set copies=3 volume_name
                      In order to determine "volume_name" just type
                      Code:
                      mount
                      which will give you a list of mounts. You can do it fine-grained (for example, manual files are not exactly critical, you don't need triplicate copies of them) or, if you have a bunch of spare space, just apply zfs set copies=3 volume_name to the actual / and every sub-volume will inherit it.
                      Want to check the results? Type
                      Code:
                      zfs get copies
                      That's all there is to it. Probably too difficult for a WinLinux fanboy without a GUI, though.
                      Originally posted by SystemCrasher View Post
                      I would agree, it's bullshit to start your message with a demand to buy a "decent laptop" and then mumble something about an EEE. Hilarious.
                      And it's an equally bullshit response, because the reason I even talked about the eeePC was to point out that ZFS can run even on it. Those things are barely faster than shitty Raspberry Pis. And yet here you are, crying crocodile tears claiming it does not run on your little laptop. Get a grip on reality, dude.

                      Originally posted by SystemCrasher View Post
                      Uhm yeah, except for one little problem: CoW inherently implies fragments. Furthermore, VM CoW disks vs. FS CoW, DBs, just downloading some torrents and so on may not play well together. And somehow I prefer a technical solution over loud marketing BS. So btrfs got it, ranging from deduplication/thin provisioning that doesn't hog resources (e.g. reflinks) to defrag should the mentioned assumption fail to work. Something ZFS wasn't able to afford.
                      Sure, if you can't access your filesystem at all, as the topic implies, that could be called a stable condition, but I think there is some catch.
                      Fragments are an utterly meaningless concept in this modern era of SSDs. FreeBSD's ZFS has built-in TRIM, by the way; no clue about ZoL. It's been there like forever now. I can't even understand why you'd whine about defragmentation. Even the enterprise is slowly migrating to SSDs.

