ZFS On Linux Runs Into A Snag With Linux 5.0


  • Originally posted by aht0 View Post
    When you are buying hardware for yourself, unless you are a woman or gay - you don't buy tech based on how it looks but on what's in it.
    That remark is completely unnecessary, and doesn't help make your point.

    Originally posted by aht0 View Post
    In order to determine "volume_name" just type
    Code:
    mount
    Or just
    Code:
    zfs list
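For readers following along, both commands are standard; a minimal sketch (the pool and dataset names shown are invented for illustration):

```shell
# List ZFS datasets and where they are mounted; "tank" is a hypothetical pool
zfs list -o name,used,avail,mountpoint

# Alternatively, plain `mount` also shows each mounted dataset by name
mount | grep zfs
```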
    Originally posted by aht0 View Post
    FreeBSD's ZFS has built-in TRIM by the way. No clue about ZoL. Been there like forever now.
    It's been in the works for a while now.
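For the record, TRIM support did eventually land in ZFS on Linux with the 0.8 release. A sketch of the relevant commands, assuming a pool named `tank`:

```shell
# One-shot manual TRIM of free space (ZoL >= 0.8); "tank" is a placeholder pool name
zpool trim tank

# Or let the pool discard freed space continuously
zpool set autotrim=on tank

# Show per-vdev trim progress
zpool status -t tank
```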

    Originally posted by aht0 View Post
    I can't even understand why you'd whine about defragmentation.
    ZFS isn't so bad with fragmentation, especially if you add a SLOG. My pool (maybe a year old; I re-created it recently) is at 1% fragmentation.
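For anyone wanting to check their own numbers, a sketch of how pool fragmentation is inspected and a separate log device (SLOG) added; the pool name `tank` and the device path are placeholders:

```shell
# The FRAG column reports free-space fragmentation of the pool
zpool list -o name,size,alloc,frag,health tank

# Add a dedicated log device (SLOG) on a fast SSD/NVMe partition
zpool add tank log /dev/nvme0n1p2
```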



    • Originally posted by GrayShade View Post
      That remark is completely unnecessary, and doesn't help make your point.
      I felt it was necessary. I really do think that buying expensive but limited hardware, when it could be avoided with some forethought, ain't something to brag about, especially doing it in a sneering manner while implying some particular item/feature/whatever (in this case ZFS) is next to useless or inferior just because of his particular use case. A use case which was based on a thoughtless buying decision to start with.

      Thanks for the rest of the data bits. I have OpenZFS primarily on SSD's, so I haven't bothered with fragmentation.



      • Originally posted by aht0 View Post
        I felt it was necessary. I really do think that buying expensive but limited hardware, when it could be avoided with some forethought, ain't something to brag about, especially doing it in a sneering manner while implying some particular item/feature/whatever (in this case ZFS) is next to useless or inferior just because of his particular use case. A use case which was based on a thoughtless buying decision to start with.
        And that's a good way to put it (and I agree). But there's no point in picking on women or other social categories. You're only showing how ZFS users are misogynistic assholes, or whatever.



        • Originally posted by GrayShade View Post
          And that's a good way to put it (and I agree). But there's no point in picking on women or other social categories. You're only showing how ZFS users are misogynistic assholes, or whatever.
          Don't really care. Political correctness could be chased until it ends up distorting the very perception of reality; it does not affect reality itself. There are probably people thinking God is racist because snow is white.
          Simple facts of life remain: women base their buying decisions mostly on appearance - very few are techno-savvy and care about actual specs - and gay males are much the same.

          If I have to self-censor every thought through the "political correctness" filter, this Western society ain't better than the Soviet Union was. The differences are only in what's allowed or not. Sorry, I was born in the USSR; the present ideas/practices of political correctness are simply repugnant. I want nothing to do with that bs.
          Last edited by aht0; 01-28-2019, 11:58 AM.



          • Originally posted by aht0 View Post
            Fragments are an utterly meaningless concept in this modern era of SSDs.
            Not everyone has your disk storage needs, so no.



            • Originally posted by Weasel View Post
              Not everyone has your disk storage needs, so no.
              Regardless, mech drives will eventually go extinct, except for purely archival purposes.

              Trends are such that smaller SSDs are already quite cheap, with bigger and bigger ones coming onto the market. Besides, mech drives have reached their physical limitations. There's nowhere left to go.



              • Originally posted by aht0 View Post
                Then why the previous whine about having only a single disk earlier? When you are buying hardware for yourself, unless you are a woman or gay - you don't buy tech based on how it looks but on what's in it.
                When I buy a laptop for myself, I care about battery life, weight, good Linux HW support and overall bang per buck as high priorities. Turning it into a data warehouse isn't a high priority. If I needed decent storage, more suitable hardware would do the job much better anyway. For a laptop it is just an utterly pointless and inefficient waste of money to buy expensive exotic crap; it hardly justifies itself. Two drives are cool. Would it at least have ECC RAM to ensure corruption does not get through? Are there laptops and laptop RAM with ECC support?
                Non-Windows users especially so, because Linux hardware support has not always been good. Roll back 15 years and getting a Linux graphical desktop working was a major pain in the ass. Of course, it could be you are just a kid pretending to be an adult and your laptop was a gift from your parents; you had no say in the process.
                Look, adult man who seems to be unable to read and follow the thread: I ALREADY OWN A LAPTOP. And I'm more or less fine with its specs for its jobs. Btrfs just adds some little extra value to my existing hardware. Demanding that I buy yet another laptop to make ZFS happy, or to prove I'm not a woman or gay but an adult man, is so... childish. Sorry, but it's just not going to happen. A little added value for nothing is neat. Exotic uber-special configurations at uber-special prices... that is what I always disliked about Sun's way of doing things. You can mumble whatever shit you want, but I've checked the local store and, honestly, >1 drive in a laptop is still a hell of an exotic setup.
                Actually, your case is a revulsion caused by ideological NIH syndrome combined with a total lack of experience.
                My case is simple: I want system management to be easy, logical, reasonable, and to not eat my brain where it isn't supposed to. Bothering me with demands for special configurations will only achieve one thing: if I still had that thing, I would get rid of it. I do believe good things should cope with commodity hardware, including arbitrarily chosen hardware.
                ZFS single drive install on a laptop, easiest way:
                But it wouldn't reasonably protect either data or metadata on a single-drive system. And when it comes to demands to buy the "right hardware", oh, I would rather join the women and gays. As long as it makes life cheaper and easier, I'm fine with it. And expecting everyone to drop their existing HW and go buy something else is naive, to say the least.
                That's all there is to it. Probably too difficult for a WinLinux fanboy without a GUI though.
                I'm not a GUI fan. However, that does not imply I would jump like crazy and go replace my existing HW just because some fanboy tells me my HW is crap. Look, if one thing supports my HW reasonably and another does not, I consider that an advantage of the first and a disadvantage of the second. As simple as that.

                And it's an equally bullshit response, because the reason I even brought up the eeePC was to point out that ZFS can run even on it. These things are barely faster than shitty Raspberry Pis. And yet here you are, crying crocodile tears, claiming it does not run on your little laptop. Get a grip on reality, dude.
                In reality I've used btrfs even on 256 MB RAM ARM SBCs and 64 MB RAM MIPS routers. That's how I define good HW support. So if the eee is the weakest HW you can imagine, you've got a very poor imagination for my taste.
                Fragments are an utterly meaningless concept in this modern era of SSDs.
                Only to some extent. Parsing a truckload of metadata still won't improve performance, for sure. And if you have a truckload of fragments, you have a truckload of metadata describing where to get them. Maybe not a big deal on a filesystem that can't even afford "real" extents, but overall it could be noticeable on modern designs with large extents.
                FreeBSD's ZFS has built-in TRIM by the way. No clue about ZoL.
                TRIM has got nothing to do with fragments. Furthermore, when a file is finally removed and the trim request hits the flash translation layer of the SSD, it is a very open question what happens and whether it is anything good. Trimming large contiguous areas is surely good: it allows the SSD to prepare erase blocks easily, etc. But if it is some fragmented spaghetti, the SSD has to either go for far more complicated RMW & GC sequences in its firmware - more dangerous in terms of power loss/crash and taking plenty of time - or ignore the request.
                Been there like forever now. I can't even understand why you'd whine about defragmentation. Even enterprise is slowly migrating to SSDs.
                They do - where it works better for them. I do the very same. Price per gig on SSDs is nowhere close to HDDs, so HDDs still make a lot of sense for storing relatively "cold" data. On a side note, my runs of btrfs on SSDs showed it to be quite "SSD-friendly" in terms of wearout (amount of writes, tracked via SMART).
                Last edited by SystemCrasher; 03-02-2019, 12:22 PM.



                • Originally posted by GrayShade View Post
                  ZFS always stores multiple copies (I don't know how many) of the metadata since it deems them to be too important.
                  "Always" isn't terribly good idea either. Look, I could use filesystems in vastly different scenarios. Say, if I do seed torrent from that single board computer, I do not really care about filesystem state. Should it send bad data, it would be detected on application layer by others. Should OS go nuts, it would be reimaged in like ~2 minutes. At which point I don't really like 2 or more metadata copies as something mandatory. It would be just pointless slowdown.
                  If you have more than one drive, it will split them across the drives. Regardless of that, you can also configure it to store multiple copies of the data. It doesn't by default, but it's a single command away.
                  Haven't ever seen examples of how to configure metadata redundancy. Data is another story. Let us assume data and metadata may have different value - yet this may or may not be true in a particular scenario.
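For what it's worth, OpenZFS does expose knobs for both: the `copies` dataset property adds extra data copies ("ditto blocks"), and `redundant_metadata` controls how aggressively metadata is duplicated. A sketch, with `tank/important` as a made-up dataset:

```shell
# Keep two copies of every data block, even on a single disk
zfs set copies=2 tank/important

# Metadata redundancy is a separate property; "all" is the default,
# "most" relaxes it for a performance gain
zfs set redundant_metadata=all tank/important

# Verify both settings
zfs get copies,redundant_metadata tank/important
```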

                  As for using a single disk, you can do that just fine when you set up your pool. You can start with one and add a mirror later if you feel like it.
                  But I can't mirror existing data and/or metadata on the same disk, if that disk is the only one I've got, as far as I understand.
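The single-disk-then-mirror upgrade path mentioned above can be sketched as follows; the pool and device names are placeholders:

```shell
# Start with a plain single-disk pool
zpool create tank /dev/sda2

# Later, attach a second device to the existing one;
# ZFS resilvers and the vdev becomes a two-way mirror
zpool attach tank /dev/sda2 /dev/sdb2

# Watch the resilver and the resulting mirror topology
zpool status tank
```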

                  What ZFS isn't great at is reconfiguring the pool after it's created. You can expand it, and you can add mirrors, but you can't e.g. switch from a mirror to a RAIDZ pool. You also can't remove drives from the pool, although I believe that's being worked on.
                  That's what I like about btrfs. Even if I don't like the current setup, it can be reconfigured to do something else, without stopping the storage and so on. I think that is a pretty awesome idea. Just as well as the ability to add whatever arbitrary drives I've got - and get them as abstract "free space" that doesn't care on its own what storage scheme is used on top of it. I consider this approach a big win in terms of management woes.
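The online reconfiguration described above can be sketched as follows; the mount point and device path are placeholders:

```shell
# Grow an existing btrfs filesystem with an extra device, while mounted
btrfs device add /dev/sdb1 /mnt

# Convert both data (-d) and metadata (-m) profiles to RAID1 on the fly
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt

# Inspect the resulting allocation across devices
btrfs filesystem usage /mnt
```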

                  As for md and suchlike, well, I don't really like it. TBH, what I REALLY don't like is the idea of static allocation of RAIDs. It makes management a pain in the rear, and reconfiguration turns into horror and hell. Btrfs is meant to be better than that, and speaking for myself it feels like an improvement in this regard.

                  I haven't tried btrfs, but AFAIK it doesn't have a stellar history of not eating your data (regardless of your personal experience with it). When I built my NAS I wanted it to do a good job at not losing data. In exchange, I paid the cost in learning about it, and using an out-of-tree module. I don't regret that trade-off at all.
                  From a pragmatic standpoint, btrfs hasn't lost my data, and looking at the commits and its overall state here and now, it does not look any scarier than any other Linux filesystem to me in terms of code quality, bugs fixed and so on. Sure, shit can happen - but it can happen just as well in any other code; just read the commit messages to get the idea. I guess if ZFS devs weren't all about marketing, their commit messages could be fun as well. Just like ext4, XFS, or whatever. Fortunately these days shit hits the fan either in very unusual configurations or with very low probability. That's what backups are for. Whatever, I don't believe in super-reliable storage and "should never happen". So I have a plan B, just in case. And this time B stands for Backups.



                  • Linus Torvalds' second-in-command, Greg KH, had this to say about ZFS On Linux, given the issue at hand:
                    My tolerance for ZFS is pretty non-existant. Sun explicitly did not want their code to work on Linux, so why would we do extra work to get their code to work properly?
                    Oh, I don't know... because ZFS is a wildly popular filesystem that people who use your code rely on for mission-critical data storage?

                    Because Btrfs has been out for ten years and is still not ready for prime-time, and Linux users desperately need the reliability and dependability of ZFS?

                    I gotta say, the toxic dude-culture of the kernel team is sickening. I know this quote doesn't define Greg or anyone else, but it is a fitting example, among countless others, of a broader toxic culture. I'm surprised #MeToo hasn't caught up with Linus, because people who talk the way he (still) talks, and treat people the way he does, even if only online, are often raging a--holes IRL. It's not that I think my decision to use free programs, or not, would change anyone's behavior. (And as a minor FLOSS contributor, they aren't all "free" for me, technically.)

                    Speaking only for myself, I've been unable to switch away from Windows (or macOS) for the last 15 years... not because of a lack of apps, compatibility, etc., but because of the toxic kernel culture. My choice; no one has to agree. But I know for a fact I'm not alone. (How much not alone, I have no idea.)

                    Don't tell me Linus has turned a corner and things are changing. Grade-A BS. That kind of wholesale change only happens when nearly everyone is replaced almost all at once, and/or dies off. That's why major cultural change only happens on the timescale of generations.



                    • If you use IBM filesystems then don't be surprised there are problems. I use FAT32 for my root and home partitions with my debian+XFCE desktop and I have never had any issues.

