ZFS On Linux 0.8.1 Brings Many Fixes, Linux 5.2 Compatibility Bits

  • #41
    Originally posted by oiaohm View Post

I have quoted one bug, but many bugs in the bugzilla are reported like that without the developers ever finding the root cause. The fun part is that the problem comes from the fact that modern SSDs and newer hard drives use larger internal blocks than the ones ZFS controls. So you write 4 KB, but in fact 4 MB of data is transferred up to cache and erased from storage before being rewritten. So there is a window in which 4 MB of data containing only 4 KB of new data can magically disappear.

CoW does not help you when you lose a shotgun spread of data. RAID 5 and RAID 6 rebuild checksums help with this shotgun problem.

This is the result of the filesystem block size being different from what the storage media is in fact using.

    http://codecapsule.com/2014/02/12/co...-benchmarking/
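The read-modify-write window described above can be sketched in a few lines. This is a toy model, not real drive firmware: the sizes, function names, and the "erase whole block, then program" behaviour are simplified illustrations of how a small logical write can briefly put a much larger physical block at risk.

```python
# Toy model of the read-modify-write window described above: the drive
# erases a whole physical block before reprogramming it, so a 4 KB
# logical write briefly puts 4 MB of unrelated cold data at risk.
# All sizes and names here are illustrative, not real firmware behaviour.

LOGICAL = 4 * 1024          # what the filesystem thinks it wrote
PHYSICAL = 4 * 1024 * 1024  # erase-block size inside the drive

def rewrite_block(block: bytearray, offset: int, new: bytes,
                  power_fails_mid_write: bool) -> bytearray:
    """Simulate erase-then-program of one physical block."""
    staged = bytearray(block)                  # old contents staged in volatile cache
    staged[offset:offset + len(new)] = new     # merge in the small update
    erased = bytearray(b"\xff" * len(block))   # whole block erased first
    if power_fails_mid_write:
        return erased                          # cache lost: old AND new data gone
    return staged                              # normal case: block reprogrammed

block = bytearray(b"A" * PHYSICAL)   # 4 MB of existing cold data
update = b"B" * LOGICAL              # the 4 KB the filesystem wrote

ok = rewrite_block(block, 0, update, power_fails_mid_write=False)
assert ok[:LOGICAL] == update
assert ok[LOGICAL:] == b"A" * (PHYSICAL - LOGICAL)

bad = rewrite_block(block, 0, update, power_fails_mid_write=True)
# Not just the 4 KB write is lost -- the untouched neighbourhood is too.
assert bad.count(b"A"[0]) == 0
```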


Hardware RAID does have a stack of horrible tricks to work around these problems. This is the problem: losing power to a hard drive or SSD while it is mid-write, moving data around, can basically shotgun-blast your data storage, taking out a mixture of new and old data. CoW is not strong enough to resist that. RAID 6 double-parity RAID was not made for no good reason, and RAID rebuilds taking ages after a power outage are not for no good reason either. They are designed around the idea that you have not powered down correctly, your data has been shotgun-blasted, and it could be missing many pieces.
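The value of that second parity set can be shown with simple XOR arithmetic. A minimal sketch, purely illustrative: with single (RAID 5-style) XOR parity you can rebuild exactly one missing disk; lose two at once and the single equation is underdetermined, which is the gap RAID 6's independent second parity closes.

```python
# Why RAID 6 adds a second parity block: single XOR parity (RAID 5 style)
# rebuilds exactly one missing disk. Two simultaneous losses leave one
# equation with two unknowns. Toy example with tiny 2-byte "disks".

from functools import reduce

def xor_parity(chunks):
    """XOR all chunks together, byte-column by byte-column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

data = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]   # three data "disks"
p = xor_parity(data)                              # one parity "disk"

# One disk lost: XOR of the survivors plus parity rebuilds it exactly.
lost = 1
survivors = [c for i, c in enumerate(data) if i != lost]
rebuilt = xor_parity(survivors + [p])
assert rebuilt == data[lost]

# Two disks lost: one XOR equation, two unknowns -- unrecoverable.
# RAID 6 stores an independent second parity (the Reed-Solomon "Q"),
# giving two equations so two simultaneous failures are still solvable.
```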

The idea that important storage can go without a UPS because you have ZFS is fatally wrong. The low level is a total ass.


The device-mapper level is able to do block-level encryption, compression and dedup. VDO from Red Hat provides dedup and compression, which can then be exported remotely by iSCSI and other methods by the layers under it.

So yes, I understood exactly what you said; you are clueless about how much device mapper on Linux can in fact do.

Lol, you don't have a clue. You're using 1990s filesystem conventions.

ZFS has a variable block size, and it's tunable. Not that this is a problem with "shotgunning data" or whatever the hell that is (ZFS transaction groups?), as ZFS also has a checksum on every block (even the metadata). :rolleyes: Look into how ZFS block pointers work. Yes, I can see why the devs would ignore sooo many bugs, but you can try it yourself: format a USB stick with ZFS and see if you can get your data to die by unplugging it. (Maybe you can, I don't know; it's a good test. I'll bet you money it is more resilient than every other filesystem, though.)
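The block-pointer idea referenced here can be sketched briefly. This is a toy with invented names using SHA-256, not ZFS's real on-disk format (ZFS defaults to fletcher4 and roots its Merkle-like checksum tree in the uberblock); it only illustrates the principle that the pointer, not the block, carries the checksum, so corruption is caught on read before the data is trusted.

```python
# Sketch of the block-pointer principle: each pointer stores the checksum
# of the block it references, so a corrupt block fails verification on
# read. Toy code with invented names; not ZFS's actual on-disk layout.

import hashlib

def make_ptr(block: bytes) -> dict:
    """Build a 'block pointer' holding the checksum of the target block."""
    return {"checksum": hashlib.sha256(block).hexdigest()}

def read_block(block: bytes, ptr: dict) -> bytes:
    """Return the block only if it matches the checksum in its pointer."""
    if hashlib.sha256(block).hexdigest() != ptr["checksum"]:
        raise IOError("checksum mismatch: block is corrupt")
    return block

data = b"important records"
ptr = make_ptr(data)
assert read_block(data, ptr) == data    # intact block passes verification

corrupted = b"importYnt records"        # a single flipped byte
try:
    read_block(corrupted, ptr)
    detected = False
except IOError:
    detected = True
assert detected                         # corruption caught on read
```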

RAID-6 is obsolete and deprecated (or it will be very soon); RAID-7.3 is the successor. (Most do nested levels for performance, though, striped mirror configurations.)

    It's senseless to explain this to you though. You have made your point very clear. You're a home user and you like stroking the penguin. Good for you.


    • #42
      Originally posted by k1e0x View Post
Lol, you don't have a clue. You're using 1990s filesystem conventions.
Shotgunning is not talking about the filesystem.

      Originally posted by k1e0x View Post
RAID-6 is obsolete and deprecated (or it will be very soon); RAID-7.3 is the successor. (Most do nested levels for performance, though, striped mirror configurations.)
      https://searchstorage.techtarget.com...ependent-disks
RAID 7 does not replace RAID 6; it is meant to replace RAID 3 and 4. This proves you don't have a clue about the low level. Going to RAID 7 drops you back to RAID 4's single parity, all collected together. The RAID 6 replacement is most likely going to be RAID 8, but that is not set in stone yet.

      Originally posted by k1e0x View Post
      It's senseless to explain this to you though. You have made your point very clear. You're a home user and you like stroking the penguin. Good for you.
I am a data recovery person. I have seen ZFS destroyed by SSD shotgunning. I have also been playing with new HDDs that are not on the market yet. Test samples go out to data recovery firms before product releases, so we can tell you how drives you will not be able to get for a year will ruin your data storage.

      Originally posted by k1e0x View Post
format a USB stick with ZFS and see if you can get your data to die by unplugging it.
LOL, moron home user answer. A USB stick is different from an internal SSD in how aggressive the wear levelling is. The real test would be to put an internally rated SSD in a USB dock/caddy and keep unplugging it. Oh yes, done enough times this will break ZFS, in fact any filesystem, while writes are going on. Even the partition type identification sectors can disappear, despite never being written, just due to the wear levelling inside the SSD defragmenting and the required rewrites due to flash fade. When I say old data disappears in shotgunning, it is random data across the complete history of the drive.

I like how people say ZFS checksumming is detecting bitrot; in a lot of cases it is detecting drives losing power and not completing their write cycles, because the users have too much faith, running solutions that should have a UPS without one because they are using ZFS. Yes, ZFS is sending up smoke signals about a problem, but the cause is being misattributed.

Originally posted by k1e0x View Post
(Maybe you can, I don't know; it's a good test. I'll bet you money it is more resilient than every other filesystem, though.)
ZFS does not beat a filesystem sitting on RAID 6 for resilience. I mean, anything, including VFAT sitting on RAID 6, beats ZFS for resilience.

ZFS RAID-Z is based around RAID 5; it has a single set of parity data instead of RAID 6's double set. That extra parity data in RAID 6 was added to increase resilience. Like it or not, ZFS RAID is weaker than RAID 6, and this is partly because when RAID-Z was designed, RAID 6 did not exist. This points to the problem of doing this stuff inside the filesystem: the external RAID levels keep moving forwards.

RAID 7 is more of a competitor for RAID-Z, as an upgraded RAID 3/4. RAID-Z is in a lot of ways based around RAID 3/4 ideas.
