Btrfs Gets Talked Up, Googler Encourages You To Try Btrfs


  • #71
    Originally posted by ryao View Post
    ZFS does not have that problem. If you had an issue, you likely should have gotten in touch with the developers.
    I did. The FreeNAS people were helping me with my server, and they told me it was too hot and that I had to power it down and take it offline. Months later, when I could afford the hundreds of dollars to address the heat issues, I put things back together, but my RAIDZ2 wasn't importing. I took this to the FreeNAS people, and they told me that ZFS is volatile, and that even though my server had been powered off, I still (somehow) needed to run scrubs because ZFS can't handle not being in active use.

    The server wasn't flash-based; it had 6x3TB drives connected to the motherboard's SATA ports, with no hardware RAID controllers or anything. I built it to the recommended standard, with the exception of ECC RAM, which I couldn't afford.



    • #72
      As for changing the license: Linux was here way before CDDL/ZFS, and Sun probably made the license incompatible with the GPL on purpose, hoping to blunt the competition Linux posed to their Solaris. But now we can see: where are Sun and their Solaris? And where is Linux? Sun plan status: FAIL. Linux is strong enough to go its own way rather than play second fiddle to anyone. It's especially interesting to look at the FreeBSD guys, who ditched GCC in favor of Clang because of the GPL, yet don't mind using ZFS under the CDDL, even though the CDDL is weak copyleft (what about the traditional BSD talk of perfect freedom, huh?), similar to the GPL in overall idea but totally Sun-inclined and incompatible with it. What double standards. Should I say the FreeBSD people look like a bunch of pathetic losers after such moves?

      And disabling CoW is ONLY meant for cases where the program on top of the filesystem does its own journalling/CoW. That usually means something like a database (probably the primary use case, since btrfs was architected at Oracle) or VM images (hypervisors do their own CoW so they can take snapshots). Disabling CoW for ordinary data is shooting yourself in the foot: no journalling means inconsistent data/metadata and all the strange failures that come from data corruption.
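
      To make the "does CoW on its own" case concrete: the usual way to handle this on btrfs is to mark the directory that will hold the database files or VM images with the NOCOW attribute before anything is written into it. A rough sketch in Python, assuming the chattr/lsattr tools are installed; /srv/vm-images is a made-up path on a btrfs mount:

          import subprocess

          # Hypothetical directory that will hold VM images or database files.
          vm_dir = "/srv/vm-images"

          # chattr +C sets the btrfs NOCOW attribute. It only takes effect for
          # files created after the flag is set (or for still-empty files), so
          # mark the directory before putting any data into it.
          subprocess.run(["chattr", "+C", vm_dir], check=True)

          # lsattr -d shows the directory's own attributes; a 'C' in the output
          # means NOCOW is set.
          print(subprocess.run(["lsattr", "-d", vm_dir],
                               capture_output=True, text=True, check=True).stdout)
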
      Last edited by 0xBADCODE; 22 August 2014, 10:39 AM.



      • #73
        Originally posted by 0xBADCODE View Post
        It's especially interesting to look at the FreeBSD guys, who ditched GCC in favor of Clang because of the GPL, yet don't mind using ZFS under the CDDL, even though the CDDL is weak copyleft (what about the traditional BSD talk of perfect freedom, huh?), similar to the GPL in overall idea but totally Sun-inclined and incompatible with it. What double standards. Should I say the FreeBSD people look like a bunch of pathetic losers after such moves?
        Let's be fair here: ZFS has been an option on FreeBSD since 2008, with the release of version 7, yet FreeBSD still defaults to UFS six years after the fact. Further, they waited quite a while, until Clang was finally in a good enough state for their purposes, before they made the switch. Also, the only permissively licensed CoW filesystem at this point is HAMMER and its successor HAMMER2, which they'll probably adopt when it's ready.



        • #74
          Originally posted by drSeehas View Post
          I know, but what is the problem with not distributing ZFS with the kernel?
          Well, either you don't include it in the distribution, and the user has to download it separately as a kernel module, which is not very practical, especially for, say, a default filesystem; or you run it as a userspace component, which costs a huge amount of performance for something as low-level as a filesystem.
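
          Not that it changes the trade-off, but the first route is easy to spot from userspace: once the out-of-tree module is loaded, ZFS shows up in the kernel's filesystem list. A minimal, Linux-only sketch (it just reads /proc/filesystems):

              # Tiny check: is ZFS registered with the running kernel on this box?
              # If it is, the kernel-module route was taken; if not, only a
              # userspace/FUSE implementation (with the performance cost mentioned
              # above) would be available here.
              def zfs_in_kernel() -> bool:
                  with open("/proc/filesystems") as f:
                      return any(line.split()[-1] == "zfs" for line in f if line.strip())

              print("zfs registered with the kernel:", zfs_in_kernel())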



          • #75
            Originally posted by Tired_ View Post
            I did. The FreeNAS people were helping me with my server, and they told me it was too hot and that I had to power it down and take it offline. Months later, when I could afford the hundreds of dollars to address the heat issues, I put things back together, but my RAIDZ2 wasn't importing. I took this to the FreeNAS people, and they told me that ZFS is volatile, and that even though my server had been powered off, I still (somehow) needed to run scrubs because ZFS can't handle not being in active use.

            The server wasn't flash-based; it had 6x3TB drives connected to the motherboard's SATA ports, with no hardware RAID controllers or anything. I built it to the recommended standard, with the exception of ECC RAM, which I couldn't afford.
            I don't think that's possible. Standard PC components are either truly volatile (RAM), whose contents are lost immediately after shutdown, or non-volatile (disks), which keep their data as-is until the hardware fails. I cannot think of anything that would store data only for a limited amount of time, apart from specific hardware implementations (data caches and the like), but that would happen regardless of the software and should not happen on a normal shutdown.



            • #76
              Originally posted by Tired_ View Post
              I did. The FreeNAS people were helping me with my server, and they told me it was too hot and that I had to power it down and take it offline. Months later, when I could afford the hundreds of dollars to address the heat issues, I put things back together, but my RAIDZ2 wasn't importing. I took this to the FreeNAS people, and they told me that ZFS is volatile, and that even though my server had been powered off, I still (somehow) needed to run scrubs because ZFS can't handle not being in active use.

              The server wasn't flash-based; it had 6x3TB drives connected to the motherboard's SATA ports, with no hardware RAID controllers or anything. I built it to the recommended standard, with the exception of ECC RAM, which I couldn't afford.
              That seems very weird. My own experience with a 3x4TB ZFS pool that spends most of its time turned off (it's for backups and other long-term storage) is that it has had no issues at all so far. I even did a test rebuild and manually offlined some of the drives to test recovery, and all is still well.

              I also use ZFS on my crappy desktop Linux machine with 4 GB of RAM.

              And not to say you're wrong, but your story is pretty weird. I've never even heard of a filesystem that is volatile. Hard disks fail and may lose data, of course, but that's the device itself. I have hard disks that haven't been powered on in 10 years, and sometimes I turn one or two on and they still work and show no signs of lost data.



              • #77
                Originally posted by Tired_ View Post
                I did. The FreeNAS people were helping me with my server, and they told me it was too hot and that I had to power it down and take it offline. Months later, when I could afford the hundreds of dollars to address the heat issues, I put things back together, but my RAIDZ2 wasn't importing. I took this to the FreeNAS people, and they told me that ZFS is volatile, and that even though my server had been powered off, I still (somehow) needed to run scrubs because ZFS can't handle not being in active use.
                You clearly misunderstood something. Scrubs can save you from errors to a degree, even on really fucked-up disks, whereas any other filesystem, having no checksums, would just hand you the damaged data without a word. If your pool won't import, not being in active use is not the reason.
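
                For anyone following along, a scrub is one command to start and one to read back; a rough sketch around the stock zpool CLI ("tank" is a placeholder pool name):

                    import subprocess

                    pool = "tank"  # placeholder pool name

                    # Start a scrub: ZFS walks every block and verifies its checksum,
                    # repairing from redundancy (mirror copies or raidz parity) where it can.
                    subprocess.run(["zpool", "scrub", pool], check=True)

                    # "zpool status -v" reports scan progress plus per-vdev READ/WRITE/CKSUM
                    # counters; non-zero CKSUM counts mean corruption was detected.
                    print(subprocess.run(["zpool", "status", "-v", pool],
                                         capture_output=True, text=True, check=True).stdout)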



                • #78
                  Originally posted by Stellarwind View Post
                  You clearly misunderstood something.
                  I hoped that that was the case. Perhaps, out of the goodness of your heart, you could look over my threads with the FreeNAS people and help me make the connection. They are:
                  http://forums.freenas.org/index.php?...t-to-do.11392/ and http://forums.freenas.org/index.php?...-import.21896/



                  • #79
                    Originally posted by Tired_ View Post
                    I hoped that that was the case. Perhaps, out of the goodness of your heart, you could look over my threads with the FreeNAS people and help me make the connection. They are:
                    http://forums.freenas.org/index.php?...t-to-do.11392/ and http://forums.freenas.org/index.php?...-import.21896/
                    I read both of them, and it seems to me the pool failed due to bad cables and/or overheating. You have raidz2, which can survive the failure of two of the six drives, but you basically had at least four drives with issues, and you even wiped one clean. You already had metadata errors at the very beginning; you could hope that bringing the other drives back into the pool would correct them, but it didn't, and I assume those drives were offline for a while.

                    What likely happened is that you ran the system overheated for a long time, the disks started going bad, two got kicked out because of cabling, another had issues as well, and you didn't notice until there were errors and it was too late: the corruption had already happened.

                    Cyberjock's comments about hardware aging are not ZFS-related; it's more that everything breaks if left without care for too long. Besides, I don't think normally working drives would somehow lose data if left without power for a few years. Either way, it seems to me your pool was corrupted from the beginning, and it has nothing to do with the server being off for too long.
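
                    As a sanity check on those numbers, here is the back-of-the-envelope arithmetic for the 6x3TB raidz2 described above (plain Python, nothing pool-specific):

                        # Rough raidz2 arithmetic for the setup described above:
                        # six 3 TB drives, two drives' worth of parity.
                        # Ignores metadata and slop-space overhead.
                        drives, size_tb, parity = 6, 3, 2

                        usable_tb = (drives - parity) * size_tb  # ~12 TB usable
                        print(f"usable ~{usable_tb} TB; survives {parity} whole-drive failures")
                        print("a third failing (or silently corrupt) drive means data loss")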



                    • #80
                      For the record, I had 2x150GB drives back in the day, holding Windows partitions, that exceeded their temperature limits (it was an extremely hot summer here), and that RAID failed as well.
                      After reformatting, those drives worked fine and showed no problems besides a warning in SMART, so the disks might be OK, but the data was lost, and it had nothing to do with the filesystem.

