ZFS File-System Tests On The Linux 3.10 Kernel

#16
Originally posted by PuckPoltergeist:
Self-healing as in recreating the filesystem? Or is this now fixed in ZFS?

A filesystem where you can't delete files when it is full is not what I would call rock stable. And this was observed on Solaris, not OpenSolaris or Linux. I'm amused that the ZFS hype is still around.

What are you talking about? Can you link some more information? That would be really helpful. This has nothing to do with hype; I just emphasize data integrity, and I can't possibly be aware of every last bug. And what alternative is there? Last time I tried BTRFS you could not even mount by label from GRUB, so if a disk fails in a RAID you can't even boot anymore. That's not production ready for me, and I don't even have to read a bug tracker to notice that.



#17
First: make sure the I/O scheduler is noop.
Second: turn off readahead.

The second one delivered an enormous performance boost.
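A minimal sketch of both tweaks on a ZoL box; the device name sda is illustrative, and since the poster doesn't say which readahead he means, both the block-device and the ZFS-level knobs are shown:
Code:
# Per-disk I/O scheduler (sda is an example member of the pool):
echo noop > /sys/block/sda/queue/scheduler

# "Readahead" can mean two things here:
blockdev --setra 0 /dev/sda                               # block-device readahead
echo 1 > /sys/module/zfs/parameters/zfs_prefetch_disable  # ZFS prefetch (ZoL module parameter)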

But hey, Phoronix benchmarked ext3 with barriers off because that was the "default", and it was only the default so ext3 would look good in benchmarks...



#18
By default, fs_mark writes a bunch of zeros to a file in 16 KB chunks, then calls fsync followed by close, followed by a mkdir call:
Code:
[pid 13710] write(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 16384) = 16384 <0.000020>
[pid 13710] fsync(5)                    = 0 <0.033889>
[pid 13710] close(5)                    = 0 <0.000005>
[pid 13710] mkdir("./", 0777)           = -1 EEXIST (File exists) <0.000005>
From my observations, fsync is slightly more expensive for ZFS, and this is where you see the hit in the fs_mark benchmarks. With sane, real-world amounts of fsync calls and possibly a few other tweaks (one is sketched below), ZoL is an extremely fast, stable, feature-rich, production-ready file system. Many people are using it with Linux and having great success.
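As a hedged illustration of one such tweak: ZFS exposes a per-dataset sync property that controls how synchronous requests like fsync are honored. The dataset name tank/scratch is made up, and disabling sync trades crash safety for speed, so it only belongs on expendable data:
Code:
# Check how the dataset currently honors synchronous writes:
zfs get sync tank/scratch

# For expendable data only: treat fsync as asynchronous (risks losing
# the last few seconds of writes on power failure):
zfs set sync=disabled tank/scratch

# Restore the default POSIX-compliant behavior:
zfs set sync=standard tank/scratch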

A 5-disk raidz1 pool:

Read sample:
Code:
dd if=sample1.mkv of=/dev/null bs=1M
4085+1 records in
4085+1 records out
4283797121 bytes (4.3 GB) copied, 15.3844 s, 278 MB/s
Write sample (read from an SSD):
Code:
time dd if=/root/sample2.mkv of=test bs=1M; time sync;
9428+1 records in
9428+1 records out
9886602935 bytes (9.9 GB) copied, 35.6332 s, 277 MB/s

real	0m35.635s
user	0m0.010s
sys	0m2.666s

real	0m2.665s
user	0m0.000s
sys	0m0.077s



#19
Originally posted by ZeroPointEnergy:
What are you talking about? Can you link some more information? That would be really helpful.

As for recreating the filesystem, that's from the OpenSolaris forum. There were enough postings about damaged filesystems where three, four or five suggestions were made, and the last was to recreate the filesystem and restore the backup. For me that's not production ready. Okay, it was OpenSolaris in these cases.
As for not being able to delete files on a full filesystem, that was an in-house problem.

Originally posted by ZeroPointEnergy:
This has nothing to do with hype; I just emphasize data integrity, and I can't possibly be aware of every last bug. And what alternative is there? Last time I tried BTRFS you could not even mount by label from GRUB, so if a disk fails in a RAID you can't even boot anymore. That's not production ready for me, and I don't even have to read a bug tracker to notice that.

I didn't try by label, but UUID worked for me. Label should work too; if it doesn't, it's a bug that needs to be reported.



#20
Originally posted by PuckPoltergeist:
I didn't try by label, but UUID worked for me. Label should work too; if it doesn't, it's a bug that needs to be reported.

Doesn't a UUID reference a partition? If that disk is gone, you have to edit your configuration and mount the other device in the RAID. With ZFS I can simply reference the pool by name, and I don't have to care which disks are involved or available, as long as there are enough to assemble the RAID. So far I have not found out how to achieve this with BTRFS, and the wiki doesn't help much here. I don't consider this some strange use case; it's pretty much the first thing you want to do if you use a RAID.
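For reference, this is the ZFS workflow the poster is describing; a minimal sketch, assuming a pool named tank and a dataset tank/home (both names are illustrative):
Code:
# Scan attached disks and import the pool purely by name; ZFS finds
# whichever member devices happen to be present:
zpool import tank

# Show which devices back the pool and whether it is degraded:
zpool status tank

# Datasets then mount by pool/dataset name, not by device:
zfs mount tank/home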



#21
Originally posted by smitty3268:
You mean like this ZFS on Linux test? I guess we should let you know now.

Reading comprehension is a rare gift these days...



#22
Originally posted by ZeroPointEnergy:
Doesn't a UUID reference a partition? If that disk is gone, you have to edit your configuration and mount the other device in the RAID....

No, the UUID is per filesystem; any disk in the array can be mounted via the same UUID.
For example, here is a two-disk RAID1 btrfs system (a single partition on each disk). In this case it's a two-disk system, but it would work just the same with 20 disks:

Code:
blkid /dev/sdb1
/dev/sdb1: UUID="04bf1179-a858-4ac9-935b-9279722f6b4a" UUID_SUB="f0983cb3-eb7e-45c3-b086-e58baa798d45" TYPE="btrfs"
blkid /dev/sda1
/dev/sda1: UUID="04bf1179-a858-4ac9-935b-9279722f6b4a" UUID_SUB="5062890f-f8fe-46da-b90c-68c2bf096403" TYPE="btrfs"
And the fstab, with separate subvolumes for root and home:
Code:
UUID=04bf1179-a858-4ac9-935b-9279722f6b4a	    /       btrfs   defaults,compress=lzo,autodefrag,subvol=@       	0 0
UUID=04bf1179-a858-4ac9-935b-9279722f6b4a	    /home   btrfs   defaults,compress=lzo,autodefrag,subvol=@home	0 0
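
And to address the failed-disk scenario directly: btrfs can mount the surviving mirror member with the degraded option. A minimal sketch, assuming /dev/sdb1 is the disk that died and /dev/sdc1 is its replacement (device names follow the blkid output above; the replacement name is made up):
Code:
# With /dev/sdb1 gone, mount the surviving RAID1 member in degraded
# mode, still addressed by the shared filesystem UUID:
mount -o degraded UUID=04bf1179-a858-4ac9-935b-9279722f6b4a /mnt

# Once a replacement disk is attached, add it and drop the dead member,
# which rebuilds the mirror onto the new device:
btrfs device add /dev/sdc1 /mnt
btrfs device delete missing /mnt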



#23
Originally posted by ZeroPointEnergy:
Doesn't a UUID reference a partition? If that disk is gone, you have to edit your configuration and mount the other device in the RAID. With ZFS I can simply reference the pool by name, and I don't have to care which disks are involved or available, as long as there are enough to assemble the RAID.

That's what UUIDs and labels are for, so as benmoran has already explained, it works this way. You do have to be aware that btrfs needs a device scan from userspace (e.g. in the initramfs), or you must list all devices of the RAID in the mount options (see the sketch below): https://btrfs.wiki.kernel.org/index.....2Fetc.2Ffstab
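
A minimal sketch of both options; the mount point and device names are illustrative, and the UUID is reused from the blkid output in post #22:
Code:
# Option 1: let userspace register all btrfs member devices first
# (an initramfs typically runs this before mounting):
btrfs device scan
mount UUID=04bf1179-a858-4ac9-935b-9279722f6b4a /mnt

# Option 2: skip the scan and name every member device explicitly:
mount -o device=/dev/sda1,device=/dev/sdb1 /dev/sda1 /mnt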



#24
Originally posted by xterminator:
I love BTRFS; I've used it on Fedora, and ZFS on FreeBSD.

ZFS is horrible (slow, difficult to configure, no accurate documentation), and it's even more so on FreeBSD. So much so that it seems like FreeBSD was never meant to use ZFS. Worse, the FreeBSD forum guys are real douchebags (no offense). I tried getting help from them, but all I received was "idiot", "Linux loser", "STFU, GTFO & RTFM", etc...

For BTRFS, it's the complete opposite. Sure, it's easier to corrupt the filesystem and lose data, but you have to remember that BTRFS is still in development, and even then it's doing really well. It's almost production ready.

From my experience, this applies to both filesystems. For both I had to search for proper documentation, and both I was able to corrupt easily and had to recreate, because there was no working fsck. Especially as ZFS still claims not to need one (SGI made the same claim about XFS a long time ago).

ZFS is the choice for Solaris and will stay that way. Btrfs looks to be the filesystem for Linux in the future; it is newer and suits Linux block devices better than ZFS.

