ZFS On Linux With Ubuntu 12.04 LTS

  • #11
    It is rather sad that a number of your benchmarks were slower on the SSD than the HDD.

    You need a better SSD. OCZ Solid 2? Ughhh. There is no excuse for an SSD being slower than an HDD with the SSDs that are available today.



    • #12
      A thank you

      Please accept my thank you, despite my criticism, for the hard work and effort you put into covering a wide range of topics, reviews, benchmarks and news tidbits.



      • #13
        Honestly, you are one of very few sites that actually monitor and test Solaris-derived technologies and report on these, which shows a level of maturity. But that brings with it responsibility for correct testing and reporting. I am frustrated that Phoronix, one of the few sites that report on Solaris/BSD/ZFS etc, DOESN'T DO IT RIGHT!
        A lot of details have to be taken care of when doing such benchmarking, and the methodology needs to be public and detailed, including notes on whether caching is relevant or not (otherwise the article won't be as informative as it could be) ... the last time Michael did benchmarks on this subject, he prompted the creation of this wiki → http://wiki.freebsd.org/BenchmarkAdvice

        In any case, those numbers still give some idea of the state of ZFS vs Btrfs (yes, ext4 is put there as a reference point; it doesn't play in the same league as those filesystems) ...

        ---

        Michael, could you please write a small article about the new ZFS features? They were incorporated into FreeBSD not so long ago, and it would be nice to mention them.
        Here's the IllumOS presentation about the feature → https://www.youtube.com/watch?v=REzvy59jQnw

        And btw, there are some patches around to support the famous boot environments from Solaris; it's still a work in progress, and some people even talk about forking off a different OS (like IllumOS) ... I'm having trouble finding that discussion on the mailing list, but as soon as I find it, I'll post it. // Edit → Here ( http://docs.freebsd.org/cgi/getmsg.c...reebsd-hackers ), there's also manageBE
        Last edited by vertexSymphony; 27 June 2012, 07:35 PM.



        • #14
          Originally posted by hartz View Post
          It is not the CDDL that restricts license type mixing, it is the GPL (as evidenced by the fact that a: everything in the linux kernel must be GPL, and b: ZFS under CDDL is already included with the BSD kernel and Mac OSX.)

          I wish Linus could change the linux kernel to be under a more liberal license. But alas I believe even if he wanted to, he couldn't because, as far as I understand it, he would have to get permission from all people who have ever contributed anything to the Linux kernel.
          This is largely semantics.

          The facts are:

          1. The linux kernel has existed under the GPLv2 for a long time and clearly isn't changing (and probably can't, given all the different contributors).
          2. Later on, Sun came along and created a license that was incompatible with the GPL.

          It's quite clear that incompatibility is the entire reason for the CDDL's existence. If the linux kernel was compatible, Sun simply wouldn't have open sourced ZFS, or they would have created another license that was incompatible with whatever other license linux was using.



          • #15
            Originally posted by hartz View Post
            It is not the CDDL that restricts license type mixing, it is the GPL (as evidenced by the fact that a: everything in the linux kernel must be GPL, and b: ZFS under CDDL is already included with the BSD kernel and Mac OSX.)

            I wish Linus could change the linux kernel to be under a more liberal license. But alas I believe even if he wanted to, he couldn't because, as far as I understand it, he would have to get permission from all people who have ever contributed anything to the Linux kernel.
            Hold on a second. The CDDL was designed to be incompatible with the GPL.



            • #16
              Hold on. If ZFS beats the heck out of the competition, the results are "too good to be true" and suspect, and maybe the ZFS code is doing the wrong thing? And if ZFS loses out, it's not good enough?

              What a load of crap! Use and learn ZFS first, and then test it. Compare the performance of its features with other filesystems.



              • #17
                Originally posted by phoronix View Post
                Phoronix: ZFS On Linux With Ubuntu 12.04 LTS

                It has been a while since last benchmarking the ZFS file-system under Linux, but here's some benchmarks of the well-known Solaris file-system on Ubuntu 12.04 LTS and compared to EXT4 and Btrfs when using both a hard drive and solid-state drive.

                http://www.phoronix.com/vr.php?view=17546
                I believe that I can explain why these results are "too good to be true". The performance differences in the read I/O benchmarks are the result of the page replacement algorithms being employed. ZFS has ARC while ext4 and btrfs have LRU. ARC is more conservative than LRU when flushing its cache, which should boost performance in general.

                The situations where ZFS outperforms its competition in Michael's testing are solely the result of ARC being used in place of LRU. The situations where ZFS does not do quite as well are likely the result of any of the following:

                • They heavily favor LRU. Whenever you try to make your algorithms more intelligent, you will have cases where you make things worse. The idea is to make things worse only in the least useful areas.
                • They are caused by inefficiencies in the LLNL ZFS code. Currently, we do double locking. One set of locks is from Solaris and another is from Linux. These will eventually be consolidated, but there is a performance penalty.
                • They are write intensive, in which case, you should be using a SLOG device to sequentialize transaction group commits. ZFS provides awesome write performance if you are willing to configure your hardware properly. Without that, you will have mediocre write performance.
                • ashift was not set to proper sector sizes, which gives ZFS an additional performance penalty. I have been told that this effect can be measured to be 40% in some tests.
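
                For the ashift and SLOG points above, here is a rough sketch of how this could look in practice. The device names (sda, sdb, nvme0n1) are placeholders, and the zpool command is only echoed rather than executed, since the right invocation depends entirely on your hardware:

```shell
# The physical sector size can usually be read from sysfs, e.g.:
#   cat /sys/block/sda/queue/physical_block_size
SECTOR=4096

# ashift is log2 of the sector size: 512 -> 9, 4096 -> 12.
ASHIFT=0
S=$SECTOR
while [ "$S" -gt 1 ]; do
    S=$((S / 2))
    ASHIFT=$((ASHIFT + 1))
done
echo "ashift=$ASHIFT"

# A pool created with a matching ashift and a dedicated SLOG device for the
# ZIL would look roughly like this (shown, not executed; devices are placeholders):
echo "zpool create -o ashift=$ASHIFT tank mirror sda sdb log nvme0n1"
```

                With a mismatched ashift, every small write can turn into a read-modify-write cycle on 4K-sector drives, which is where the performance penalty mentioned above comes from.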


                ZFS is a replacement not only for a filesystem, but also for a logical volume manager, RAID, and a write cache. However, Michael is testing it as if it were a filesystem from the 1980s, which is wrong. ZFS is superior to its competitors in terms of both reliability and performance per dollar, yet none of that is being tested. Michael should redesign his tests to conform to ZFS, rather than expect ZFS to conform to his tests.
                Last edited by ryao; 28 June 2012, 04:20 PM.



                • #18
                  Originally posted by finalzone View Post
                  Hold on a second. The CDDL was designed to be incompatible with the GPL.
                  http://kerneltrap.org/node/8066/
                  Perhaps it was then, perhaps not now.

                  Oracle's largest installed base is on Linux, not Solaris. They know what side their bread is buttered on.

                  The problem with the code being released under the CDDL is that it may now be difficult for Oracle to change the licensing, depending on how any third-party copyright assignments were made (the same issue as with the many contributors to the Linux kernel and the GPL).



                  • #19
                    Originally posted by devsk View Post
                    What a load of crap! Use and learn ZFS first, and then test it. Compare the performance of its features with other filesystems.
                    Responses like this are not helping things.

                    There's a very active group of ZFS on linux users and we're all concerned about seeing accurate benchmarking. All the Phoronix guys need to do is ask for some assistance and it'll be freely given.

                    Main hints:

                    0: ZFS is intended for disk arrays. A single-drive configuration is not something that would ever be used in production (even in a SOHO environment).

                    1: ZFS works best when given entire disks, not partitions.

                    2: Any machine using ZFS needs a fair amount of RAM. If deduplication and compression are enabled, then it needs a lot more RAM than an ext4 system. These are necessary tradeoffs. Rule of thumb: 4GB + 1GB per TB of disk; double that if deduplication is enabled.

                    3: Tune your ashift= value. It works better at 12-13 than the default inherited from Solaris, which is tuned for 512-byte sectors, and may be better at higher values for larger files.

                    4: An SSD L2ARC and SSD ZIL make a _large_ performance difference. Don't skimp on the L2ARC. A larger L2ARC will help compensate for a lack of RAM when deduplicating. Even a relatively slow SSD will make a difference.

                    5: Hardware RAID controllers are a liability; they get in the way. Don't use a hardware RAID array, as it kills about half the advantages ZFS can offer (if you must use a hardware RAID controller, set the drives up in JBOD mode or create multiple single-drive RAID0 arrays). About the only thing that _is_ useful is a battery-backed write cache.

                    6: Make sure you use optimum disk layouts (there are optimum numbers for each level of raidz)
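
                    As a rough illustration of hint 2's rule of thumb, and of the kind of pool layout hints 1, 3, 4 and 5 describe (the pool size and all device names are hypothetical, and the zpool command is only echoed, not run):

```shell
# Rule of thumb from hint 2: 4 GB base + 1 GB per TB of disk,
# doubled when deduplication is enabled.
POOL_TB=12          # hypothetical pool size in TB
DEDUP=1             # 1 = dedup enabled, 0 = disabled

RAM_GB=$((4 + POOL_TB))
if [ "$DEDUP" -eq 1 ]; then
    RAM_GB=$((RAM_GB * 2))
fi
echo "recommended RAM: ${RAM_GB} GB"

# A tuned pool along these lines: whole disks, ashift for 4K sectors,
# an L2ARC cache device and a mirrored SLOG (command shown rather than run):
echo "zpool create -o ashift=12 tank raidz2 sda sdb sdc sdd sde sdf cache nvme0n1 log mirror nvme1n1 nvme2n1"
```

                    The point of echoing rather than running is that the right vdev layout and device names depend on your hardware; the structure (whole disks, cache vdev, mirrored log vdev) is what carries over.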

                    ZFS is designed to detect and _fix_ silent disk errors (data corruption that passes ECC; this is estimated to be a daily occurrence on terabyte-scale hard drives at quoted correction rates). Hardware RAID controllers might detect an error, but that's about it.

                    Even in worst-case performance scenarios, ZFS should be almost bulletproof for all redundant disk layouts. It's designed from the ground up with the notion that storage media is unreliable. I have abused my ZFS filesystems every way I can think of - including pulling multiple disks out of a live filesystem under full r/w load - and the worst thing that's happened is that it shut down; no data was lost when the drives were plugged back in. Sometimes that means performance penalties, but a fast filesystem is no use if it loses data (Btrfs and ext4-on-mdraid6 both died irreparably under the same tests).

                    It'd be good to see a redone set of benchmarks for a properly tuned ZFS setup - and bear in mind that ZFS isn't just a "filesystem" - it's a complete storage environment which replaces RAID + partitions + LVM + filesystem and is intended for multi-TB scale installations (I run 12TB at home on 5400RPM drives, and several hundred TB at work...)
                    Last edited by stoatwblr; 28 June 2012, 10:02 PM.



                    • #20
                      Originally posted by stoatwblr View Post
                      There's a very active group of ZFS on linux users and we're all concerned about seeing accurate benchmarking. All the Phoronix guys need to do is ask for some assistance and it'll be freely given.
                      I maintain the ZFS kernel modules in Gentoo Linux and I am also responsible for some of the patches in the ZFSOnLinux shortlog. If Michael contacted me by email, I would be more than happy to describe how to do proper benchmarks.

