Running ZFS With CAM-based ATA On FreeBSD 8.1

  • Running ZFS With CAM-based ATA On FreeBSD 8.1

    Phoronix: Running ZFS With CAM-based ATA On FreeBSD 8.1

    As was mentioned in last Friday's article, Which Is Faster: Debian Linux or FreeBSD?, tests of FreeBSD atop the ZFS file-system (rather than UFS2+S) are currently underway. Those results are expected to be published in full later this week, with the ZFS disk performance compared directly against UFS2+S, UFS2+J, and Ubuntu Linux with the EXT4 and Btrfs file-systems. Today, though, we have a few ZFS performance numbers to share as we look at the performance of the new CAM-ATA sub-system on FreeBSD.

    http://www.phoronix.com/vr.php?view=15148

  • #2
    Very interesting. I wonder what the devs have to say about the results.



    • #3
      Probably nothing? Remember, FreeBSD devs told everyone and the world that softupdates were better than journaling and 'free', without performance impact. Until others showed them: no.

      Did they retract? Did they do something about it? No, just silence.



      • #4
        Originally posted by energyman
        Probably nothing? Remember, FreeBSD devs told everyone and the world that softupdates were better than journaling and 'free', without performance impact. Until others showed them: no.
        When has that been shown? I've never seen a benchmark between UFS with softupdates and UFS with journaling.



        • #5
          There have been tests comparing UFS with softupdates and without. Since there is no journaling for UFS, the rest of your post answers itself.



          • #6
            Here is the only benchmarking test between UFS journaling and softupdates that I could find quickly. Maybe this is something Phoronix should test? After all, most popular Linux filesystems use journaling, so it would be good to know how well it stands up against softupdates.



            • #7
              http://bulk.fefe.de/lk2006/bench.html



              • #8
                Originally posted by energyman
                Since there is no journaling for UFS
                There is. It's called gjournal.
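
                A minimal sketch of setting it up, assuming the geom_journal module and a spare partition (the ada0s1d device name is only an illustration):

                  # load the journaling GEOM class and put a journal on the partition
                  kldload geom_journal
                  gjournal label /dev/ada0s1d
                  # create UFS2 with the gjournal flag set, then mount it async
                  # (gjournal makes async mounts safe)
                  newfs -O 2 -J /dev/ada0s1d.journal
                  mount -o async /dev/ada0s1d.journal /mnt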



                • #9
                  Oh, sorry. And when was it introduced?



                  • #10
                    A request to the Author:

                    I just wanted to point out a difference between GNU/Linux and FreeBSD systems. Whereas Linux refers to just the kernel, FreeBSD refers to a fully functional operating system, of which the kernel is just a small component. The desktop environment (GNOME, KDE, ...), X.Org, and even the bash shell are third-party applications that can either be installed as binaries or compiled from source with the excellent Ports infrastructure. Therefore, it would be great if the following sentence were corrected to:

                    --
                    The software stack being tested consists of the FreeBSD 8.1-RELEASE x86_64 base system with KDE 4.4.45, X.Org Server 1.7.5, and GCC 4.2.1.
                    --

                    Please note the change: the word "kernel" is replaced with "base system with".
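
                    As an illustration of that split between the base system and third-party software (assuming the FreeBSD 8.x-era package tools), the same shell can come in either as a prebuilt binary or through Ports:

                      # fetch and install a prebuilt binary package
                      pkg_add -r bash
                      # or build the same shell from source via the Ports tree
                      cd /usr/ports/shells/bash && make install clean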

                    Please keep up the good work!



                    • #11
                      Originally posted by energyman
                      when was it introduced?
                      In 2008 with the release of FreeBSD 7.0.



                      • #12
                        Let me comment on the results.

                        Sorry for the late response; I have just noticed this thread. I believe some results in this test set are incorrect, and I would like to comment on them.

                        The first nine results look fine:
                        1. LZMA compression - not really an I/O test, as can be seen from the almost equal results.
                        2. Gzip compression - same.
                        3. Compile bench - not sure what exactly this test does, but OK. For non-threaded I/O, NCQ may give slightly lower performance at the drive firmware level.
                        4. Postmark - a 44% benefit under parallel load is normal for CAM ATA because of NCQ.
                        5. Unpacking the kernel - unpacking is a single-threaded process with a lot of flushing. The reason for the small slowdown may be the same as in 3.
                        6. Write in 8 threads - CAM with NCQ won a bit, OK.
                        7/8. Write in 16/32 threads - increasing the number of threads makes the pattern more random, which penalizes legacy ATA, while NCQ in CAM probably compensates for it.
                        9. Write in 32 threads by 128MB - I can't explain why the results are slightly better than in 8, but CAM with NCQ still wins.

                        But the rest are not good:
                        10. Random write in 8 threads - for random tests, tiobench uses 4K blocks. No desktop drive (and especially no laptop drive) can do more than 200-300 random I/Os per second, so the best this test should show is about 1MB/s (roughly 250 IOPS x 4KB). Instead we see about 49MB/s in both cases. The explanation is trivial: all the data fit into the ZFS caches and were written almost sequentially on file close. This is just not a disk subsystem test.
                        11. Random write in 16 threads - due to the increased active data set, caching works worse, so we see lower speeds. But the speeds are still higher than physically possible, which means caching is still heavily used.
                        12. Random write in 4 threads by 128MB - as I have said, 25MB/s with legacy ATA can't be explained by anything except caching. Random write in 4 threads just can't be faster than random write in 16 threads in 11, so this result is wrong by definition. Most probably something affected the cache hit ratio between tests.
                        13/14. Reading in 16 threads by 64MB and 256MB - the only reason the results of these two tests could differ is cache hits.

                        So my conclusion: these tests did not account for cache effects. If that was intentional, then this is at best a cache-effectiveness comparison, not an ATA subsystem comparison. If it happened accidentally, then these results just do not mean anything.
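
                        One way to keep the cache from dominating such a run, assuming the standard FreeBSD ZFS tunables (the 512M figure is only an illustration), is to cap the ARC at boot and verify its size while testing:

                          # /boot/loader.conf: cap the ZFS ARC so the working set
                          # cannot simply live in cache (512M is an arbitrary value)
                          vfs.zfs.arc_max="512M"

                          # at runtime, check the cap and the actual ARC size
                          sysctl vfs.zfs.arc_max
                          sysctl kstat.zfs.misc.arcstats.size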



                        • #13
                          Some alternative benchmarks

                          To further ground my point, here are some of my own benchmark results. They were done on i386 9-CURRENT with 2GB of RAM. This memory-limited configuration was chosen intentionally to minimize cache effects and really compare the disk subsystems.

                          Threaded I/O Tester v0.3.3:
                          Test script:
                          http://people.freebsd.org/~mav/TEST.zfs
                          Legacy ATA results:
                          http://people.freebsd.org/~mav/TEST.ata.zfs
                          CAM ATA results:
                          http://people.freebsd.org/~mav/TEST.cam.zfs
                          The total data file size was set to 2GB to minimize cache effects. CAM ATA shows benefits of 30-50% in most of the numbers.
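
                          For reference, a tiotest invocation in this spirit might look like the following; the flags come from the Threaded I/O Tester suite named above, but the mount point, sizes, and operation counts here are only illustrative, not the values from the linked script:

                            # 8 threads x 256MB files = 2GB total, matching installed
                            # RAM so the working set cannot fit in cache; 4K blocks,
                            # 4000 random operations per thread
                            tiotest -d /mnt/test -t 8 -f 256 -b 4096 -r 4000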

                          RAID-test v1.2:
                          To compare disk subsystem performance independent of file systems, here are benchmarks of legacy and CAM ATA with random read, write, and mixed I/O requests of different sizes to the raw disk:
                          http://people.freebsd.org/~mav/TEST.raidtest
                          Here you can see an almost double speedup on read requests. Write requests do not benefit because they are already covered by the drive's enabled write cache.
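
                          A quick way to check whether NCQ and the write cache are in play on a given drive, assuming the FreeBSD 8.x tools (the device names are illustrative):

                            # under CAM ATA, the IDENTIFY data reports NCQ tag depth
                            # and write-cache support/state
                            camcontrol identify ada0 | grep -i -e queu -e cache

                            # under the legacy ata(4) driver, write caching is
                            # controlled by the hw.ata.wc tunable (1 = enabled)
                            sysctl hw.ata.wc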



                          • #14
                            Some numbers about UFS

                            I was asked to repeat the same tests with UFS, so here they are.

                            First I just ran the same benchmarks over UFS. Here are the results:
                            http://people.freebsd.org/~mav/TEST.ata.ufs
                            http://people.freebsd.org/~mav/TEST.cam.ufs
                            CAM ATA won, but it looks like 2GB of RAM is enough for UFS (unlike ZFS, especially on i386) to cache a significant amount of data in this situation, so these results cannot really be trusted.

                            So I repeated the tests after removing 1GB of RAM:
                            http://people.freebsd.org/~mav/TEST.ata.ufs.1GB
                            http://people.freebsd.org/~mav/TEST.cam.ufs.1GB
                            CAM ATA won again, but now with reasonable numbers.
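
                            As an aside, physically pulling RAM is not the only way to get a memory-limited setup like this; assuming the standard loader tunable, the kernel can be told to ignore memory above a limit:

                              # /boot/loader.conf: use only the first 1GB of RAM,
                              # approximating the removed-DIMM configuration above
                              hw.physmem="1G"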

                            For completeness I have also repeated the test with a block size of 16K (the default UFS block size), which allows UFS to avoid read-modify-write operations on random writes:
                            http://people.freebsd.org/~mav/TEST.ata.ufs.1GB.16k
                            http://people.freebsd.org/~mav/TEST.cam.ufs.1GB.16k
                            Here it can be seen that multithreaded reading benefits by up to a factor of two because of NCQ. Pure writes, covered by the disk write cache, are almost unaffected, confirming the previous raidtest results.
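
                            For the record, a minimal sketch of creating such a soft-updates UFS with an explicit 16K block size (the partition name is illustrative):

                              # 16384-byte blocks with the matching 2048-byte fragments;
                              # -U enables soft updates
                              newfs -U -b 16384 -f 2048 /dev/ada0s1d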

                            To Phoronix: in my tests I always try to validate and explain every aspect of the results. Until you do the same in your reviews, they won't be worth much.

                            PS: Note that this system was not really suitable for ZFS, so the numbers can be compared only with special care and understanding. I had no goal of comparing UFS and ZFS directly; they were made for completely different environments and each has its own benefits and requirements.

