Phoronix Benchmarking recommendations

  • Phoronix Benchmarking recommendations

    I started this thread a while back asking the same question, but unfortunately no Phoronix member made it clear if they were interested in pursuing such tests: http://www.phoronix.com/forums/showthread.php?t=945

    I am still interested (as well as many other users, I imagine) in a series of controlled file system benchmarks. Possible candidates are: ZFS, XFS, JFS, NTFS, Reiser3, Reiser4, Ext3, Ext4, Btrfs, and VMFS. I would also like to see EncFS, Loop-AES, and Truecrypt included in all tests possible.

    Please speak up if there are any other worthy candidates for inclusion.

    Very few file system benchmarks are scientific. We should be taking everything into account - not just making a bunch of partitions on the same hard drive, for example, because as we know, seek times vary between the inner and outer edges of the disk platter.

    Not every file system listed can be tested in a strictly apples-to-apples configuration. Therefore, the most likely configuration - the one that emulates real-world setups - should be used. We could test with RAID arrays, with UPSs (for crash-recovery testing), and on hardware that may favor one file system over another. With all the potential these file systems give us, why not figure out how each one actually performs and compares in realistic circumstances?

    1) performance, read, write, etc.
    2) stability, load testing, etc.
    3) feature tests - resizing, what works, etc.
    4) organization and specifications of test criteria.

    We should all provide input in order to create a good controlled study. We should be using standards (like a vanilla kernel) in easy-to-reproduce configurations.
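
    As a starting point, here is a rough sketch of what one controlled pass could look like - the device, mount point, and workload are only placeholders, and the list of file systems would obviously grow to cover everything above:

    Code:
    #!/bin/sh
    # One controlled pass (run as root): same partition, same kernel, same
    # mount options for every file system, caches dropped before each timed run.
    DEV=/dev/sdb1          # always the same partition, so platter position stays constant
    MNT=/mnt/bench
    DATA=/usr/src/linux    # example workload: a kernel source tree
    RUNS=5

    for FS in ext3 ext4 xfs jfs reiserfs; do
        mkfs -t "$FS" "$DEV" || continue   # some mkfs back-ends need -f/-q to run unattended
        mount -o noatime "$DEV" "$MNT"
        for i in $(seq $RUNS); do
            sync; echo 3 > /proc/sys/vm/drop_caches   # start every run from cold caches
            /usr/bin/time -f "$FS copy run $i: %e s" cp -a "$DATA" "$MNT/copy-$i"
        done
        umount "$MNT"
    done

    Reusing the very same partition for every file system sidesteps the platter-position problem mentioned above, and averaging the runs smooths out the noise.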

    I hope one of you Phoronix officials responds. It seems you guys have the hardware and resources to pull something like this off. I'd be happy to help and run all the tests I can.
    Last edited by petabyte; 25 July 2008, 10:47 PM.

  • #2
    Oh, never mind. I forgot that these forums are dead.



    • #3
      petabyte, I have been preoccupied all week with OSCON and traveling. Sure, I am interested in running some of those tests, but right now I lack the time, with a backlog of about a dozen other hardware products I am currently in the midst of testing. Once that clears up in a few weeks, I can probably look at it then.

      Michael
      Michael Larabel
      https://www.michaellarabel.com/



      • #4
        I am in no hurry at all. I am glad you were able to reply. If you will be interested in a bit, just post back here then. Thanks.



        • #5
          Originally posted by petabyte
          I am still interested (as well as many other users, I imagine) in a series of controlled file system benchmarks. Possible candidates are: ZFS, XFS, JFS, NTFS, Reiser3, Reiser4, Ext3, Ext4, and VMFS. I would also like to see EncFS, Loop-AES, and Truecrypt included in all tests possible.

          Please speak up if there are any other worthy candidates for inclusion.
          Don't forget btrfs.



          • #6
            Ok, I'll keep editing the OP to compile any additions.



            • #7
              Hey, I've got an interest similar to the OP's, and I wonder whether we can do this:

              Firstly, the reason I'm interested is that I've grown bored with the sterile
              'This is way faster than that'
              'Oh no it isn't, that is way faster than this'
              discussions/flame-fests, and I'd like to get some definitive data, at least for the test cases that interest me.

              In particular, what I'm interested in is a few filesystems (the journalling ones, plus maybe ext2 as a reference), the different ways of setting them up (noatime and anything else that can affect performance), and platter mapping and partition layouts on a couple of sets of hardware.

              (I've got a feeling that one of the reasons that you occasionally see such strange assertions about filesystems is that, half the time, the people doing the testing didn't know enough to ensure a level playing field, in as much as a level playing field can exist. But I'll have a better idea when I've done some testing.)

              So, what I'm interested in getting is some indication of application-level performance (and not, for example, "synthetics" like seek time or interface bandwidth which really don't directly relate to what you see "sat at the console"). Is this something that's reasonably easy to do with your test suite?
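
              To make the "different ways of setting them up" part concrete, this is roughly the sort of mount-option sweep I mean, using ext3 purely as an example (device, mount point and tarball are placeholders):

              Code:
              #!/bin/sh
              # Sweep a few mount options on one file system (run as root) and
              # time an application-level job - unpacking a kernel tree - for each.
              DEV=/dev/sdb1
              MNT=/mnt/bench
              TARBALL=/tmp/linux-2.6.26.tar.bz2   # any real application workload will do

              for OPTS in "defaults" "noatime" "noatime,data=writeback"; do
                  mkfs.ext3 -q "$DEV"
                  mount -o "$OPTS" "$DEV" "$MNT"
                  sync; echo 3 > /proc/sys/vm/drop_caches
                  /usr/bin/time -f "ext3 ($OPTS): %e s" tar xjf "$TARBALL" -C "$MNT"
                  umount "$MNT"
              done

              Timing tar (or a compile, or whatever) rather than raw seek numbers keeps things at the application level I'm after.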



              • #8
                Originally posted by Chicken Fried
                (I've got a feeling that one of the reasons that you occasionally see such strange assertions about filesystems is that, half the time, the people doing the testing didn't know enough to ensure a level playing field, in as much as a level playing field can exist. But I'll have a better idea when I've done some testing.)
                Very true. I think most people really exaggerate the performance difference between the newest and older file systems for 95% of testing criteria. I do believe, however, that testing new features should be a forefront goal of these benchmarks - the new file systems, which are of course not much different in read and write speeds, can do things we've never been able to (easily) do before.



                • #9
                  Forgot to mention Fefe's more recent benchmarks:


                  I had mistakenly said that no substantial benchmarking on these file systems had been done since 2003: http://bulk.fefe.de/scalability/

                  Hopefully, I will get some time to benchmark soon.



                  • #10
                    Originally posted by petabyte
                    I am still interested (as well as many other users, I imagine) in a series of controlled file system benchmarks. Possible candidates are: ZFS, XFS, JFS, NTFS, Reiser3, Reiser4, Ext3, Ext4, Btrfs, and VMFS. I would also like to see EncFS, Loop-AES, and Truecrypt included in all tests possible.
                    LINUX FILESYSTEM BENCHMARKS
                    (includes Reiser4 and Ext4)






                    Some Amazing Filesystem Benchmarks. Which Filesystem is Best?


                    RESULT: With compression, REISER4 absolutely SMASHED the other filesystems.

                    No other filesystem came close (not even remotely close).

                    Using REISER4 (gzip) rather than EXT2/3/4 saves you a truly amazing 816 - 213 = 603 MB (a 74% saving in disk space), and this with little or no loss of performance when storing 655 MB of raw data. In fact, substantial performance increases were achieved in the bonnie++ benchmarks.

                    We use the following filesystems:

                    REISER4 gzip: Reiser4 using transparent gzip compression.
                    REISER4 lzo: Reiser4 using transparent lzo compression.
                    REISER4: Standard Reiser4 (with extents).
                    EXT4 default: Standard ext4.
                    EXT4 extents: ext4 with extents.
                    NTFS3g: Szabolcs Szakacsits' NTFS user-space driver.
                    NTFS: NTFS with Windows XP driver.

                    Disk Usage in megabytes. Time in seconds. SMALLER is better.

                    Code:
                    .-------------------------------------------------.
                    |File         |Disk |Copy |Copy |Tar  |Unzip| Del |
                    |System       |Usage|655MB|655MB|Gzip |UnTar| 2.5 |
                    |Type         | (MB)| (1) | (2) |655MB|655MB| Gig |
                    .-------------------------------------------------.
                    |REISER4 gzip | 213 | 148 |  68 |  83 |  48 |  70 |
                    |REISER4 lzo  | 278 | 138 |  56 |  80 |  34 |  84 |
                    |REISER4 tails| 673 | 148 |  63 |  78 |  33 |  65 |
                    |REISER4      | 692 | 148 |  55 |  67 |  25 |  56 |
                    |NTFS3g       | 772 |1333 |1426 | 585 | 767 | 194 |
                    |NTFS         | 779 | 781 | 173 |   X |   X |   X |
                    |REISER3      | 793 | 184 |  98 |  85 |  63 |  22 |
                    |XFS          | 799 | 220 | 173 | 119 |  90 | 106 |
                    |JFS          | 806 | 228 | 202 |  95 |  97 | 127 |
                    |EXT4 extents | 806 | 162 |  55 |  69 |  36 |  32 |
                    |EXT4 default | 816 | 174 |  70 |  74 |  42 |  50 |
                    |EXT3         | 816 | 182 |  74 |  73 |  43 |  51 |
                    |EXT2         | 816 | 201 |  82 |  73 |  39 |  67 |
                    |FAT32        | 988 | 253 | 158 | 118 |  81 |  95 |
                    .-------------------------------------------------.
                    WHAT THE NUMBERS MEAN:

                    The raw data (without filesystem meta-data, block alignment wastage, etc) was 655MB.
                    It comprised 3 different copies of the Linux kernel sources.

                    Disk Usage: The amount of disk used to store the data.
                    Copy 655MB (1): Time taken to copy the data over a partition boundary.
                    Copy 655MB (2): Time taken to copy the data within a partition.
                    Tar Gzip 655MB: Time taken to Tar and Gzip the data.
                    Unzip UnTar 655MB: Time taken to UnGzip and UnTar the data.
                    Del 2.5 Gig: Time taken to Delete everything just written (about 2.5 Gig).

                    Each test was performed 5 times and the average value recorded.
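
                    For reference, the five timed operations map onto something like this on a mounted test partition (paths are placeholders; each command would be repeated five times and averaged, as stated above):

                    Code:
                    #!/bin/sh
                    # Rough mapping of the operations described above. $MNT is the file
                    # system under test; $SRC holds the ~655 MB of kernel sources on a
                    # second partition.
                    MNT=/mnt/bench
                    SRC=/mnt/source/data

                    /usr/bin/time -f "copy (1), across partitions: %e s" cp -a "$SRC" "$MNT/copy1"
                    /usr/bin/time -f "copy (2), within partition:  %e s" cp -a "$MNT/copy1" "$MNT/copy2"
                    du -sm "$MNT/copy1"                                 # disk usage, in MB
                    /usr/bin/time -f "tar + gzip:     %e s" tar czf "$MNT/data.tar.gz" -C "$MNT" copy1
                    mkdir "$MNT/extracted"
                    /usr/bin/time -f "ungzip + untar: %e s" tar xzf "$MNT/data.tar.gz" -C "$MNT/extracted"
                    /usr/bin/time -f "delete ~2.5 GB: %e s" rm -rf "$MNT"/copy1 "$MNT"/copy2 "$MNT"/data.tar.gz "$MNT"/extracted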

                    To get a feel for the performance increases that can be achieved by using compression, we look at the total time (in seconds) to run the test:

                    bonnie++ -n128:128k:0 (bonnie++ is Version 1.93c)

                    Code:
                    .-------------------.
                    | FILESYSTEM | TIME |
                    .-------------------.
                    |REISER4 lzo |  1938|
                    |REISER4 gzip|  2295|
                    |REISER4     |  3462|
                    |EXT4        |  4408|
                    |EXT2        |  4092|
                    |JFS         |  4225|
                    |EXT3        |  4421|
                    |XFS         |  4625|
                    |REISER3     |  6178|
                    |FAT32       | 12342|
                    |NTFS-3g     |>10414|
                    .-------------------.
                    The top two results use Reiser4 with compression. Since bonnie++ writes test files that are almost all zeros, compression speeds things up dramatically. That this is not the case with real-world data can be seen in the first test above, where compression often does not speed things up. More importantly, however, it does not slow things down much either.
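
                    For anyone who wants to sanity-check those totals on their own hardware, the quoted run could be timed per file system roughly like this (the mount point is a placeholder; -u is only needed when bonnie++ is started as root, and that user needs write access to the directory):

                    Code:
                    #!/bin/sh
                    # Time the bonnie++ invocation quoted above against whatever file
                    # system is currently mounted at $MNT.
                    MNT=/mnt/bench
                    FS_LABEL=ext4      # change the label to match the mounted file system

                    # -n128:128k:0 = file-creation test with 128*1024 files of 0..128k bytes,
                    # the same invocation as above; total elapsed time is what the table reports.
                    /usr/bin/time -f "$FS_LABEL: %e s total" \
                        bonnie++ -d "$MNT" -n128:128k:0 -u nobody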


