File-System Benchmarks On The Intel X25-E SSD


  • #16
    a) did you turn on barriers for ext3? Or did you turn OFF barriers for xfs, reiserfs, ext4?
    b) did you know that the Intel SSD becomes SLOWER after a certain amount of I/O per day?
    c) if yes, which fs was tested first? which one last?

    Comment


    • #17
      Originally posted by GreatWalrus View Post
      Edit: is "btrfs filesystem" redundant? lol
      Btrfs is going to have waaaay too many features to be fast; Tux3 will hopefully be fast.

      Comment


      • #18
        Originally posted by Linuxhippy View Post
        Why are those encoding tests used again and again to benchmark file-system performance? They all do basically linear reading/writing, and (not only in this test) the results never differ by more than 10% -> useless.

        What I would care more about is moving/creating/deleting files, system bootup time, kernel unpack&compile, find over a whole system ... something which really stresses a FS and the IO subsystem.

        I also don't run aalib stuff in my terminal to benchmark my graphics hardware.

        - Clemens
        I did that - reiser4 won. ext3 is slow. xfs is slow as soon as you have to deal with lots of small files.

        Others did too - look at this one:

        http://bulk.fefe.de/lk2006/

        Comment


        • #19
          CPU usage

          I think CPU usage is quite important to benchmark, since it corresponds to power consumption, which is a big concern for laptop users.

          I can't be the only one looking forward to seeing a CPU usage test the next time Btrfs is involved.

          Comment


          • #20
            Originally posted by energyman View Post
            a) did you turn on barriers for ext3? Or did you turn OFF barriers for xfs, reiserfs, ext4?
            b) did you know that the Intel SSD becomes SLOWER after a certain amount of I/O per day?
            c) if yes, which fs was tested first? which one last?
            Well, based on how the results are combined with PTS and Phoronix Global, I assume the order was ReiserFS, JFS, XFS, EXT3, and EXT4.

            With Phoronix Test Suite, a benchmark is done on ReiserFS (a file is created with the results), a benchmark is done on JFS (the ReiserFS file is appended to add JFS results), a benchmark is done on XFS (original file appended to add XFS after ReiserFS and JFS), and so on.

            Here's an example of a test that I did last night, in which I added my ram results on my Inspiron 1501 to results of two other PC's: http://global.phoronix-test-suite.co...53-16357-24910

            You could also add your ram benchmark to this by running
            Code:
            phoronix-test-suite benchmark brian-22653-16357-24910
            Also (a way to eliminate the I/O issue you describe), Phoronix could have used a host system with several partitions set up on the Intel SSD, one per file system. Then they wouldn't have had to reboot (and reinstall) to test the next file system. This would also mean they could have run each test on every file system before moving on to the next test, eliminating the disadvantage of one file system seeing slower speeds due to I/O wear (instead, all file systems would see the slower speeds on the next test).
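
            The multi-partition idea could be sketched roughly like this (the device name /dev/sdb and the mount points are hypothetical, not what Phoronix actually used):
            Code:
            # one partition per file system on the SSD (illustrative only, run as root)
            mkfs.reiserfs /dev/sdb1
            mkfs.jfs /dev/sdb2
            mkfs.xfs /dev/sdb3
            mkfs.ext3 /dev/sdb4
            mkfs.ext4 /dev/sdb5
            mount /dev/sdb1 /mnt/reiserfs
            mount /dev/sdb2 /mnt/jfs
            # ...and so on, then run every benchmark against each mount point in turn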

            Comment


            • #21
              That still doesn't answer which options were used to mount the fs. ext3 cheats (speed is more important than data safety), and IMHO after the 'lost kde/gnome/everything in /etc' disaster nobody should use ext4. Ever.

              There are fs that care about your data (reiserfs, reiser4, ext3 with the right mount options) and fs that don't (ext4).

              Comment


              • #22
                Originally posted by energyman View Post
                that still doesn't answer which options were used to mount the fs.
                Well, they most likely used the defaults here. Isn't that what matters, the defaults, which the system sets up for the user? Phoronix wasn't trying to find which options would create different effects on file system performance. As far as I know, they were taking the defaults and testing those. The performance with the default settings is most likely what the average person is looking for with such a wide array of file system benchmarks.

                Comment


                • #23
                  Yeah, you see, that is the problem: ext3 was tuned to look good in benchmarks with 'the defaults'. But 'the defaults' are shit.

                  Comment


                  • #24
                    Originally posted by energyman View Post
                    That still doesn't answer which options were used to mount the fs. ext3 cheats (speed is more important than data safety), and IMHO after the 'lost kde/gnome/everything in /etc' disaster nobody should use ext4. Ever.

                    There are fs that care about your data (reiserfs, reiser4, ext3 with the right mount options) and fs that don't (ext4).
                    What are the right options for ext3 and the others?

                    Comment


                    • #25
                      Regarding Reiser4: There are patchsets for all recent kernels available on kernel.org: http://www.kernel.org/pub/linux/kern...dward/reiser4/

                      It would be great if it was included in these tests.

                      Comment


                      • #26
                        Originally posted by energyman View Post
                        That still doesn't answer which options were used to mount the fs. ext3 cheats (speed is more important than data safety), and IMHO after the 'lost kde/gnome/everything in /etc' disaster nobody should use ext4. Ever.

                        There are fs that care about your data (reiserfs, reiser4, ext3 with the right mount options) and fs that don't (ext4).
                        Everything was left at the Ubuntu defaults.
                        Michael Larabel
                        http://www.michaellarabel.com/

                        Comment


                        • #27
                          As I feared. Then you can add 30% to ext3's times (if their devs are to be believed).

                          Try barrier=1 for ext3. For xfs and reiserfs the option is not needed. jfs does not support barriers.
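
                          For reference, that barrier advice as fstab-style entries (the devices and mount points are placeholders; per the above, xfs and reiserfs don't need the option):
                          Code:
                          # /etc/fstab (illustrative)
                          /dev/sda1  /      ext3      defaults,barrier=1  0  1
                          /dev/sda2  /data  xfs       defaults            0  2
                          /dev/sda3  /srv   reiserfs  defaults            0  2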

                          Comment


                          • #28
                            noatime,nodiratime (or just noatime) should be the bare minimum and should noticeably speed things up.
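
                            As an fstab sketch (the device and mount point are placeholders):
                            Code:
                            # /etc/fstab (illustrative)
                            /dev/sda1  /  ext3  defaults,noatime,nodiratime  0  1

                            # or apply to an already-mounted fs without rebooting:
                            mount -o remount,noatime,nodiratime /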

                            Unfortunately, reiser4's delete performance sucks somewhat, but that's the price to pay for a file system that is the best in all the other areas.

                            Comment


                            • #29
                              ANOTHER test with encryption/compression?

                              You even mention in the testing that it's CPU-bound, and not drive-bound... So why do them?

                              There are no details on how the different file systems were created - which someone else has already mentioned.
                              Due to the wear-levelling, I'd have thought it would be sensible to totally blank the drive after each test to get truly comparative results (this is basically a hardware format of the drive - not simply an fdisk operation).
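
                              Such a "hardware format" would be an ATA Secure Erase, which hdparm can issue on Linux (destructive; the device name and password are placeholders):
                              Code:
                              # set a temporary security password, then erase (wipes ALL data)
                              hdparm --user-master u --security-set-pass p /dev/sdX
                              hdparm --user-master u --security-erase p /dev/sdX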

                              Personally, I'd have preferred different systems set up - but that would require five (or so) X25-Es and Intel laptops. Obviously silly (but also the only way to get TRUE comparisons).

                              Maybe run the first file system, wipe, do each test, and then re-test the first file system (to see if there was any impact).

                              Again, no true read/write timing was performed (only **simulated** data reading/writing).
                              Again, only single read/write actions at a time.

                              .. I'm starting to lose faith..

                              Comment


                              • #30
                                The MySQL performance blog has also done performance testing on the X25-E.

                                They came to the conclusion that write cache enabled and write barriers disabled leads to lost transactions (unsurprisingly).

                                With write barriers enabled the performance turned out to be very poor.

                                Comment
