The Performance Of EXT4 Then & Now


  • #11
    Originally posted by kingN0thing View Post
    I wouldn't call every non-Atom system 'cutting edge'. And it's not about the speed, it's about the CPU architecture (and I would think the Core architecture is more widely used than the Atom one) and the ratio between CPU and I/O. So a huge delta on a nettop might just not exist on a two-year-old desktop PC (the barrier changes will still impose their performance tradeoff, but hey, if you prefer fast over secure, just keep your data in a tmpfs and suspend instead of rebooting).

    This use case might just not be representative of desktop computers. Fine by me, but then the OP should title this benchmark differently. As long as people think about the applicability of this benchmark to their own file-system choice, my point has been made.
    Well, given how Intel has been trying to fight Atoms cutting into their higher-end and more profitable sales, I would say adoption is significant enough to use it as a baseline, especially when you look at what are being promoted as "home servers" nowadays, which are usually Atom-based units.

    Comment


    • #12
      Originally posted by kingN0thing View Post
      Are you really complaining that the kernel developers have chosen safe defaults over fast ones?

      I dunno, but I <3 my data and would prefer to still be able to access it even after an unlucky power-out on my laptop.

      If you don't mind that, change the mount option... that's what it's there for. But the defaults are sane; you can't expect a newbie user to alter that kind of thing manually.
      I don't think the discussion at hand is solely about whether or not the safety mechanisms should be in place.

      I'm pretty certain I read that the original problems with data loss were due to application developers not writing to files properly. Something about depending on the kernel to flush data automatically instead of calling fsync() themselves, but I could be way off. :P

      Anyway, I think there's a chance that if most programmers out there had written their apps with care for how they write to files, we could have kept the impressive speeds of the original benchmarks.

      But then, is there any real expectation that your average non-guru developer will want to think about things like that?

      Alex

      Comment


      • #13
        Originally posted by jackflap View Post
        But then, is there any real expectation that your average non-guru developer will want to think about things like that?
        Alex
        Nope, I don't believe most developers are capable of doing that... but users can decide that they have no valuable data and enable the faster (but less safe) behaviour with -o nobarrier. So I don't get the constant fuss about that change.

        Comment


        • #14
          read test

          I am confused as to why the read rates suffered. I would assume that data safety is something that only matters in the writing of files.

          Also, it might be worth comparing the no-journal mode; that was a Google contribution to ext4.
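          For anyone wanting to try that, the no-journal mode is chosen at mkfs time; here is a sketch using a loopback image so no real disk is touched (the paths are placeholders, and this assumes a recent enough e2fsprogs to support the option):

```shell
# Create a journal-less ext4 file system on a scratch image file.
truncate -s 256M /tmp/ext4-nojournal.img
mkfs.ext4 -q -O ^has_journal /tmp/ext4-nojournal.img

# Verify: "has_journal" should be absent from the feature list.
dumpe2fs -h /tmp/ext4-nojournal.img | grep -i 'features'
```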

          Comment


          • #15
            Originally posted by ssam View Post
            I am confused as to why the read rates suffered. I would assume that data safety is something that only matters in the writing of files.
            My thoughts exactly. Why did read times suffer so much?

            Comment


            • #16
              Originally posted by ssam View Post
              I am confused as to why the read rates suffered. I would assume that data safety is something that only matters in the writing of files.
              Yes! That was by far the most interesting result. The IOzone 2GB read test shows a 50% drop in performance starting with 2.6.31. That is the one I would like to see an explanation for.

              Is it an artifact of an imperfect test? Is it real? If real, what caused it?

              http://www.phoronix.com/scan.php?pag...then_now&num=2

              Comment


              • #17
                Originally posted by jackflap View Post
                I'm pretty certain I read that the original problems with data loss were due to application developers not writing to files properly. Something about depending on the kernel to automatically fsync instead of doing it themselves, but I could be waay off :P
                The problem was not so much that data went missing when you didn't fsync(); it was that you could write to a file, rename it on top of an old file, and then after a reboot discover that your file had been truncated to zero bytes rather than being either the old file or the new file. Given that write-then-rename has been the normal mechanism for anyone needing to perform an atomic update since the Stone Age, for any modern file system _not_ to handle such behaviour cleanly is insane.

                As for fsync(), it's all very well to say you have a wonderfully fast file system because you don't write data out to disk unless you have to, but if that then requires every application to call fsync() any time it writes to the disk in order to ensure that the data will actually be there after a reboot then all your performance gains have just been thrown away.

                Worse than that, fsync() on ext3 with the default configuration of most distributions is slow and unnecessary, so suddenly applications have to look at the file system of the computer they're running on in order to determine whether or not they should be calling fsync() all the time; that's mad.

                Lastly, of course, the odds of getting more than a small fraction of application developers to implement fsync() properly throughout their code are minute (e.g. even if they sync the data files, will they also remember to sync the directory when that's required, and do so in the correct order to ensure that the file contains the correct data?), so why force changes to millions of lines of code when you can just fix it once in the file system?
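                A minimal sketch of the write-then-rename pattern described above, in Python (the helper name atomic_replace is my own; this assumes a POSIX system where a directory can be fsync()ed):

```python
import os
import tempfile

def atomic_replace(path, data):
    """Atomically replace `path` with `data`: write to a temp file,
    fsync it, rename it over the target, then fsync the directory."""
    dirpath = os.path.dirname(os.path.abspath(path))
    # Create the temp file in the same directory so the final rename
    # stays within one file system (rename is only atomic there).
    fd, tmp = tempfile.mkstemp(dir=dirpath)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # force the new contents to disk first
        os.replace(tmp, path)       # then atomically swap it into place
    except BaseException:
        if os.path.exists(tmp):
            os.unlink(tmp)
        raise
    # Make the rename itself durable by syncing the directory entry.
    dfd = os.open(dirpath, os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)
```

                Even this small helper shows the point: getting the fsync() dance right (data first, then the directory) is fiddly enough that doing it once in the file system is far saner than expecting every application to reimplement it.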

                Comment


                • #18
                  Interesting, but...

                  Interesting test results; they make me rethink my plan to switch from ext3 to ext4 on my Ubuntu 9.10 desktop.

                  However, what I would like to see is side-by-side results for ext2, ext3 and ext4. I guess many people would also like other file systems added to such a comparison.

                  Comment


                  • #19
                    Originally posted by jpalko View Post
                    Interesting test results; they make me rethink my plan to switch from ext3 to ext4 on my Ubuntu 9.10 desktop.

                    However, what I would like to see is side-by-side results for ext2, ext3 and ext4. I guess many people would also like other file systems added to such a comparison.
                    Well, the biggest performance hit is barriers. If you want close to Ubuntu's ext3 performance with ext4, just mount the file system with nobarrier, as mentioned before: pretty much every Linux distro out there, with the exception of openSUSE, defaulted to not using barriers with ext3, whereas with ext4 the default is to mount with barriers enabled.
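                    A sketch of what that looks like in practice (the device and mount point below are placeholders; the safety-for-speed trade is deliberate):

```shell
# Remount an existing ext4 file system without write barriers.
# WARNING: with barriers off, a power loss can corrupt recently
# written data; only do this if you can afford to lose it.
mount -o remount,nobarrier /mnt/data

# Or make it permanent via /etc/fstab (placeholder entry shown):
# /dev/sdXN  /mnt/data  ext4  defaults,nobarrier  0  2
```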

                    Comment


                    • #20
                      Originally posted by jpalko View Post
                      However, what I would like to see is a side by side results for all ext2, ext3 and ext4. I guess many people would like to ask for other fs's as well to be added to this side by side test.
                      They have already done some of that, although it would be nice to see it updated now with the latest 2.6.33 release candidate. They probably should have linked to these in the intro:

                      http://www.phoronix.com/scan.php?pag...s_nilfs2&num=1

                      http://www.phoronix.com/scan.php?pag..._2632_fs&num=1

                      Comment
