The Performance Of EXT4 Then & Now


  • ssam
    replied
    read test

    I am confused as to why the read rates suffered. I would have assumed that data safety is something that only matters when writing files.

    Also, maybe it is worth comparing the no-journal mode; this was a Google contribution to ext4.
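    For reference, the no-journal mode mentioned above is an ext4 feature flag set at filesystem creation time, or toggled on an unmounted volume. A command sketch — the device names are placeholders, not from this thread:

    ```shell
    # Create an ext4 filesystem with no journal (placeholder device):
    mkfs.ext4 -O ^has_journal /dev/sdb1

    # Or strip the journal from an existing, unmounted ext4 volume:
    tune2fs -O ^has_journal /dev/sdb1
    ```
    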



  • kingN0thing
    replied
    Originally posted by jackflap View Post
    But then, is there any real expectation that your average non-guru developer will want to think about things like that?
    Alex
    Nope, I don't believe most developers are capable of doing that.. but users can decide that they do not have any valuable data and enable the faster (but less safe) behaviour with -o nobarrier. So I don't get the constant fuss about that change.
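    For what it's worth, a sketch of that trade-off using ext4's documented barrier mount options — the device and mount point here are placeholders:

    ```shell
    # Remount an ext4 filesystem without write barriers — faster,
    # but data can be lost or corrupted on a power failure:
    mount -o remount,nobarrier /dev/sda1 /home

    # Persistent form in /etc/fstab (barrier=0 is equivalent to nobarrier):
    # /dev/sda1  /home  ext4  defaults,barrier=0  0  2
    ```
    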



  • jackflap
    replied
    Originally posted by kingN0thing View Post
    Are you really complaining that kernel developers have chosen safe over fast defaults?

    I dunno, but I <3 my data and would much rather still be able to access it even if some unlucky power-out happened on my laptop.

    If you don't mind that, change the mount option.. that's what it's there for. But the defaults are sane; you can't expect a newbie user to manually alter that kind of thing.
    I don't think the discussion at hand is solely about whether or not the safety mechanisms should be in place.

    I'm pretty certain I read that the original problems with data loss were down to application developers not writing to files properly. Something about depending on the kernel to flush data automatically instead of calling fsync themselves, but I could be way off :P
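    The pattern being alluded to — an application making its own writes durable rather than trusting the kernel to flush at the right moment — looks roughly like this. A minimal sketch; the filename and helper name are made up for illustration:

    ```python
    # Safe-save sketch: write to a temp file in the same directory,
    # fsync it, then atomically rename over the original. This is the
    # write-fsync-rename pattern discussed around the ext4 data-loss reports.
    import os
    import tempfile

    def safe_save(path, data: bytes):
        # Temp file must live in the same directory so the rename is atomic.
        dirname = os.path.dirname(os.path.abspath(path))
        fd, tmp = tempfile.mkstemp(dir=dirname)
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())  # force the data to disk before renaming
            os.replace(tmp, path)      # atomic replacement on POSIX filesystems
        except Exception:
            os.unlink(tmp)             # clean up the temp file on failure
            raise

    safe_save("settings.conf", b"key=value\n")
    ```

    Without the fsync, a crash between the rename and the delayed writeback could leave a zero-length file where the old contents used to be — which is essentially the complaint from those early ext4 reports.
    
    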

    Anyway, I think there's a chance that if most programmers out there had written their apps with care for how they write to files, we could have kept the impressive speeds of the original benchmarks.

    But then, is there any real expectation that your average non-guru developer will want to think about things like that?

    Alex



  • deanjo
    replied
    Originally posted by kingN0thing View Post
    I wouldn't call every non-Atom system 'cutting edge'. And it's not about the speed, it's about the CPU architecture (and I would think the Core architecture is more in use than the Atom one) and the ratio between CPU and I/O. So a huge delta on a nettop might simply not exist on a two-year-old desktop PC (the barrier changes will still impose their performance trade-off, but hey.. if you prefer fast over safe, just keep your data in a tmpfs and suspend instead of rebooting).

    This use case might just not be representative of desktop computers. Fine by me, but then the OP should title this benchmark differently. As long as people think about the usefulness of this benchmark for choosing their file system, my point has been made.
    Well, given the way Intel has been trying to fight Atoms cutting into their higher-end and more profitable sales, I would say that adoption is significant enough to use it as a base comparison, especially when you look at what are being promoted as "home servers", which are usually Atom-based units nowadays.



  • kingN0thing
    replied
    Originally posted by deanjo View Post
    CPU- or I/O-bound, it's still a valid comparison. Ext4 has become the default on most mainstream distros now. If there is ANY delta then the tests are clearly valid; whatever the reason, it is still something that the end user will experience. If everybody ran cutting-edge systems then you might have a point calling it unfair to run it on a lower-end system, but this simply is not the case in the real world.
    I wouldn't call every non-Atom system 'cutting edge'. And it's not about the speed, it's about the CPU architecture (and I would think the Core architecture is more in use than the Atom one) and the ratio between CPU and I/O. So a huge delta on a nettop might simply not exist on a two-year-old desktop PC (the barrier changes will still impose their performance trade-off, but hey.. if you prefer fast over safe, just keep your data in a tmpfs and suspend instead of rebooting).

    This use case might just not be representative of desktop computers. Fine by me, but then the OP should title this benchmark differently. As long as people think about the usefulness of this benchmark for choosing their file system, my point has been made.



  • deanjo
    replied
    Originally posted by kingN0thing View Post
    Then call it a performance for Linux Netbook filesystems.. if it is still CPU bound you cannot use this test for a general statement about ext4 performance (ie. what users would expect under a title of "The Performance of EXT4 Then & Now").
    CPU- or I/O-bound, it's still a valid comparison. Ext4 has become the default on most mainstream distros now. If there is ANY delta then the tests are clearly valid; whatever the reason, it is still something that the end user will experience. If everybody ran cutting-edge systems then you might have a point calling it unfair to run it on a lower-end system, but this simply is not the case in the real world.



  • kingN0thing
    replied
    Originally posted by garytr24 View Post
    These benchmarks should be designed to be I/O bound, right?
    Maybe I was unclear.

    Ext2/3 had the behaviour that their performance was mostly CPU-bound; reiserfs was mostly I/O-bound.

    That means:

    Base system: both have an I/O performance of 100%.

    base system with better CPU:
    reiser-fs gets 110% I/O performance.
    ext3 gets 130% I/O performance.

    base system with better (faster) hard drive:
    reiser-fs gets 130% I/O performance
    ext3 gets 110% I/O performance.

    So you might see better I/O performance when using a desktop-level CPU (instead of a netbook CPU). The reason for this is the different in-memory data structures (as well as on-disk formats) and access paths within the different file systems.

    EDIT: so if you want to know the file system performance you should try to match the CPU speed (and CPU count) to the intended system. This makes these kind of netbook benchmarks not very useful.
    Last edited by kingN0thing; 01-19-2010, 09:53 AM.
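    One rough way to check which side of that line a workload falls on is to compare a process's CPU time against its wall-clock time: a ratio near 1.0 suggests CPU-bound, a ratio near 0 suggests the process mostly waited on I/O. A sketch with made-up stand-in workloads, not the thread's actual benchmark tests:

    ```python
    # Estimate what fraction of a task's wall-clock time was spent on the CPU.
    import time

    def cpu_fraction(task):
        t0_wall = time.monotonic()
        t0_cpu = time.process_time()  # CPU time of this process only
        task()
        wall = time.monotonic() - t0_wall
        cpu = time.process_time() - t0_cpu
        return cpu / wall

    def cpu_heavy():
        # Pure computation: should spend nearly all its wall time on the CPU.
        sum(i * i for i in range(2_000_000))

    def io_heavy():
        # Stand-in for waiting on a disk request: sleeping uses no CPU time.
        time.sleep(0.2)

    print(f"cpu_heavy fraction: {cpu_fraction(cpu_heavy):.2f}")
    print(f"io_heavy fraction:  {cpu_fraction(io_heavy):.2f}")
    ```

    If a filesystem benchmark shows a high CPU fraction on an Atom, a faster CPU would lift its numbers — which is exactly the objection being raised about the test platform.
    
    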



  • garytr24
    replied
    Originally posted by kingN0thing View Post
    Another thing that I remember from earlier Linux days:

    ext2/3 were mostly CPU-bound, which means you can increase performance vastly by adding more CPU power. Other file systems (i.e. ReiserFS) are I/O-bound, which means that you can vastly improve performance by adding a faster disk.

    The test platform (Atom 330) might thus be inherently 'unfair' to ext2/3/4. And do not forget that advances in CPU speed are a magnitude higher than advances in storage technology.
    I think the Atom 330 is plenty fast for this. I have one as a server, and I can get over 40 MB/s through Samba file transfers with one of the cores pegged at 100%. But it's dual-core with Hyper-Threading. These benchmarks should be designed to be I/O-bound, right?



  • kingN0thing
    replied
    Originally posted by deanjo View Post
    With the number of netbooks/nettops being sold out there I wouldn't call it 'unfair'. Especially since small, weak systems are often used as a 'poster child' for Linux.
    Then call it a benchmark of Linux netbook file system performance.. if it is still CPU-bound you cannot use this test for a general statement about ext4 performance (i.e. what users would expect under a title of "The Performance of EXT4 Then & Now").



  • deanjo
    replied
    Originally posted by kingN0thing View Post
    The test platform (Atom 330) might thus be inherently 'unfair' to ext2/3/4. And do not forget that advances in CPU speed are a magnitude higher than advances in storage technology.
    With the number of netbooks/nettops being sold out there I wouldn't call it 'unfair'. Especially since small, weak systems are often used as a 'poster child' for Linux.

    As a side note, "drastic regressions" like these did not occur in openSUSE, where ext3 had barriers enabled by default. AFAIK it was the only distro doing so and would probably show a better comparison.
    Last edited by deanjo; 01-19-2010, 09:45 AM.

