EXT3, EXT4, Btrfs Ubuntu Netbook Benchmarks

  • #46
    Originally posted by teekee
    ...
    I think you are misunderstanding the intent behind the testing. The SQLite test is admittedly simple - it does what it says, 12500 sequential inserts - but it has been found to show vast variance between filesystems. This is because it is extremely sensitive to fsync performance. In fact, this test as it stands has contributed to a number of changes in a number of different projects.

    If you have a particular interest in seeing another test that focuses on a particular dimension, then by all means, define the test, create the test case. I am sure Michael will have no concerns including it within future comparisons. If your interest is in making SQLITE performance understood, please do so. I have personally asked the SQLITE team to assist, but received no input back.

    Making accusations about skill, awareness or intent is simply not helpful. Taking action to assist is extremely helpful.

    Regards,

    Matthew



    • #47
      Yes, you are absolutely right about what the test currently does - basically its functionality can be described by the following pseudocode:

      Code:
      for (int i = 0; i < 12500; i++)
      {
        write(fd, buf, len);  /* append one record to the database file */
        fsync(fd);            /* force it to disk before the next insert */
      }
      But then I'm really puzzled why the test is called "SQLite 12500 INSERTs", why it is not called "fsync performance test", and why it doesn't use the code described above instead of SQLite.
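      If the intent really is to measure fsync latency, a direct microbenchmark along the lines of that pseudocode would make it explicit. A hypothetical sketch in Python (the `fsync_bench` name, record size, and iteration count are illustrative, not part of any existing test):

```python
import os
import tempfile
import time

def fsync_bench(n, record=b"x" * 64):
    """Time n sequential write()+fsync() pairs on a fresh temporary file."""
    fd, path = tempfile.mkstemp()
    try:
        start = time.time()
        for _ in range(n):
            os.write(fd, record)  # append a small record
            os.fsync(fd)          # force data and metadata to disk
        return time.time() - start
    finally:
        os.close(fd)
        os.remove(path)

# The SQLite test effectively issues 12500 such pairs; even a smaller
# run is enough to see the per-fsync cost on a given filesystem.
elapsed = fsync_bench(100)
print(f"100 write+fsync pairs in {elapsed:.3f}s")
```

      Run on different filesystems, this would isolate exactly the behaviour the SQLite test is apparently sensitive to, without any database code in between.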

      And maybe you are right and the intention really is to test the speed of fsync. But then I would expect that this would be mentioned somewhere - so far I have seen about 10 "filesystem" performance tests here and it wasn't mentioned anywhere.

      Your comment is actually very good: a casual visitor can easily misunderstand the purpose of individual tests. Basically, every test should have a description of what exactly it tests; right now there is nothing like that (or if I missed something, please point me to it). Without a description, the results are just random numbers without any meaning. (E.g., should I interpret the SQLite test as a benchmark of how fast I can store 12500 entries into the database, or as an fsync speed test? The first option is apparently wrong.)

      Without proper description of the test, the reaction of kernel developers will most probably look like this:

      http://www.spinics.net/lists/linux-ext4/msg17152.html

      And I absolutely agree with Eric in this respect.

      In fact this test as it stands has contributed to a number of changes in a number of different projects.
      Could you tell me which projects and how it contributed to them? I'm quite curious about it.

      If your interest is in making SQLITE performance understood, please do so. I have personally asked the SQLITE team to assist, but received no input back.
      I don't quite understand what you mean here - the current output of the benchmark is absolutely expected. Unless you use PRAGMA synchronous=OFF during SQLite initialisation, or enclose all your inserts in a single transaction, you won't get good performance. I can confidently say that as a developer who uses SQLite.
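      For illustration, here is a minimal sketch of the two workarounds mentioned above, using Python's sqlite3 module (the table name and database path are hypothetical):

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "bench.db")
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE t (i INTEGER)")

# Workaround 1: relax durability. SQLite stops calling fsync() after
# every commit, so a power failure can lose recent transactions.
con.execute("PRAGMA synchronous = OFF")

# Workaround 2: enclose all inserts in a single transaction, so there
# is one commit (and at most one fsync) instead of 12500.
with con:  # implicit BEGIN ... COMMIT
    for i in range(12500):
        con.execute("INSERT INTO t VALUES (?)", (i,))

count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 12500
con.close()
```

      Either change removes the per-insert fsync from the picture; with both left at their defaults, one fsync per insert is exactly the behaviour the benchmark ends up measuring.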

      (OK, one interesting thing is that btrfs is so slow here - I would expect just the opposite [at least for larger amounts of data] since btrfs has to flush less data than ext3 during fsync)

      Making accusations about skill, awareness or intent is simply not helpful. Taking action to assist is extremely helpful.
      Right, if it were the kind of "I am smart, you are stupid" accusation, that would be absolutely useless. My point, rather, was to point out the major problem with the benchmarks that appear here. Yes, I admit that I chose a more provocative way, but in my experience this is sometimes better, because otherwise your message just goes unnoticed by others. I'm pretty convinced about Michael's:

      1. skill - without his skill there wouldn't be this site, phoronix test suite and other things

      2. awareness - in the news section you can read many interesting things, and I can see that Michael spends a lot of time finding them and covers a broad range of topics

      3. intent - I'm absolutely sure about Michael's good intent (the only moment when I had doubts was when I thought my post had been censored - but that turned out to be just a technical issue)

      But enough of the positive points, these are boring ;-). The problem is the depth of knowledge one must have to create a good benchmark. You really have to understand deeply what you are testing - otherwise the output will be mostly garbage. I don't want to pretend I'm expert enough to produce a good benchmark - I'm simply not. Concerning this topic, however, you might find the following thread on the ext4 mailing list interesting:

      http://www.spinics.net/lists/linux-ext4/msg16866.html

      Both Ted Ts'o (the main ext4 developer) and Chris Mason (the main btrfs developer) like the benchmarks that can be found at:

      http://btrfs.boxacle.net/

      See these posts regarding this:

      http://www.spinics.net/lists/linux-ext4/msg16871.html

      http://www.spinics.net/lists/linux-ext4/msg16963.html

      They would like to see file system performance history comparison, but the author of these benchmarks, Steven Pratt, doesn't seem to have time to process the results from these benchmarks:

      http://www.spinics.net/lists/linux-ext4/msg16974.html

      Now, the Phoronix Test Suite is ideal for this, isn't it? How about asking Steve for the set of benchmarks he uses (plus the methodology he uses for the data analysis)? Of course, Michael doesn't have the same server machine, so he won't be able to run the RAID tests, but it should be possible for the single-disk tests. And I think that even the kernel developers would be happy to see the results of these benchmarks.

      So, one more long post - believe me, I wouldn't write anything like this if I just wanted to complain. I'm hoping for change, and the above proposal seems to be a good start, don't you think? Seriously, I think none of us reading the Phoronix forums is expert enough to make good benchmarks - let's leave this task to the professionals...

      Kind regards,

      Jiri
