
Thread: EXT3, EXT4, Btrfs Ubuntu Netbook Benchmarks

  1. #41
    Join Date
    May 2009
    Posts
    6

    Default Re: Phoronix benchmark criticism

Michael,

I must say that I'm rather disappointed by your attitude to constructive criticism. Censoring my post without any reply, even a private one, is really hard to understand. Despite the problems I described, I thought your aim was to contribute something useful to the Linux community - I'm not convinced of that at all now. I'm afraid that without a change in your attitude, the developer part of the community cannot take your work seriously. That's a pity.

  2. #42
    Join Date
    May 2009
    Posts
    6

    Default

Ahh, interesting - this time my post doesn't wait for approval. So now the benchmark is how long it takes before my post gets deleted. Reposting my original message sent 4 days ago:

    Michael,

I'm really fascinated by your "benchmarks" - in a negative way, though, I must admit. Take your "infamous" SQLite insert test: what it does is complete nonsense. Without even looking at the benchmark code, I can confidently say that you do the inserts without enclosing them in a transaction. The result is that what you benchmark is "really bad programmer's code" performance rather than anything a reasonable program would do. Of course you _would_ know this if you knew what you were doing (which is the biggest problem with your benchmarks - they just show some random charts without any context). For instance, this is from the SQLite FAQ:


    (19) INSERT is really slow - I can only do few dozen INSERTs per second

    Actually, SQLite will easily do 50,000 or more INSERT statements per second on an average desktop computer. But it will only do a few dozen transactions per second. Transaction speed is limited by the rotational speed of your disk drive. A transaction normally requires two complete rotations of the disk platter, which on a 7200RPM disk drive limits you to about 60 transactions per second.

Transaction speed is limited by disk drive speed because (by default) SQLite actually waits until the data really is safely stored on the disk surface before the transaction is complete. That way, if you suddenly lose power or if your OS crashes, your data is still safe. For details, read about atomic commit in SQLite.

    By default, each INSERT statement is its own transaction. But if you surround multiple INSERT statements with BEGIN...COMMIT then all the inserts are grouped into a single transaction. The time needed to commit the transaction is amortized over all the enclosed insert statements and so the time per insert statement is greatly reduced.

    Another option is to run PRAGMA synchronous=OFF. This command will cause SQLite to not wait on data to reach the disk surface, which will make write operations appear to be much faster. But if you lose power in the middle of a transaction, your database file might go corrupt.
Now look at the results of your insert benchmark - does it look familiar? I bet it does - a few dozen inserts per second instead of tens of thousands. Is your benchmark useful? Not at all - nobody would be so foolish as to do such a thing in a real program. (For rotational hard drives, your benchmark effectively measures the number of disk rotations per second.)
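To illustrate the FAQ's point, here is a minimal sketch using Python's sqlite3 module (the table name and row contents are made up for illustration). With a database file on a real disk, the autocommit loop pays one synchronous commit (and its fsync) per INSERT, while the explicit transaction pays that cost only once:

```python
import sqlite3

# In-memory database just for the sketch; with a file on disk, every
# commit would wait for an fsync, which is what dominates the benchmark.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # autocommit: each statement is its own transaction
conn.execute("CREATE TABLE t (i INTEGER)")

# Slow pattern: 12500 implicit transactions, one commit (fsync) each.
for i in range(12500):
    conn.execute("INSERT INTO t VALUES (?)", (i,))

# Fast pattern: one explicit transaction amortises the commit cost.
conn.execute("BEGIN")
for i in range(12500):
    conn.execute("INSERT INTO t VALUES (?)", (i,))
conn.execute("COMMIT")

print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 25000
```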

One more example is your article "The Performance Of EXT4 Then & Now", which was supposed to demonstrate how the performance of ext4 has evolved. Well, you did _not_ measure that at all. The fact that some disk benchmark is slower under kernel A than under kernel B doesn't mean that ext4 is the cause - there have been quite dramatic changes in the IO scheduler aimed at lower latency, which I think are the main cause of the performance changes. The per-BDI flusher threads also had a significant impact on IO performance. See sections 1.1 and 1.5 here:

    http://kernelnewbies.org/Linux_2_6_32

ext4 is relatively stable and there are not many changes these days that would influence performance dramatically. However, 2.6.33 contains a large number of CFQ changes, which I'm quite sure contribute far more to the performance differences in benchmarks than the filesystem modifications do.

If you want a filesystem performance history, you should take several filesystems (ext3, ext4, XFS, reiser) and measure their performance across all the kernel versions. If all of them show one performance level under kernel version A and a lower one under kernel version B, the reason will most probably be that something other than the filesystem changed and influenced the performance (e.g. the IO scheduler). [Of course, an IO scheduler change can influence one filesystem more than another, so this isn't very exact either.] Now, concerning the performance drop between 2.6.30 and 2.6.31 - are you sure that Ubuntu didn't switch from the anticipatory IO scheduler to CFQ then? (I really don't know - just a wild guess, probably totally wrong. My point is that when you run a benchmark, you run it on the whole kernel - and there are many things besides the filesystem that can influence the result.)

I'm really sorry if I sound too offensive, but I just can't understand your approach to benchmarking. Less is sometimes more. Instead of tens of more or less garbage benchmarks, a few benchmarks where you know what you are doing and can interpret the results would be _much_ more useful. It's a real pity - I can imagine you spend a lot of time preparing everything for your web site, but the result is something that nobody can take seriously. How about re-prioritising your work and starting to learn what you are doing? ;-)

  3. #43
    Join Date
    May 2009
    Posts
    6

    Default

Aha, got it - when the post is short enough, it doesn't wait for a moderator's approval. Good.

For those who are interested, I was pointing at the SQLite "benchmark", which simply benchmarks an incorrect use of SQLite - see

    http://www.sqlite.org/faq.html

point 19 - there should be tens of thousands of inserts per second when using a single transaction, and no reasonable program will use SQLite the way Michael does. Another point was that his previous article "The Performance Of EXT4 Then & Now" didn't test just ext4, but the whole kernel - and there have been quite a few changes in the IO scheduler recently, so I seriously doubt the performance changes can be attributed to ext4 alone, and Michael's conclusions are plain wrong. And finally I pointed out that Michael should understand what he is doing, which I guess led to my post being erased...

  4. #44

    Default

Quote Originally Posted by teekee
Michael,

I must say that I'm rather disappointed by your attitude to constructive criticism. Censoring my post without any reply, even a private one, is really hard to understand. Despite the problems I described, I thought your aim was to contribute something useful to the Linux community - I'm not convinced of that at all now. I'm afraid that without a change in your attitude, the developer part of the community cannot take your work seriously. That's a pity.
Censoring? There's no censoring; there are spam filters. You have a post count of just two, and when your posts contain links they go into a moderation queue until cleared. Your posts should now be live.

  5. #45
    Join Date
    May 2009
    Posts
    6

    Default

    OK, then sorry for what I've written. Just to clarify, I wrote the original post (the long one) 4 days ago and it just disappeared.

But as you say, it was probably just an overactive spam filter. Once again, sorry for accusing you of censorship - I incorrectly assumed that critical voices weren't allowed here.

  6. #46
    Join Date
    Jun 2006
    Posts
    311

    Default

Quote Originally Posted by teekee
    ...
I think you are misunderstanding the intent behind the testing. The SQLite test is admittedly simple - it does what it says, 12500 sequential inserts - but it has been found to show vast variance between filesystems. This is because it is extremely sensitive to fsync performance. In fact this test as it stands has contributed to a number of changes in a number of different projects.

If you have a particular interest in seeing another test that focuses on a particular dimension, then by all means define the test and create the test case. I am sure Michael will have no concerns about including it in future comparisons. If your interest is in making SQLITE performance understood, please do so. I have personally asked the SQLITE team to assist, but received no input back.

Making accusations about skill, awareness or intent is simply not helpful. Taking action to assist is extremely helpful.

    Regards,

    Matthew

  7. #47
    Join Date
    May 2009
    Posts
    6

    Default

Yes, you are absolutely right about what the test currently does - basically its functionality can be described by the following pseudocode:

Code:
int fd = open("test.db", O_WRONLY | O_CREAT, 0644);
for (int i = 0; i < 12500; i++)
{
    write(fd, row, row_len);  /* one INSERT's worth of data */
    fsync(fd);                /* block until the data reaches the disk */
}
close(fd);
But I'm really puzzled why the test is then called "SQLite 12500 INSERTs" rather than "fsync performance test", and why it doesn't use the code above instead of SQLite.

And maybe you are right that the intention really is to test the speed of fsync. But then I would expect this to be mentioned somewhere - so far I have seen about 10 "filesystem" performance tests here and it wasn't mentioned anywhere.

Your comment is actually very good - the casual visitor can easily misunderstand the purpose of individual tests. Basically, every test should come with a description of what exactly it tests - right now there is nothing like that (or if I'm missing something, please point me to where it is). Without a description, the results are just random numbers without any meaning. (E.g., should I interpret the SQLite test as a benchmark of how fast I can store 12500 entries in the database, or as an fsync speed test? The first interpretation is apparently wrong.)

    Without proper description of the test, the reaction of kernel developers will most probably look like this:

    http://www.spinics.net/lists/linux-ext4/msg17152.html

    And I absolutely agree with Eric in this respect.

    In fact this test as it stands has contributed to a number of changes in a number of different projects.
    Could you tell me which projects and how it contributed to them? I'm quite curious about it.

    If your interest is in making SQLITE performance understood, please do so. I have personally asked the SQLITE team to assist, but received no input back.
I don't quite understand what you mean here - the current output of the benchmark is absolutely expected. Unless you use PRAGMA synchronous=OFF during SQLite initialisation or enclose all your inserts in a single transaction, you won't get good performance. I can confidently say that as a developer who uses SQLite.
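Both of the remedies mentioned above can be sketched in a few lines with Python's sqlite3 module (the table and data are illustrative; use a file path instead of :memory: to see the timing difference on a real disk):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Remedy 1: don't wait for the data to reach the platter.  Writes appear
# much faster, but a power loss mid-transaction can corrupt the database.
conn.execute("PRAGMA synchronous=OFF")

conn.execute("CREATE TABLE t (i INTEGER)")

# Remedy 2: group the inserts into a single transaction so the commit
# (and its fsync) happens only once instead of once per INSERT.
with conn:  # opens a transaction, commits on successful exit
    conn.executemany("INSERT INTO t VALUES (?)", ((i,) for i in range(12500)))

print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 12500
```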

(OK, one interesting thing is that btrfs is so slow here - I would expect just the opposite, at least for larger amounts of data, since btrfs has to flush less data than ext3 during fsync.)

    Making accusations of skill, awareness or intent is simply not helpful. Taking actions to assist is exteremely helpful.
Right, if it were just an "I am smart, you are stupid" accusation, that would be absolutely useless. Rather, my point was to highlight the major problem with the benchmarks that appear here. Yes, I admit I chose a more provocative approach, but in my experience this is sometimes better, because otherwise your message just goes unnoticed by others. I'm pretty convinced about Michael's:

1. skill - without his skill there wouldn't be this site, the Phoronix Test Suite and other things

2. awareness - in the news section you can read many interesting things, and I can see that Michael spends a lot of time finding them and covers a broad range of topics

3. intent - I'm absolutely sure about Michael's good intentions (the only moment I had doubts was when I thought my post had been censored - but that turned out to be just a technical issue)

But enough positive points - those are boring ;-). The problem is the depth of knowledge one must have to create a good benchmark. You really have to understand deeply what you are testing - otherwise the output will be mostly garbage. I don't want to pretend I'm enough of an expert to produce a good benchmark - I'm simply not. On this topic, however, you might find the following thread on the ext4 mailing list interesting:

    http://www.spinics.net/lists/linux-ext4/msg16866.html

    Both Ted Ts'o (the main ext4 developer) and Chris Mason (the main btrfs developer) like the benchmarks that can be found at:

    http://btrfs.boxacle.net/

    See these posts regarding this:

    http://www.spinics.net/lists/linux-ext4/msg16871.html

    http://www.spinics.net/lists/linux-ext4/msg16963.html

    They would like to see file system performance history comparison, but the author of these benchmarks, Steven Pratt, doesn't seem to have time to process the results from these benchmarks:

    http://www.spinics.net/lists/linux-ext4/msg16974.html

Now, PTS is ideal for this, isn't it? How about asking Steven for the set of benchmarks he uses (plus the methodology he uses for the data analysis)? Of course, Michael doesn't have the same server machine, so he won't be able to run the RAID tests, but it should be possible for the single-disk tests. And I think even kernel developers would be happy to see the results of these benchmarks.

So, one more long post - believe me, I wouldn't write anything like this if I just wanted to complain. I'm hoping for change - and the above proposal seems like a good start, don't you think? Seriously, I think none of us reading the Phoronix forums is expert enough to design good benchmarks - let's leave that task to the professionals...

    Kind regards,

    Jiri
