Code:
for (int i = 0; i < 12500; i++) {
    write(f, buf, len);   /* write one entry into file f */
    fsync(f);
}
And maybe you are right and the intention really is to test the speed of fsync. But then I would expect this to be mentioned somewhere - so far I have seen about 10 "filesystem" performance tests here, and it wasn't mentioned in any of them.
Your comment is actually very good - the casual visitor can easily misunderstand the purpose of individual tests. Basically every test should have some description of what exactly it tests - right now there is nothing like that (or if I missed something, please point me to it). Without a description the results are just random numbers without any meaning. (E.g., should I interpret the SQLite test as a benchmark of how fast I can store 12500 entries in the database, or as an fsync speed test? The first option is apparently wrong.)
Without proper description of the test, the reaction of kernel developers will most probably look like this:
http://www.spinics.net/lists/linux-ext4/msg17152.html
And I absolutely agree with Eric in this respect.
In fact this test as it stands has contributed to a number of changes in a number of different projects.
If your interest is in making SQLite performance understood, please do so. I have personally asked the SQLite team to assist, but received no input back.
(OK, one interesting thing is that btrfs is so slow here - I would expect just the opposite [at least for larger amounts of data] since btrfs has to flush less data than ext3 during fsync)
Making accusations of skill, awareness or intent is simply not helpful. Taking action to assist is extremely helpful.
1. skill - without his skill there wouldn't be this site, the Phoronix Test Suite, and other things
2. awareness - in the news section you can read many interesting things, and I can see that Michael spends a lot of time finding them and covers a broad range of topics
3. intent - I'm absolutely sure about Michael's good intent (the only moment I had doubts about it was when I thought my post had been censored - but that turned out to be just a technical issue)
But enough of the positive points, these are boring ;-). The problem is the depth of knowledge one must have to create a good benchmark. You really have to understand deeply what you are testing - otherwise the output will be mostly garbage. I don't want to pretend I'm expert enough to produce a good benchmark - I'm simply not. Concerning this topic, however, you might find the following thread on the ext4 mailing list interesting:
http://www.spinics.net/lists/linux-ext4/msg16866.html
Both Ted Ts'o (the main ext4 developer) and Chris Mason (the main btrfs developer) like the benchmarks that can be found at:
http://btrfs.boxacle.net/
See these posts regarding this:
http://www.spinics.net/lists/linux-ext4/msg16871.html
http://www.spinics.net/lists/linux-ext4/msg16963.html
They would like to see a filesystem performance history comparison, but the author of these benchmarks, Steven Pratt, doesn't seem to have time to process the results from them:
http://www.spinics.net/lists/linux-ext4/msg16974.html
Now, the Phoronix Test Suite is ideal for this, isn't it? How about asking Steve for the set of benchmarks he uses (plus the methodology he uses for the data analysis)? Of course, Michael doesn't have the same server machine, so he won't be able to run the RAID tests, but it should be possible to run the single-disk tests. And I think even the kernel developers would be happy to see the results of these benchmarks.
So, one more long post - believe me, I wouldn't write anything like this if I just wanted to complain. I'm hoping for change - and the above proposal seems like a good start, don't you think? Seriously, I think none of us reading the Phoronix forums is expert enough to make good benchmarks - let's leave this task to the professionals...
Kind regards,
Jiri