*** That means that sqlite, mysql, and pgsql have ALL CHANGED since those tests were made back in... 2001!
2) TRANSACTIONS ARE NOT SYNCHRONIZATIONS! They are TOTALLY UNRELATED!
Be aware that a sensible database server does NOT do what you are showing there... just because a statement runs inside a transaction does NOT mean the server can skip synchronizing to disk. It is just synchronizing to a temporary place (the journal) instead of the final place. Merely running inside a transaction should NOT dramatically change performance.
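To make the "temporary place" concrete, here is a hedged sketch using Python's stdlib `sqlite3` (file names and the row inserted are my own choices, and whether the journal is visible mid-transaction can vary by SQLite version and mode): in classic rollback-journal mode, SQLite typically creates a `<name>-journal` file next to the database once a write transaction starts modifying pages, and that journal is what gets synced at COMMIT.

```python
import os
import sqlite3
import tempfile

# Sketch: the rollback journal is the "temporary place" a transaction
# synchronizes to before the data reaches the final database pages.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path, isolation_level=None)  # manage transactions by hand
con.execute("PRAGMA journal_mode = DELETE")        # classic rollback journal
con.execute("CREATE TABLE t (a INTEGER)")

con.execute("BEGIN")
con.execute("INSERT INTO t VALUES (1)")
# The <name>-journal file usually exists on disk here, mid-transaction:
mid_txn_journal = os.path.exists(path + "-journal")
con.execute("COMMIT")  # journal is synced/discarded, data lands for real

rows = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
con.close()
```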
Note these results here:
SQLite 2.7.6: 13.061
SQLite 2.7.6 (nosync): 0.223
** You see what a difference synchronization makes? It runs 58.57 TIMES as fast without synchronization. That is entirely the write latency of mechanical disks!
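You can reproduce the shape of that sync/nosync gap yourself. Here is a hedged micro-benchmark sketch with Python's stdlib `sqlite3` (function name and row count are my own): with `PRAGMA synchronous = FULL` every autocommitted INSERT waits for the disk, while with `synchronous = OFF` SQLite hands the data to the OS and moves on. Absolute numbers will differ wildly from the 2001 figures, especially on SSDs.

```python
import os
import sqlite3
import tempfile
import time

def timed_inserts(sync_mode, n=500):
    # One implicit transaction per INSERT (autocommit), so FULL mode
    # pays one sync per statement -- the 13.061 s scenario in miniature.
    path = os.path.join(tempfile.mkdtemp(), "bench.db")
    con = sqlite3.connect(path, isolation_level=None)  # autocommit
    con.execute(f"PRAGMA synchronous = {sync_mode}")
    con.execute("CREATE TABLE t (a INTEGER, b TEXT)")
    start = time.perf_counter()
    for i in range(n):
        con.execute("INSERT INTO t VALUES (?, ?)", (i, str(i)))
    elapsed = time.perf_counter() - start
    con.close()
    return elapsed

t_full = timed_inserts("FULL")  # sync per statement
t_off = timed_inserts("OFF")    # no sync at all, like the nosync run
```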
Point of interest... they claim that it is running "nearly as fast as mysql" when sync is disabled.... according to who? Seems to me that mysql is completely destroying it in this test running at virtually DOUBLE THE SPEED.
On test 2 on that page, the queries are wrapped in a transaction, which (for sqlite, though not for mysql or pgsql) disables synchronization, and sure, the sync and nosync values converge. That's nice, but with no synchronization it's just writing to buffers, and is therefore NOT A VALID TEST OF DISK PERFORMANCE.
In other words... test 2 is NOT APPLICABLE.
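For clarity, here is a sketch of what test 2 actually measures (toy row count is my own choice): with every INSERT inside one transaction there is at most a single journal sync at COMMIT, so the loop is mostly buffered writes, and disk latency barely enters the picture.

```python
import os
import sqlite3
import tempfile

# Sketch of the test-2 pattern: many INSERTs, one transaction, one COMMIT.
path = os.path.join(tempfile.mkdtemp(), "batch.db")
con = sqlite3.connect(path, isolation_level=None)  # manage transactions by hand
con.execute("CREATE TABLE t (a INTEGER)")

con.execute("BEGIN")
for i in range(1000):
    con.execute("INSERT INTO t VALUES (?)", (i,))
con.execute("COMMIT")  # the only point where the disk is (maybe) synced

count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
con.close()
```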
Further, it appears that this OLD version of sqlite was actually QUITE BROKEN when it comes to transactions. It should STILL be synchronizing, to a TEMPORARY DATABASE (the journal). What would happen if you had 32 MB of RAM (common back when those tests were made) and ran a 100 MB transaction? Kablammo! LOL. And what would happen to that transaction? You wouldn't even have evidence that it failed, since nothing was synchronized.
This type of transaction is a little bit weird. In some cases it is used for nightly mass updates (which you aren't going to be doing with sqlite to begin with... note that this is typically done by locking the database rather than running one giant transaction), but in most cases it is used to ensure data integrity when multiple clients connect simultaneously -- ensuring that you don't, for example, have a SELECT happening DURING an UPDATE and sending back invalid data. A typical transaction has to happen FAST so the database isn't tied up for its duration... so you might run a transaction of 5 queries followed by a sync. Now think about thousands of 5-query transactions, each followed by a sync. You've completely lost any performance advantage you thought transactions gave you.
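The realistic pattern described above can be sketched as follows (a hedged toy, with batch sizes and table name my own): the same 1000 rows go in either way, but the 5-query-per-transaction version performs 200 COMMITs -- and on a real server, each COMMIT is a sync point -- while the bulk version performs exactly one.

```python
import os
import sqlite3
import tempfile

def insert_in_batches(batch_size, total=1000):
    # Insert `total` rows, COMMITting after every `batch_size` of them.
    path = os.path.join(tempfile.mkdtemp(), "txn.db")
    con = sqlite3.connect(path, isolation_level=None)  # manual transactions
    con.execute("CREATE TABLE t (a INTEGER)")
    commits = 0
    for start in range(0, total, batch_size):
        con.execute("BEGIN")
        for i in range(start, start + batch_size):
            con.execute("INSERT INTO t VALUES (?)", (i,))
        con.execute("COMMIT")  # one sync point per transaction
        commits += 1
    rows = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    con.close()
    return rows, commits

small = insert_in_batches(5)     # 200 five-query transactions
big = insert_in_batches(1000)    # one bulk transaction
```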
In other words: It doesn't help to prove performance under conditions that are never going to happen.
Transactions are NOT a method of boosting performance. They are a method of ensuring DATABASE CONSISTENCY, either (a) by applying all queries according to established parameters or rolling back, or (b) by ensuring that queries against the same data are performed in a reasonable ORDER (e.g. if you perform an update while someone else performs an insert related to the same data, bad things can happen).
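Point (a) can be sketched in a few lines (table, balances, and function names are my own illustration): the transfer either fully applies or fully rolls back, so the database never holds a half-finished update.

```python
import sqlite3

# Sketch: a transaction used for CONSISTENCY, not speed.
con = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
con.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
con.executemany("INSERT INTO accounts VALUES (?, ?)",
                [("alice", 100), ("bob", 50)])

def transfer(con, src, dst, amount):
    con.execute("BEGIN")
    try:
        con.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                    (amount, src))
        (bal,) = con.execute("SELECT balance FROM accounts WHERE name = ?",
                             (src,)).fetchone()
        if bal < 0:
            raise ValueError("insufficient funds")
        con.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                    (amount, dst))
        con.execute("COMMIT")
    except Exception:
        con.execute("ROLLBACK")  # the partial debit never becomes visible
        raise

transfer(con, "alice", "bob", 30)       # succeeds: balances become 70 / 80
try:
    transfer(con, "alice", "bob", 500)  # fails and rolls back: still 70 / 80
except ValueError:
    pass
balances = dict(con.execute("SELECT name, balance FROM accounts"))
```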