Why do you not make anything more than a superficial attempt to interpret your own graphs? For instance, you say:

"Lastly, with VDrift, which is the most demanding test in this article for running off LLVMpipe, the performance is up by an incredible 67%."

Unless we are seeing different graphs, that graph is all over the place, with a performance regression at 1600x900, but you make no mention of that.
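To make the point concrete, here is a minimal sketch of the kind of per-resolution breakdown I would expect, written in Python with entirely hypothetical frame rates (I am not reproducing the article's actual numbers): compute the percentage change at each resolution and flag regressions explicitly instead of quoting a single headline figure.

```python
# Hypothetical FPS results per resolution; these values are invented
# for illustration and are NOT the article's actual data.
baseline = {"1280x720": 30.0, "1600x900": 26.0, "1920x1080": 18.0}
updated  = {"1280x720": 50.0, "1600x900": 24.0, "1920x1080": 30.0}

for resolution, old_fps in baseline.items():
    new_fps = updated[resolution]
    change = (new_fps - old_fps) / old_fps * 100  # percent change
    label = "regression" if change < 0 else "improvement"
    print(f"{resolution}: {old_fps:.1f} -> {new_fps:.1f} FPS "
          f"({change:+.1f}%, {label})")
```

Even a summary like the one this prints would tell readers where a headline gain holds and where it does not.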
Would you please make a better effort? Benchmarking sites that focus on Windows do a much better job of analysis. Here is one example:
Notice how the text analyzing the chart is actually reflective of its contents:

"Initially, the two SAN boxes deliver similar performance, with the Promise box at 2200 IOPS and the ZFS box at 2500 IOPS. The ZFS box with a L2ARC is able to magnify its performance by a factor of ten once the L2ARC is completely populated!"

If you read through Anandtech's articles, you will see insightful comments about why things perform the way that they do. Their site is having some technical difficulties, so their SSD articles, which are the best examples of this, are offline, but just about all of their reviews of new hardware go into architectural detail to explain the results.
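For what it is worth, the magnitude of that L2ARC gain is easy to sanity-check with a back-of-envelope two-tier cache model. The sketch below is my own, with assumed tier speeds (a ~2,500 IOPS disk pool, matching the uncached figure above, and a hypothetical 50,000 IOPS SSD cache); it is not Anandtech's methodology.

```python
def effective_iops(hit_rate, cache_iops, disk_iops):
    """Harmonic-mean model of a two-tier cache: the average service
    time is a hit-rate-weighted mix of the fast and slow tiers."""
    avg_service_time = hit_rate / cache_iops + (1 - hit_rate) / disk_iops
    return 1 / avg_service_time

disk = 2_500    # assumed IOPS of the bare disk pool
cache = 50_000  # assumed IOPS of the SSD cache tier

for hit_rate in (0.0, 0.5, 0.9, 0.95, 0.99):
    iops = effective_iops(hit_rate, cache, disk)
    print(f"hit rate {hit_rate:.0%}: ~{iops:,.0f} IOPS")
```

Under these assumptions, a roughly 95% hit rate lands right around the tenfold figure quoted above, which is exactly the kind of explanation I would like to see accompany a chart.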
Why don't we see that here? Should the fact that this site focuses on Linux mean that a lower standard of benchmarking is acceptable? I find that I can never rely on the headlines or other statements here because they do not reflect the actual benchmark results. I have to study the charts myself, and even then they are difficult to interpret given the paragraph format used to describe the test setup.