Do you mean the relative performance of builds with optimizations disabled? Good performance of the generated code is not the goal of such builds at all. They are mostly useful for two reasons (see the example commands after the list):
1. Disabling optimizations significantly reduces the chance of triggering compiler bugs (which can be specifically desirable for bootstrapping or debugging).
2. It makes compilation itself faster (which speeds up development).
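To make this concrete: the two kinds of builds typically differ only in the flags passed to the compiler. The commands below are just a sketch in GCC/Clang syntax with a placeholder source file name; the exact flags vary from project to project.

    # debugging / bootstrapping build: optimizations off, debug info on
    g++ -O0 -g program.cpp -o program_debug

    # build you would actually benchmark or ship: optimizations on
    g++ -O2 program.cpp -o program_opt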
Now consider the following analogy. Imagine that you ask two guys to walk from point A to point B without telling them the real purpose of this activity. Moreover, the only hint you give them is to be absolutely sure not to slip and fall. But once one of the guys reaches the destination, you suddenly award him the "fastest runner" prize. And if anybody tries to complain, you just say that the conditions were the same, that there surely must be some correlation between how fast a person can walk and how fast he can run, and that the competition is therefore fair and the relative running speeds must still be the same.
The same applies to processors and compilers. An optimized build can easily be several times faster than a non-optimized one, and you generally can't predict this ratio beforehand.
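This is easy to check for yourself. A rough sketch (assuming the well-known single-file smallpt.cpp path tracer as the workload, which may or may not match whatever build the benchmark app actually ships): compile the same source at -O0 and at -O2 and time both binaries. The ratio you get varies widely across workloads, compilers and CPUs, which is exactly why it can't be assumed constant.

    # same source, same machine, only the optimization level differs
    g++ -O0 smallpt.cpp -o smallpt_O0
    g++ -O2 smallpt.cpp -o smallpt_O2
    time ./smallpt_O0    # run with default settings
    time ./smallpt_O2    # usually much faster, but the exact factor is workload-dependent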
And in the real world everything is even trickier. One of the guys could have known beforehand that you have a habit of holding such competitions, so he might easily use this information to his advantage. In any case, the only valid benchmarking method is to enable the best optimizations, because such benchmarks reflect real performance and can't be easily cheated.
PS. Exynos is "faster" than Atom even in the existing SMALLPT test. But this does not change the fact that the test itself is broken.