Originally Posted by bridgman
In the compilation benchmark, you can think of the performance as being measured in "compilations per second" (since a compilation takes many seconds, the numerical value would be less than one).
They are all basically equivalent.
So whatever the benchmark, performance can be stated as "operations per second" for some definition of operation (one compilation, one set of frame renders, etc.). The denominator is power (or average power), which is energy per second. So in all cases:
Performance per Watt = ( operations / sec ) / ( Energy / sec ) = operations / Energy

But since the computational task is the same for each CPU within a given benchmark, we can normalize the operation to always equal 1 (for any benchmark) -- in other words, one operation equals one compilation, one set of frame renderings, etc. Then:

Performance per Watt = 1 / Energy
Bottom line: comparing relative performance per watt among CPUs is equivalent to comparing the reciprocal of the total energy each CPU requires to complete the computation. That total energy can be determined by finding the average power consumed during the computation and multiplying it by the time required to complete it. This is exactly what tuke81 did in his graph in this thread.
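To make the equivalence concrete, here is a small Python sketch. The CPU names, power draws, and completion times are made-up numbers purely for illustration; the point is that (ops/sec) / watts comes out identical to 1 / (total energy) when the task counts as one operation.

```python
# Hypothetical measurements for the same compile job on three CPUs:
# (average power in watts, time to complete in seconds). Values are invented.
cpus = {
    "CPU A": (65.0, 120.0),
    "CPU B": (95.0, 80.0),
    "CPU C": (45.0, 200.0),
}

for name, (avg_power_w, time_s) in cpus.items():
    energy_j = avg_power_w * time_s        # total energy = average power x time
    perf = 1.0 / time_s                    # one normalized operation per run
    perf_per_watt = perf / avg_power_w     # (ops/sec) / watts
    # perf_per_watt equals 1 / energy_j, as the derivation above says
    print(f"{name}: energy = {energy_j:.0f} J, "
          f"perf/W = {perf_per_watt:.3e}, 1/energy = {1.0 / energy_j:.3e}")
```

So ranking CPUs by performance per watt and ranking them by lowest total energy for the task give the same ordering, which is why plotting measured energy (as tuke81 did) is a valid perf/W comparison.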