Originally Posted by

**bridgman**
I guess it depends on whether the benchmarks are run for a specific period of time and you "count the results" (e.g. Tropics, where the "score" is the FPS) or whether they run until a certain amount of work is done and then stop (e.g. C-Ray or a timed kernel compile, where the "score" is effectively 1 over the time taken to complete the work).

In the first case time cancels out, because the score is effectively # frames over energy required, or (average FPS * time) / (average W * time). In the second case the score is effectively a constant (the fixed amount of work) over energy required, or 1 / (average W * time). The denominator is the same in both cases, but in the second case there is no time term in the numerator.

They are all basically equivalent.

In the compilation benchmark, you can think of the performance as being measured in "compilations per second" (since the compilation takes many seconds, the numerical value would be less than one).

So whatever the benchmark, the performance can be stated as "operations per second" for some definition of operation (compilation, frame render, etc.). The denominator is Power (or average Power), which equals Energy per second. So in all cases:

Code:

Performance per Watt = ( operations / sec ) / ( Energy / sec ) = operations / Energy
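To make the unit cancellation concrete, here is a quick sketch with made-up numbers (not from any real benchmark run) showing that (operations/sec) / (Energy/sec) and operations / Energy give the same value:

```python
# Hypothetical benchmark numbers, purely for illustration.
frames = 6000          # operations completed during the run
run_time_s = 120.0     # seconds the benchmark ran
avg_power_w = 50.0     # average power draw in Watts (Joules/sec)

perf = frames / run_time_s                            # operations per second (FPS)
perf_per_watt = perf / avg_power_w                    # (ops/sec) / (Joules/sec)
ops_per_joule = frames / (avg_power_w * run_time_s)   # operations / Energy

print(perf_per_watt, ops_per_joule)  # identical: time cancels out
```

With these numbers both expressions come out to 1.0 frame per Joule, as expected from the algebra.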

But since the computational task is the same for each CPU within a given benchmark, we can define (normalize) the operation to always be equal to 1 (for any benchmark) -- in other words, one operation equals one compilation, one set of frame renderings, etc.

Bottom line: comparing relative Performance per Watt among various CPUs is equivalent to comparing the reciprocal of the total energy each CPU needs to complete the computation. And the total energy required can be determined by finding the average power consumed during the computation and multiplying it by the time required to complete the computation. This is exactly what tuke81 did in his graph in this thread.
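As a sketch of that bottom line, with two hypothetical CPUs (the power and time figures below are invented for illustration), the perf/W ranking is just the ranking by 1 / (average Power * time):

```python
# Made-up measurements: average power during the benchmark and
# time taken to finish the same fixed workload on each CPU.
cpus = {
    "cpu_a": {"avg_power_w": 65.0, "time_s": 100.0},
    "cpu_b": {"avg_power_w": 95.0, "time_s": 60.0},
}

for name, m in cpus.items():
    energy_j = m["avg_power_w"] * m["time_s"]  # Energy = avg Power * time
    perf_per_watt = 1.0 / energy_j             # one normalized operation / Energy
    print(f"{name}: {energy_j:.0f} J, perf/W = {perf_per_watt:.3e}")
```

Note that cpu_b draws more power (95 W vs 65 W) but finishes sooner, so it uses less total energy (5700 J vs 6500 J) and therefore comes out ahead on perf/W. Higher average power alone tells you nothing until you multiply by time.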