I do not agree with this. I suspect you have not studied complexity theory for parallel computers. There are many problems that are P-complete, i.e., believed to be inherently sequential and not to parallelize well.
I think it is wrong to extrapolate from 4 CPUs up to hundreds of CPUs.
To me, the price is not important.
As for the benchmark, I suspect it is designed not to be database-bound; a benchmark that were would be a bad one. Most probably the speed of the RAM is what matters, not the DB.
What we see is that CPU utilization is not at 99%, and that can happen for a number of reasons:
1. There are not enough incoming requests to max out the server. It could be that the number of SAP clients used is too small, that they have congested the network, etc. This would fit nicely with the fact that the HP box uses CPUs that are 200 MHz faster.
2. The CPU is waiting for I/O: either there are reads/writes to disk, or the DB is holding the CPU back.
3. HP is displaying only the userspace CPU utilization and is not including the kernel CPU utilization.
4. SAP is deadlocking on semaphores on Linux but not on Solaris, for whatever reason.
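On point 3: the user/kernel split is easy to check on Linux from the aggregate `cpu` line in /proc/stat. Here is a minimal sketch (the sample line and field values are made up for illustration); a tool that reports only the user share would understate how busy the machine really is.

```python
# Parse a /proc/stat-style "cpu" line (Linux). Field order:
# user, nice, system, idle, iowait, irq, softirq.
# SAMPLE is fabricated data, not from any real benchmark run.
SAMPLE = "cpu 4705 150 1120 16250 865 6 12"

def cpu_split(stat_line):
    fields = [int(x) for x in stat_line.split()[1:8]]
    user, nice, system, idle, iowait, irq, softirq = fields
    total = sum(fields)
    return {
        # What a userspace-only display would show:
        "user_pct": 100.0 * (user + nice) / total,
        # The kernel share such a display would miss:
        "kernel_pct": 100.0 * (system + irq + softirq) / total,
        "idle_pct": 100.0 * idle / total,
        "iowait_pct": 100.0 * iowait / total,
    }

print(cpu_split(SAMPLE))
```

On a live system you would read `/proc/stat` twice and diff the counters, since they are cumulative ticks since boot.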
We simply do not know why. Unless we perform the benchmark ourselves and take significant time to debug it, we cannot say why HP got the result they got.
All I can say is that very few things involved in scaling across CPUs inside the kernel can lower CPU utilization; in 99% of cases where you see low CPU utilization, it is either because the workload is I/O bound or because there are not enough requests to process.