AMD EPYC Milan Still Gives Intel Sapphire Rapids Tough Competition In The Cloud

  • smitty3268
    replied
    Originally posted by onlyLinuxLuvUBack View Post
    • Look for information on specific tools or services offered by Google Cloud for monitoring power usage, such as Cloud IoT Core mentioned in result [4]. This may require setting up and configuring the tool/service to get accurate power usage information.
    • Explore third-party tools or services that integrate with Google Cloud and provide features for tracking or monitoring power usage. This may involve researching and evaluating different options to find a solution that fits your specific needs.
    • [4] Contact Google Cloud customer support for assistance on getting power usage information. They may be able to provide guidance on the best approach based on your specific use case and requirements.
    Measuring power use on a cloud system is pointless. You'd just be seeing how busy the system is overall, based on other customers' activity on the same physical machine as the one you happen to be assigned to at the time.

    It's not difficult to find reviews of whole physical servers if you want to gauge the general power efficiency. Most servers are capped at the same power usage, so you can just divide that by the # of cores.
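
    For illustration only, a back-of-the-envelope version of that division, with an assumed 400 W cap and 64 cores (neither figure comes from the article):

    Code:
    # Rough per-core power estimate from a whole-server power cap.
    # The 400 W cap and 64-core count are assumed example values, not measured figures.
    power_cap_watts = 400
    core_count = 64

    watts_per_core = power_cap_watts / core_count
    print(f"~{watts_per_core:.2f} W per core at the cap")  # prints ~6.25 W per core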



  • billbo
    replied
    Originally posted by Michael View Post

    A geo mean overall is used since it's really not possible to have an "average" otherwise. To some, AI software may be 'niche' while certainly not to others; many cloud instances are just used for CI / build farm type jobs, database servers, etc etc.... So to weight different tests on real-world usage is next to impossible given diverse use-cases and diverse user interests. None of the tests run were purely synthetic like say stress-ng, hackbench, etc, but all represent some real workload.
    I believe that some of the build-to-order gaming PC vendors have a "tell us the games you want to play / graphics settings and we'll tell you what components to use" configuration guru. Perhaps in your copious spare time, an enhancement to the openbenchmarking.org site could be added where someone could click on the list of benchmarks (or category groups) with expected relative usage, and every system ever tested would be rank-ordered on that custom workload set. If I was still in the business of buying lots of hardware, I could see even paying for such a service. Or do you already do custom testing and analysis for corporate clients? If so, software like I suggest might make that easier to do.
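
    A rough sketch of how that kind of "configuration guru" ranking could work, assuming hypothetical benchmark categories, scores (higher is better), and user-supplied usage weights; none of the names or numbers below come from OpenBenchmarking.org:

    Code:
    from math import prod

    # Hypothetical results: system -> benchmark category -> score (higher is better).
    results = {
        "EPYC Milan instance":           {"compile": 118.0, "database": 95.0, "ai": 80.0},
        "Xeon Sapphire Rapids instance": {"compile": 102.0, "database": 99.0, "ai": 112.0},
    }

    # User-supplied relative usage for their custom workload set (weights sum to 1.0).
    weights = {"compile": 0.6, "database": 0.3, "ai": 0.1}

    def weighted_geomean(scores, weights):
        """Weighted geometric mean over the categories the user selected."""
        return prod(scores[c] ** w for c, w in weights.items())

    # Rank every tested system on the custom workload set, best first.
    for system in sorted(results, key=lambda s: weighted_geomean(results[s], weights), reverse=True):
        print(system, round(weighted_geomean(results[system], weights), 1))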



  • onlyLinuxLuvUBack
    replied
    Originally posted by Michael View Post

    There are no power metrics exposed on Google Cloud (or most other public clouds...); the PowerCap/RAPL interfaces and the like are not accessible.
    • Look for information on specific tools or services offered by Google Cloud for monitoring power usage, such as Cloud IoT Core mentioned in result [4]. This may require setting up and configuring the tool/service to get accurate power usage information.
    • Explore third-party tools or services that integrate with Google Cloud and provide features for tracking or monitoring power usage. This may involve researching and evaluating different options to find a solution that fits your specific needs.
    • [4] Contact Google Cloud customer support for assistance on getting power usage information. They may be able to provide guidance on the best approach based on your specific use case and requirements.



  • ddriver
    replied
    Close enough. Thanks!



  • Michael
    replied
    Originally posted by loganj View Post
    Did I miss something, or are there no power consumption graphs?
    There are no power metrics exposed on Google Cloud (or most other public clouds...); the PowerCap/RAPL interfaces and the like are not accessible.
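
    For anyone wondering what's meant by that: on bare-metal Linux the RAPL counters are exposed through the powercap sysfs tree, so a sketch like the one below can estimate package power, but on a typical cloud VM that path simply isn't present or readable. The 0.5-second sampling window is an arbitrary choice for the example, and counter wraparound is ignored for brevity.

    Code:
    import time

    # Energy counter (microjoules) for the first RAPL package domain on bare-metal Linux.
    # On most cloud instances this file does not exist or cannot be read, which is the point above.
    RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"

    def read_uj():
        with open(RAPL_ENERGY) as f:
            return int(f.read())

    try:
        start = read_uj()
        time.sleep(0.5)
        end = read_uj()
        watts = (end - start) / 1e6 / 0.5  # microjoules -> joules, divided by elapsed seconds
        print(f"~{watts:.1f} W package power")
    except (FileNotFoundError, PermissionError):
        print("RAPL/powercap not exposed here (expected on a cloud instance)")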



  • Michael
    replied
    Originally posted by ddriver View Post
    Ok, so what about a geo mean for every test group? You have them more or less grouped by use case as they are paged in the review.

    That would give users interested in particular use cases a quick and accurate glance at the average performance metrics they are interested in, without the noise from everything else they might not care that much about.
    That is already there if you go to the OB link and click "Show Geometric Means Per-Suite/Category".

    e.g. https://openbenchmarking.org/result/...7a1e15ea68bf98

    (Though there are some caveats / corner cases, like showing the "NVIDIA GPU Compute Tests" graph there for workloads that can be accelerated via NVIDIA GPUs even though that wasn't the case for this actual round of testing, and other limitations from it all being automated, etc, so make sure to use common sense when looking at those data points.)



  • loganj
    replied
    Did I miss something, or are there no power consumption graphs?



  • ddriver
    replied
    Ok, so what about a geo mean for every test group? You have them more or less grouped by use case as they are paged in the review.

    That would give users interested in particular use cases a quick and accurate glance at the average performance metrics they are interested in, without the noise from everything else they might not care that much about.



  • Michael
    replied
    Originally posted by ddriver View Post
    AMD's general purpose performance appears to still be far ahead of Intel's; the latter manages to catch up on the average via some well-optimized corner cases.

    It would be nice to have an "average" performance index where each test result is weighted based on its real world usage. Some of those are synthetic benches, some are niche software, some are routinely used throughout the industry.
    A geo mean overall is used since it's really not possible to have an "average" otherwise. To some, AI software may be 'niche' while certainly not to others; many cloud instances are just used for CI / build farm type jobs, database servers, etc etc.... So to weight different tests on real-world usage is next to impossible given diverse use-cases and diverse user interests. None of the tests run were purely synthetic like say stress-ng, hackbench, etc, but all represent some real workload.
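
    A minimal sketch of why the geometric mean is the usual choice here: with invented relative results (1.0 = parity, higher is better), a single well-optimized corner case pulls the arithmetic mean up far more than it pulls the geometric mean. The numbers are made up and this is not the actual OpenBenchmarking.org code:

    Code:
    from statistics import geometric_mean, mean

    # Invented relative results against a baseline; the 3.40 entry plays the
    # role of one heavily optimized corner-case workload.
    relative_results = [1.05, 1.12, 0.97, 1.02, 3.40]

    print("arithmetic mean:", round(mean(relative_results), 3))            # ~1.512, dragged up by the outlier
    print("geometric mean :", round(geometric_mean(relative_results), 3))  # ~1.317, far less sensitive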



  • ddriver
    replied
    AMD's general purpose performance appears to still be far ahead of Intel's; the latter manages to catch up on the average via some well-optimized corner cases.

    It would be nice to have an "average" performance index where each test result is weighted based on its real world usage. Some of those are synthetic benches, some are niche software, some are routinely used throughout the industry.

