AMD EPYC Milan Still Gives Intel Sapphire Rapids Tough Competition In The Cloud


  • AMD EPYC Milan Still Gives Intel Sapphire Rapids Tough Competition In The Cloud

    Phoronix: AMD EPYC Milan Still Gives Intel Sapphire Rapids Tough Competition In The Cloud

    While waiting for AMD 4th Gen EPYC "Genoa" instances to become available via the major public cloud providers, I was curious to see how existing AMD EPYC Milan instances compare to Intel's new Sapphire Rapids instances in public preview on Google Cloud. While expecting some friendly competition, at the same vCPU size EPYC Milan managed to deliver not only better performance-per-dollar but even better raw performance in numerous workloads against the Google Cloud C3 Sapphire Rapids instances.


  • #2
    AMD's general-purpose performance appears to still be far ahead of Intel's; the latter manages to catch up on the average via some well-optimized corner cases.

    It would be nice to have an "average" performance index where each test result is weighted based on its real-world usage. Some of those are synthetic benches, some are niche software, some are routinely used throughout the industry.

    • #3
      Originally posted by ddriver:
      AMD's general-purpose performance appears to still be far ahead of Intel's; the latter manages to catch up on the average via some well-optimized corner cases.

      It would be nice to have an "average" performance index where each test result is weighted based on its real-world usage. Some of those are synthetic benches, some are niche software, some are routinely used throughout the industry.
      A geo mean overall is used since it's really not possible to have an "average" otherwise. To some, AI software may be 'niche' while certainly not to others; many cloud instances are just used for CI / build farm type jobs, database servers, etc. So weighting different tests on real-world usage is next to impossible given diverse use-cases and diverse user interests. None of the tests run were purely synthetic like, say, stress-ng or hackbench; all represent some real workload.
      Michael Larabel
      https://www.michaellarabel.com/
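
      For illustration, a minimal sketch (plain Python, not Phoronix Test Suite code; the test names, scores, and weights below are made up) of both the overall geometric mean used in the article and the weighted variant ddriver is asking for:

      import math

      def geo_mean(scores):
          # nth root of the product, computed in log space for numerical stability
          return math.exp(sum(math.log(s) for s in scores) / len(scores))

      def weighted_geo_mean(scores, weights):
          # exp of the weighted average of logs; weights need not sum to 1
          total = sum(weights)
          return math.exp(sum(w * math.log(s) for s, w in zip(scores, weights)) / total)

      # Hypothetical relative scores for one instance type (higher is better)
      results = {"compile": 112.0, "database": 98.5, "ai-inference": 140.2}
      weights = {"compile": 3.0, "database": 2.0, "ai-inference": 0.5}  # made-up usage weights

      tests = list(results)
      print(geo_mean([results[t] for t in tests]))
      print(weighted_geo_mean([results[t] for t in tests], [weights[t] for t in tests]))

      A geometric mean is used rather than an arithmetic one because benchmark scores sit on different scales; averaging in log space keeps any single large-valued test from dominating the composite.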

      • #4
        OK, so what about a geo mean for every test group? You have them more or less grouped by use case as they are paged in the review.

        That would give users interested in particular use cases a quick and accurate glance at the average performance metrics they are interested in, without the noise from everything else they might not care that much about.

        • #5
          Did I miss something, or are there no power consumption graphs?

          • #6
            Originally posted by ddriver:
            OK, so what about a geo mean for every test group? You have them more or less grouped by use case as they are paged in the review.

            That would give users interested in particular use cases a quick and accurate glance at the average performance metrics they are interested in, without the noise from everything else they might not care that much about.
            That already exists if you go to the OB link and click "Show Geometric Means Per-Suite/Category".

            e.g. https://openbenchmarking.org/result/...7a1e15ea68bf98

            (Though there are some caveats / corner cases, like showing the "NVIDIA GPU Compute Tests" graph there for workloads that can be accelerated via NVIDIA GPUs even though that wasn't the case for this round of testing, plus other limitations from it all being automated, so use common sense when looking at those data points.)
            Michael Larabel
            https://www.michaellarabel.com/
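
            As a rough sketch of what that per-suite/category view computes (the categories and numbers below are hypothetical, not actual OpenBenchmarking.org data):

            from collections import defaultdict
            import math

            # Hypothetical (test, category, score) results; higher is better
            results = [
                ("build-linux-kernel", "Compilation", 105.0),
                ("build-llvm", "Compilation", 99.0),
                ("pgbench", "Database", 120.0),
                ("redis", "Database", 88.0),
            ]

            by_category = defaultdict(list)
            for test, category, score in results:
                by_category[category].append(score)

            # One geometric mean per category, mirroring the per-suite graphs
            for category, scores in sorted(by_category.items()):
                gm = math.exp(sum(math.log(s) for s in scores) / len(scores))
                print(f"{category}: geometric mean = {gm:.1f}")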

            • #7
              Originally posted by loganj:
              Did I miss something, or are there no power consumption graphs?
              There are no power metrics exposed on Google Cloud (or most other public clouds...); the PowerCap/RAPL interfaces and the like are not accessible.
              Michael Larabel
              https://www.michaellarabel.com/
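
              For context, on bare metal those counters are usually read from the powercap sysfs interface; a minimal sketch (assuming a Linux host and ignoring counter wraparound for brevity) shows why this comes up empty on a cloud instance:

              import glob
              import time

              # RAPL package energy counters via the powercap sysfs interface;
              # on cloud VMs these files are typically absent or unreadable.
              zones = glob.glob("/sys/class/powercap/intel-rapl:*/energy_uj")
              if not zones:
                  print("No RAPL/powercap zones exposed (expected on a cloud instance).")
              else:
                  def read_uj(path):
                      with open(path) as f:
                          return int(f.read())

                  before = [read_uj(z) for z in zones]
                  time.sleep(1.0)
                  after = [read_uj(z) for z in zones]
                  # energy_uj is cumulative microjoules, so a one-second delta approximates watts
                  for zone, b, a in zip(zones, before, after):
                      print(f"{zone}: ~{(a - b) / 1e6:.1f} W")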

              • #8
                Close enough. Thanks!

                • #9
                  Originally posted by Michael:

                  There are no power metrics exposed on Google Cloud (or most other public clouds...); the PowerCap/RAPL interfaces and the like are not accessible.
                  • Look for information on specific tools or services offered by Google Cloud for monitoring power usage, such as Cloud IoT Core. This may require setting up and configuring the tool/service to get accurate power usage information.
                  • Explore third-party tools or services that integrate with Google Cloud and provide features for tracking or monitoring power usage. This may involve researching and evaluating different options to find a solution that fits your specific needs.
                  • Contact Google Cloud customer support for assistance on getting power usage information. They may be able to provide guidance on the best approach based on your specific use case and requirements.

                  • #10
                    Originally posted by Michael:

                    A geo mean overall is used since it's really not possible to have an "average" otherwise. To some, AI software may be 'niche' while certainly not to others; many cloud instances are just used for CI / build farm type jobs, database servers, etc. So weighting different tests on real-world usage is next to impossible given diverse use-cases and diverse user interests. None of the tests run were purely synthetic like, say, stress-ng or hackbench; all represent some real workload.
                    I believe that some of the build-to-order gaming PC vendors have a "tell us the games you want to play / graphics settings and we'll tell you what components to use" configuration guru. Perhaps in your copious spare time, an enhancement to the openbenchmarking.org site could be added where someone could click on a list of benchmarks (or category groups) with expected relative usage, and every system ever tested would be rank-ordered on that custom workload set. If I were still in the business of buying lots of hardware, I could see even paying for such a service. Or do you already do custom testing and analysis for corporate clients? If so, software like I suggest might make that easier to do.
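
                    The ranking side of that idea is simple to sketch: take each system's scores on the user-selected tests, apply the declared usage weights, and sort by weighted geometric mean. Everything below is hypothetical data, not an existing openbenchmarking.org feature:

                    import math

                    # Hypothetical per-system scores on the chosen benchmarks (higher is better)
                    systems = {
                        "EPYC-Milan-c2d": {"compile": 110.0, "database": 125.0},
                        "SPR-c3": {"compile": 118.0, "database": 104.0},
                    }
                    weights = {"compile": 2.0, "database": 1.0}  # user-declared relative usage

                    def weighted_score(results):
                        # weighted geometric mean over the selected tests
                        total = sum(weights.values())
                        return math.exp(sum(weights[t] * math.log(s) for t, s in results.items()) / total)

                    for name in sorted(systems, key=lambda n: weighted_score(systems[n]), reverse=True):
                        print(f"{name}: {weighted_score(systems[name]):.1f}")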
