Benchmarks Of 2nd Gen AMD EPYC On Amazon EC2 Against Intel Xeon, Graviton2


  • #11
    Originally posted by BS86 View Post
    c instances have only half the RAM that m instances have at the same core count, so the comparison is a bit off here.
    Sure. But doesn't that tilt the subsidy towards even larger sums?
    You have a custom ultra-high-end ARM core, a custom motherboard and system design,
    plus double the RAM of a mass-produced EPYC system?

    It would be interesting to see figures on how much Amazon is pouring into development and production of these cores.
    Annapurna Labs is hardly a zero-cost operation...



    • #12
      Originally posted by milkylainen View Post

      Sure. But doesn't that tilt the subsidy towards even larger sums?
      You have a custom ultra-high-end ARM core, a custom motherboard and system design,
      plus double the RAM of a mass-produced EPYC system?
      Not really. Those monthly costs include energy consumption. I expect those ARM cores to consume a lot less energy, which justifies the "low" price.

      For the pricing of the *a instances, AWS says it takes the price of the non-*a instances and reduces it by 10% because the CPUs are cheaper and also consume less energy (see the price difference between c5 and c5a, and between m5 and m5a).

      The c5 instances were introduced at a lower monthly cost than the c4 instances even though they offered a lot more performance, and AWS said the lower cost was down to reduced energy consumption, making it a no-brainer to switch from c4 to c5. That's how I know that energy consumption factors into the monthly AWS costs in the price table.
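      As a rough sketch of that pricing arithmetic (the hourly rates below are placeholders for illustration, not actual AWS prices; only the ~10% *a discount and the usual 730-hour month are taken from the discussion above):

# Sketch of the *a discount logic described above (Python).
# Hourly rates are illustrative placeholders, not real AWS prices.
HOURS_PER_MONTH = 730  # common on-demand monthly approximation

def monthly_cost(hourly_rate: float) -> float:
    return hourly_rate * HOURS_PER_MONTH

c5_hourly = 1.00                  # placeholder on-demand rate for some c5 size
c5a_hourly = c5_hourly * 0.90     # *a instances priced roughly 10% lower

print(f"c5  monthly: ${monthly_cost(c5_hourly):.2f}")
print(f"c5a monthly: ${monthly_cost(c5a_hourly):.2f}")
print(f"saving:      ${monthly_cost(c5_hourly) - monthly_cost(c5a_hourly):.2f}")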
      Last edited by BS86; 05 June 2020, 06:47 AM.



      • #13
        Would be nice to see some HPC benchmarks.



        • #14
          Originally posted by BS86 View Post

          Not really. Those monthly costs include energy consumption. I expect those ARM cores to consume a lot less energy, which justifies the "low" price.

          For the pricing of the *a instances, AWS says it takes the price of the non-*a instances and reduces it by 10% because the CPUs are cheaper and also consume less energy (see the price difference between c5 and c5a, and between m5 and m5a).

          The c5 instances were introduced at a lower monthly cost than the c4 instances even though they offered a lot more performance, and AWS said the lower cost was down to reduced energy consumption, making it a no-brainer to switch from c4 to c5. That's how I know that energy consumption factors into the monthly AWS costs in the price table.
          Per Anandtech:

          https://www.anandtech.com/show/15578...-intel-and-amd

          Total power consumption of the SoC is something that Amazon wasn’t too willing to disclose in the context of our article – the company is still holding some aspects of the design close to its chest even though we were able to test the new chipset in the cloud. Given the chip’s more conservative clock rate, Arm’s projected figure of around 105W for a 64-core 2.6GHz implementation, and Ampere’s recent disclosure of their 80-core 3GHz N1 server chip coming in at 210W, we estimate that the Graviton2 must come in around anywhere between 80W as a low estimate to around 110W for a pessimistic projection.
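          A back-of-the-envelope version of that interpolation (scaling linearly with core count and frequency is my own simplification for illustration, not Anandtech's method, and it knowingly overstates the Altra-derived number because uncore/I-O power does not scale down that way):

# Naively scale the two disclosed Neoverse N1 power figures to Graviton2's
# configuration (64 cores @ 2.5 GHz). Linear scaling with cores and frequency
# is a crude assumption; real SoC power does not scale this simply.
def scale_power(watts: float, cores: int, ghz: float,
                target_cores: int = 64, target_ghz: float = 2.5) -> float:
    return watts * (target_cores / cores) * (target_ghz / ghz)

from_arm_projection = scale_power(105, cores=64, ghz=2.6)  # Arm's 64c/2.6GHz figure
from_ampere_altra   = scale_power(210, cores=80, ghz=3.0)  # Ampere's 80c/3GHz figure

print(f"scaled from Arm's projection: ~{from_arm_projection:.0f} W")
print(f"scaled from Ampere Altra:     ~{from_ampere_altra:.0f} W")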



          • #15
            Originally posted by BS86 View Post
            The c5a instances are meant to be an alternative for the c5 instances. Why did you not benchmark c5a vs c5?
            I would also like to see these new c5a instances compared against the existing c5 instances. Given how AWS has labelled them, they are clearly meant as alternatives to each other.



            • #16
              Originally posted by BS86 View Post

              Not really. Those monthly costs include energy consumption. I expect those ARM cores to consume a lot less energy, which justifies the "low" price.

              For the pricing of the *a instances, AWS says it takes the price of the non-*a instances and reduces it by 10% because the CPUs are cheaper and also consume less energy (see the price difference between c5 and c5a, and between m5 and m5a).

              The c5 instances were introduced at a lower monthly cost than the c4 instances even though they offered a lot more performance, and AWS said the lower cost was down to reduced energy consumption, making it a no-brainer to switch from c4 to c5. That's how I know that energy consumption factors into the monthly AWS costs in the price table.
              There is no magic sauce in performance. It does not boil down to ARM or x86 being a panacea for energy efficiency.
              I expect those ARM cores to consume more or less the same power given the same process node, transistor count, cache sizes, etc.
              I've been through this like a zillion times on Phoronix already.
              ISA choice between ARM and x86 has almost no relevance to power efficiency in the ultra-high end.

              Edit: in case it was not obvious from the text... uArch implementation > ISA choice for power efficiency.
              Last edited by milkylainen; 05 June 2020, 12:01 PM.



              • #17
                I always find it strange how these systems seem to max out at 96 vCPUs on a 128-thread system. Do they really need to reserve 16 cores (32 threads) for the VM hypervisor?



                • #18
                  Originally posted by AmericanLocomotive View Post
                  I always find it strange how these systems seem to max out at 96 vCPUs on a 128-thread system. Do they really need to reserve 16 cores (32 threads) for the VM hypervisor?
                  It's for the AWS CCS admins to mine stuff while the customer is paying.



                  • #19
                    Originally posted by milkylainen View Post

                    There is no magic sauce in performance. It does not boil down to ARM or x86 being a panacea for energy efficiency.
                    I expect those ARM cores to consume more or less the same power [...]
                    well:
                    Originally posted by edwaleni View Post

                    Per Anandtech:

                    https://www.anandtech.com/show/15578...-intel-and-amd

                    [...]Given the chip’s more conservative clock rate, Arm’s projected figure of around 105W for a 64-core 2.6GHz implementation, and Ampere’s recent disclosure of their 80-core 3GHz N1 server chip coming in at 210W, we estimate that the Graviton2 must come in around anywhere between 80W as a low estimate to around 110W for a pessimistic projection.
                    about 100 W for Graviton2 vs. about 300 W for Xeons/EPYCs...



                    • #20
                      Originally posted by BS86 View Post

                      well:


                      about 100 W for Graviton2 vs. about 300 W for Xeons/EPYCs...
                      We don't know how many cores are on those EPYC parts. Assuming it's 64, the tested configs here would have been roughly 140 W for the Rome share vs. roughly 100 W for the ARM config. I also think it's safe to say that a 280 W Rome part is being pushed past its maximum efficiency point, so a power-use comparison would probably make more sense against a lower-clocked Rome machine; there's no way of knowing whether the same applies to the Graviton machine. Changes like that on either side would obviously make the performance analysis here useless, though. Long story short: this is a complicated subject, and we don't have nearly enough information to draw any firm conclusions.
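                      For anyone who wants to play with those assumptions, here is a minimal perf-per-watt sketch; the power figures come from the guesses above, and the relative-performance numbers are pure placeholders to be replaced with real benchmark geomeans:

# Illustrative perf-per-watt comparison under the assumptions stated above:
# the Rome instance is treated as half of a 280 W, 64-core host (~140 W share)
# and Graviton2 as ~100 W (middle of the 80-110 W estimate quoted earlier).
# The relative_perf values are placeholders, not measurements.
def perf_per_watt(relative_perf: float, watts: float) -> float:
    return relative_perf / watts

rome_watts      = 280 / 2   # assumed share of a 280 W, 64-core Rome host
graviton2_watts = 100       # rough midpoint of the quoted 80-110 W estimate

rome_perf      = 1.00       # placeholder: normalise the Rome result to 1.0
graviton2_perf = 0.90       # placeholder relative result, not a measurement

print(f"Rome      perf/W: {perf_per_watt(rome_perf, rome_watts):.4f}")
print(f"Graviton2 perf/W: {perf_per_watt(graviton2_perf, graviton2_watts):.4f}")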

