Benchmarks Of 2nd Gen AMD EPYC On Amazon EC2 Against Intel Xeon, Graviton2


  • PerformanceExpert
    replied
    Originally posted by milkylainen View Post

    Prices are more than likely heavily subsidized by Amazon for their own solution.
    You can't make your own ultra-high-end CPU, motherboards, etc., and fab them at low volume thinking they will compete price-wise with product families made in counts of millions.
    Even if you could make it dirt cheap, the R&D cost per unit in this volume range is stupidly high.
    You're grossly underestimating the scale of AWS - they have $35B revenue and many millions of servers. They already design most of the hardware themselves because at this scale it is significantly cheaper. Designing your own server chip is the obvious next step. Even if it costs, say, $1B, that's just $100 per chip at 10 million units.
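As a back-of-envelope check on that amortization claim (the $1B R&D cost and 10-million-unit volume are the illustrative figures from the post, not disclosed AWS numbers):

```python
# Spread a fixed R&D cost evenly over every unit shipped.
# $1B and 10M units are the poster's illustrative assumptions.
def amortized_cost_per_unit(rnd_cost_usd: float, units: int) -> float:
    """Fixed development cost divided across all units shipped."""
    return rnd_cost_usd / units

print(amortized_cost_per_unit(1_000_000_000, 10_000_000))  # → 100.0
```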


  • PerformanceExpert
    replied
    Originally posted by smitty3268 View Post

    We don't know how many cores are on those EPYC parts. Assuming it's 64, the tested configs here would have been ~140W for the Rome config vs ~100W for the ARM config. I also think it's safe to say that a 280W Rome config is being pushed past its maximum efficiency point, so a power comparison would probably make more sense against a lower-clocked Rome machine. There's no way of knowing whether the same is true for the Graviton machine as well. Those kinds of changes on either side would obviously make the performance analysis here useless, though. Long story short: this is a complicated subject and we don't have nearly enough information to draw any firm conclusions.
    In general you can't just claim 32 cores use half the power of 64 cores! We don't have enough details on the 7R32 (base or turbo clocks), but there are 16-core EPYC parts with a 240W TDP, so 32 cores could well use 280W. Graviton2 is likely more power-optimized than the 7R32, but using a half to a third of the power for equivalent performance is very efficient.
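A minimal sketch of why package power doesn't scale linearly with core count — the per-core and uncore wattages below are illustrative assumptions, not measured EPYC figures:

```python
# Package power = cores * per-core power + a roughly fixed uncore share
# (I/O die, memory controllers, fabric). Halving cores doesn't halve power.
def package_power(cores: int, per_core_w: float, uncore_w: float) -> float:
    return cores * per_core_w + uncore_w

full = package_power(64, 3.5, 56.0)  # 280.0 W for the 64-core part
half = package_power(32, 3.5, 56.0)  # 168.0 W, well above 280/2 = 140 W
print(full, half)
```

In practice a 32-core part may also clock higher within the same power budget, pushing it even further from the naive half-power figure.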


  • smitty3268
    replied
    Originally posted by BS86 View Post

    well:


    about 100W for Graviton2 vs about 300W for the Xeons/EPYCs ...
    We don't know how many cores are on those EPYC parts. Assuming it's 64, the tested configs here would have been ~140W for the Rome config vs ~100W for the ARM config. I also think it's safe to say that a 280W Rome config is being pushed past its maximum efficiency point, so a power comparison would probably make more sense against a lower-clocked Rome machine. There's no way of knowing whether the same is true for the Graviton machine as well. Those kinds of changes on either side would obviously make the performance analysis here useless, though. Long story short: this is a complicated subject and we don't have nearly enough information to draw any firm conclusions.


  • BS86
    replied
    Originally posted by milkylainen View Post

    There is no magic sauce in performance. Neither ARM nor x86 is a panacea for energy efficiency.
    I expect those ARM cores to consume more or less the same power [...]
    well:
    Originally posted by edwaleni View Post

    Per Anandtech:

    https://www.anandtech.com/show/15578...-intel-and-amd

    [...]Given the chip’s more conservative clock rate, Arm’s projected figure of around 105W for a 64-core 2.6GHz implementation, and Ampere’s recent disclosure of their 80-core 3GHz N1 server chip coming in at 210W, we estimate that the Graviton2 must come in around anywhere between 80W as a low estimate to around 110W for a pessimistic projection.
    about 100W for Graviton2 vs about 300W for the Xeons/EPYCs ...


  • milkylainen
    replied
    Originally posted by AmericanLocomotive View Post
    I always find it strange how these systems seem to max out around 96 vCPUs on a 128 thread system. Do they really need to reserve 12 cores for the VM Hypervisor?
    It's for the AWS CCS admins to mine stuff while the customer is paying.


  • AmericanLocomotive
    replied
    I always find it strange how these systems seem to max out around 96 vCPUs on a 128-thread system. Do they really need to reserve 16 cores (32 threads) for the VM hypervisor?


  • milkylainen
    replied
    Originally posted by BS86 View Post

    not really. Those monthly costs include energy consumption. I expect those ARM cores to consume a lot less energy and thus justify the "low" price.

    For the pricing of the *a instances, AWS says that they take the price of the non-*a instances and reduce it by 10% because the CPUs are cheaper and also consume less energy (see the price difference c5 vs c5a and m5 vs m5a).

    The c5 instances were introduced at a lower monthly cost than the c4 instances even though the c5 instances offered a lot more performance, and AWS said the lower cost was due to reduced energy consumption, making the switch from c4 to c5 a no-brainer. That's why I know that energy consumption factors into the monthly AWS costs in the price table.
    There is no magic sauce in performance. Neither ARM nor x86 is a panacea for energy efficiency.
    I expect those ARM cores to consume more or less the same power given the same process node, transistor count, cache sizes, etc.
    I've been through this like a zillion times on Phoronix already.
    ISA choice between ARM and x86 has almost no relevance to power efficiency at the ultra-high end.

    Edit: If it was not obvious by the text... uArch implementation > ISA choice for power efficiency.
    Last edited by milkylainen; 06-05-2020, 12:01 PM.


  • snipe07
    replied
    Originally posted by BS86 View Post
    The c5a instances are meant to be an alternative to the c5 instances. Why did you not benchmark c5a vs c5?
    I would also like to see these new c5a instances compared against the existing c5 instances. The way AWS has labelled them, it makes sense that they are meant as alternatives for each other.


  • edwaleni
    replied
    Originally posted by BS86 View Post

    not really. Those monthly costs include energy consumption. I expect those ARM cores to consume a lot less energy and thus justify the "low" price.

    For the pricing of the *a instances, AWS says that they take the price of the non-*a instances and reduce it by 10% because the CPUs are cheaper and also consume less energy (see the price difference c5 vs c5a and m5 vs m5a).

    The c5 instances were introduced at a lower monthly cost than the c4 instances even though the c5 instances offered a lot more performance, and AWS said the lower cost was due to reduced energy consumption, making the switch from c4 to c5 a no-brainer. That's why I know that energy consumption factors into the monthly AWS costs in the price table.
    Per Anandtech:

    https://www.anandtech.com/show/15578...-intel-and-amd

    Total power consumption of the SoC is something that Amazon wasn’t too willing to disclose in the context of our article – the company is still holding some aspects of the design close to its chest even though we were able to test the new chipset in the cloud. Given the chip’s more conservative clock rate, Arm’s projected figure of around 105W for a 64-core 2.6GHz implementation, and Ampere’s recent disclosure of their 80-core 3GHz N1 server chip coming in at 210W, we estimate that the Graviton2 must come in around anywhere between 80W as a low estimate to around 110W for a pessimistic projection.


  • trifud
    replied
    Would be nice to see some HPC benchmarks.
