Amazon Talks Up Big Performance Gains For Their 7nm Graviton2 CPUs


  • edwaleni
    replied
    Amazon is doing what most companies do when they get market share and need to reinvest the profits.

    They start integrating vertically.

    When Ford dominated the auto market in the early 1900s, they also vertically integrated: they built their own steel foundry (River Rouge), ran their own shipping line of ore freighters, and even owned iron ore mines and some of the local railroads. They also got into electronics (Philco) and farming implements, to name just a few ventures.

    So the fact that Amazon is entering complementary new markets is no surprise.





  • coder
    replied
    Okay, it seems the Huawei PC will use a smaller version of this chip, which appears to feature a custom, A72-derived core.



  • coder
    replied
    Originally posted by dnpp123 View Post
    Does anyone know where to buy a dev board with a SoC having an ARM Neoverse core? I would be really interested in trying them out.
    Huawei just announced ARM-based PCs with 4- and 8-core CPUs. We know they've been getting early access to ARM IP and are a Neoverse customer, but I'm not sure they've announced what kind of cores are in that new "PC" device.

    Otherwise, I think you're in a tough spot. The N1 is a new core targeted at server applications, so the old trick of taking some low-cost, tablet-oriented SoC and slapping it on a hobbyist board probably won't work in this case.



  • dnpp123
    replied
    Does anyone know where to buy a dev board with a SoC having an ARM Neoverse core? I would be really interested in trying them out.



  • coder
    replied
    Originally posted by milkylainen View Post
    Almost all modern high-end microarchs are post-risc-macro-op-vliw-whatever on the inside.
    Instruction decode of a modern microarch is almost a rounding error in performance and power envelope.

    Modern performance roughly translates to spent-transistors / spent-power / fabrication process. Regardless of ISA.
    This, according to whom?

    ISAs are more than just different dialects of the same language. For instance, ARM has weaker memory-ordering guarantees than x86. Also, things like the size of the software-visible register file affect the number of register spills and how much work the register-rename logic has to do. I'm sure there are other things besides.
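
    To make the memory-ordering point concrete, here's a minimal C++ sketch of my own (a hypothetical illustration, not anything from this thread): the classic message-passing pattern. On x86's stronger TSO model, sloppier versions of this often appear to work anyway; ARM's weaker model is exactly why the explicit release/acquire pair below matters.

    // Message-passing sketch: the consumer must observe `data == 42` once it
    // sees `ready == true`. The release/acquire pair guarantees this on any
    // ISA; on ARM's weaker memory model, dropping it can break the pattern,
    // while x86's TSO ordering often masks the bug.
    #include <atomic>
    #include <cassert>
    #include <thread>

    int data = 0;
    std::atomic<bool> ready{false};

    void producer() {
        data = 42;                                    // plain store
        ready.store(true, std::memory_order_release); // publishes the store above
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire)) { } // spin until published
        assert(data == 42); // guaranteed by the release/acquire pairing
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
    }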

    As for modern uArchs, the x86 guys care a lot more about single-thread perf, so they need to design for a shorter critical path to hit higher clock targets. ARM, being traditionally more focused on power efficiency, tends to target lower clock speeds, which means shorter pipelines and more work done per cycle.

    So, there are a number of reasons why ARM can and should be more efficient, beyond the mere fact that its instruction words are cheaper to decode (which, BTW, also impacts pipeline length and therefore branch mis-prediction penalties).
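
    A rough way to see that penalty for yourself, in a toy demo I'm adding purely for illustration: the same data-dependent branch runs far faster over sorted input, where it predicts well, than over random input, and the size of the gap tracks the core's mis-predict cost. (Caveat: an optimizer may convert the branch to a conditional move and hide the effect.)

    // Times the same branchy reduction over random vs. sorted data. The
    // sorted pass is typically several times faster because the branch
    // predictor stops missing.
    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <random>
    #include <vector>

    long long sum_over_threshold(const std::vector<int>& v) {
        long long sum = 0;
        for (int x : v)
            if (x >= 128) sum += x; // data-dependent branch
        return sum;
    }

    int main() {
        std::vector<int> v(1 << 24);
        std::mt19937 rng(42);
        for (int& x : v) x = rng() % 256;

        auto time_it = [&](const char* label) {
            auto t0 = std::chrono::steady_clock::now();
            volatile long long s = sum_over_threshold(v); // volatile: keep the call
            (void)s;
            auto t1 = std::chrono::steady_clock::now();
            auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0);
            std::printf("%s: %lld ms\n", label, static_cast<long long>(ms.count()));
        };

        time_it("random (mispredicts)");
        std::sort(v.begin(), v.end());
        time_it("sorted (predictable)");
    }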

    Of course, empirical data would be the gold standard, but it's hard to do a strict apples-to-apples comparison of server SoCs, because the chips made to date are largely on inferior processes to what AMD and Intel are using. Now, if you want to go into embedded use cases, there's a wealth of data showing ARM is more power-efficient there. But since ARM has diverged its cores between the two markets, that data is less predictive of how the Graviton 2 will behave than it was for previous generations of chips.

    Rest assured, time will tell. When these Graviton 2 instances come online, I'm sure Phoronix will be among the first to publish benchmarks. And even if there's no way to get reliable power figures, I'm sure Amazon's won't be the only server chip out there using Neoverse N1 cores.
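
    Since power draw generally isn't observable from inside a VM, performance per dollar ends up being the practical proxy. A crude sketch of that kind of comparison, with a toy workload and a placeholder price of my own invention:

    // Times a fixed CPU-bound workload, then scales by a (placeholder)
    // hourly instance price to get a rough throughput-per-dollar figure
    // for comparing instance types.
    #include <chrono>
    #include <cstdint>
    #include <cstdio>

    constexpr int kIters = 200'000'000;

    uint64_t workload() {
        uint64_t x = 88172645463325252ULL, sum = 0;
        for (int i = 0; i < kIters; ++i) {
            x ^= x << 13; x ^= x >> 7; x ^= x << 17; // xorshift64 step
            sum += x;
        }
        return sum;
    }

    int main() {
        const double price_per_hour = 0.10; // placeholder: substitute the real instance price

        auto t0 = std::chrono::steady_clock::now();
        volatile uint64_t s = workload(); // volatile: keep the computation
        (void)s;
        auto t1 = std::chrono::steady_clock::now();

        const double secs = std::chrono::duration<double>(t1 - t0).count();
        const double per_sec = kIters / secs;
        std::printf("%.3g iters/sec, %.3g iters/dollar\n",
                    per_sec, per_sec * 3600.0 / price_per_hour);
    }
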
    Last edited by coder; 04 December 2019, 06:46 AM.



  • coder
    replied
    Originally posted by torsionbar28 View Post
    Amazon is playing the long game here. I don't think it's far-fetched to believe they want to compete with Intel and AMD for a piece of the datacenter hardware pie.
    Uh, why would they sell their custom hardware to their competition? That would be like Google selling TPUs. And they don't want to sell it to customers, either -- these guys would much rather rent it to you.

    Tying into what you were saying about the content business, that is also what all the content producers are moving towards: showing you their content without having to sell you a copy that you can control.

    Getting back to hardware, if their stuff is so good, then Amazon & Google will sit on it to give them an advantage. If it's not so good, then there won't be market demand, so it won't sell.
    Last edited by coder; 04 December 2019, 02:24 AM.



  • coder
    replied
    Originally posted by torsionbar28 View Post
    TB of RAM per socket has been the limiting factor for nearly all my clients.
    Wow, so you must be all up in the Optane DIMMs, then?



  • torsionbar28
    replied
    Originally posted by milkylainen View Post
    I don't think this will actually outperform contemporary x86 CPUs on the same power budget. Similar? Yes, perhaps. But at what cost?
    You're looking at it from the wrong angle. It's not about the cost; it's about creating a product that can be sold for a profit. This is the story of AWS's inception, after all: create a thing for internal use, then recognize the market demand for it. I'd bet the same story applies to these Amazon ARM chips. If Amazon can prove that they perform well in virtualization and cloud scenarios, there will be market demand for them. Remember, Amazon is now creating its own original TV and movie content for its video streaming service. Amazon is playing the long game here. I don't think it's far-fetched to believe they want to compete with Intel and AMD for a piece of the datacenter hardware pie.



  • torsionbar28
    replied
    Originally posted by CommunityMember View Post
    There are certain workloads where raw CPU is not a/the bottleneck. A slower, cheaper instance can have pricing benefits for such use cases, and ARM may be an appropriate fit where other solutions (such as serverless via Lambda) will not work (or would take too long to transition to).
    Not just certain workloads, but the vast majority of internal business server workloads. Most business hypervisor environments run out of memory long before they max out the CPUs. TB of RAM per socket has been the limiting factor for nearly all my clients. Quite frankly, the CPU doesn't much matter anymore these days. I have customers still running Dell R815 servers (AMD Opteron 6200 series) that, even with dozens of VMs, aren't exceeding 15% CPU utilization.
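
    To put rough numbers on the RAM-versus-CPU point, here's a back-of-the-envelope sketch with purely hypothetical host and VM sizes:

    // Toy consolidation math: on these assumed numbers, RAM caps the VM
    // count at 64 while the CPUs could nominally host hundreds of such VMs.
    #include <cstdio>

    int main() {
        const double host_ram_gb  = 1024;       // assumed: 1 TB across two sockets
        const int    host_threads = 128;        // assumed: 2 x 32-core SMT CPUs
        const double vm_ram_gb    = 16;         // assumed per-VM memory reservation
        const double vm_cpu_load  = 0.15 * 2.0; // ~15% average load on 2 vCPUs

        const int by_ram = static_cast<int>(host_ram_gb / vm_ram_gb);
        const int by_cpu = static_cast<int>(host_threads / vm_cpu_load);

        std::printf("VMs limited by RAM: %d\n", by_ram); // 64
        std::printf("VMs limited by CPU: %d\n", by_cpu); // ~426
        std::printf("binding constraint: %s\n", by_ram < by_cpu ? "RAM" : "CPU");
    }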

    Things like scientific workloads, rendering, or audio/video processing all work far better on bare metal, or, if virtualized, require very fast CPUs. But most businesses don't run those workloads. They run things like web applications, databases, email, and file servers, which run great on even low-end hardware.

    Originally posted by CommunityMember View Post
    And as to why, this is always about money. AWS believes they see a way to make money on this.
    ^ yup. This is *the* reason.
    Last edited by torsionbar28; 04 December 2019, 02:01 AM.



  • milkylainen
    replied
    Originally posted by coder View Post
    However, the intrinsic perf/W advantages of ARM vs. x86 have been long- and well-established.
    I do agree on most things, but not necessarily this one.
    Almost all modern high-end microarchs are post-risc-macro-op-vliw-whatever on the inside.
    Instruction decode of a modern microarch is almost a rounding error in performance and power envelope.

    Modern performance roughly translates to spent-transistors / spent-power / fabrication process. Regardless of ISA.
    Spending in each category translates to characteristics that are comparable between similarly sized CPUs with the same power budget, built on the same fabrication process.
    The microarch teams building these CPUs make deliberate tradeoffs for a specific target. So while differences do exist, there is no magic sauce to it.

    So while, yes, an ARM CPU can be more power-efficient than an x86 CPU for certain tasks, it would take a beating in other categories: single-threaded performance, housing density, etc.

    Traditionally, ARM has held the low end and x86 the high end.
    Now x86 has been working on power efficiency and ARM on beefier cores.

    I would, without hesitation, say that if you want a lot of transistors doing efficient work across as variable a workload as possible, for as little money as possible, you go x86.
    If you want battery-powered operation or some other specialty, well, that is another question.

    I don't think this will actually outperform contemporary x86 CPUs on the same power budget. Similar? Yes, perhaps. But at what cost?
    Edit: Comparing an unreleased CPU to microarchitectures released almost three years ago does not say much about how it will perform against contemporary CPUs when it ships.
    Last edited by milkylainen; 04 December 2019, 01:59 AM.
