Linux Prepares For Next-Gen AMD CPUs With Up To 12 CCDs


  • coder
    replied
    Originally posted by TemplarGR View Post
    As for Intel not having big.LITTLE in the server space, it is only a matter of time, pal.
    It occurred to me that the obvious first step will be customers pairing a "big-cores" CPU with a "small-cores" CPU, in a dual-CPU configuration. If that works well and becomes popular, then we indeed might start to see heterogeneous cores on the same chip. And it looks like it'll be AMD who's first to market with CPUs capable of such a configuration.

    This mix-n-match is not a trick available to consumer platforms, which only support one socket.



  • coder
    replied
    Originally posted by TemplarGR View Post
    Sure, lower clocked and slightly lower IPC, but you get 4 times the cores.
    So, I ran across this interesting performance comparison between Alder Lake's P and E cores. If you just look at the 8E-core (DDR5) and 8P-core (DDR5) cases, the P-cores deliver 2.10x/1.94x speedup over the E-cores on int/fp workloads, respectively:


    Source: https://www.anandtech.com/show/17047...d-complexity/9

    And the majority of that difference is coming from IPC. So, there's no "slightly" about it.
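    To put rough numbers on that, here's a back-of-envelope Python sketch. The total speedup factors into a clock term and an IPC term, so dividing the measured speedup by the clock ratio isolates the IPC gap. The clock figures below are my assumptions, not numbers from the article:

    # speedup = (f_P / f_E) * (IPC_P / IPC_E), so IPC ratio = speedup / clock ratio.
    # Clock figures are assumed all-core values, not measured in the linked test.
    speedups = {"int": 2.10, "fp": 1.94}  # P-core vs. E-core, from the comparison above
    f_p, f_e = 4.9, 3.7                   # GHz (assumed)
    clock_ratio = f_p / f_e               # ~1.32x
    for workload, s in speedups.items():
        ipc_ratio = s / clock_ratio
        print(f"{workload}: {s:.2f}x total -> ~{ipc_ratio:.2f}x from IPC alone")
    # int: 2.10x total -> ~1.59x from IPC alone
    # fp: 1.94x total -> ~1.46x from IPC alone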

    Still, if the workload scales very well, I would take 4x the E-cores. But you've also got to be tolerant of higher latency.

    Another point to consider is that 4 E-cores are likely going to use more power, on that benchmark, than a single P-core. That means a server CPU with only E-cores will need to clock them a bit lower, which will tip the performance picture a little further in the P-cores' direction. And scaling is never exactly linear, which again takes at least a small bite out of the hypothetical E-chip's advantage. The end result is a much narrower margin of victory for the E-only CPU than the hypothetical ~2x per mm^2.
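    To see how those effects stack up, here's a toy calculation with made-up derating factors (nothing in it is measured):

    # All numbers hypothetical, purely to illustrate how the margin erodes.
    area_ratio = 4.0        # 4 E-cores in the area of 1 P-core (Gracemont vs. Golden Cove)
    perf_ratio = 2.0        # ~2x per-core P-core speedup, from the comparison above
    raw_edge = area_ratio / perf_ratio    # the hypothetical ~2x perf per mm^2
    clock_derate = 0.90     # assume E-only clocks ~10% lower to fit the power budget
    scaling_derate = 0.95   # assume ~95% parallel scaling efficiency, never perfect
    effective_edge = raw_edge * clock_derate * scaling_derate
    print(f"{raw_edge:.2f}x raw -> {effective_edge:.2f}x effective")
    # 2.00x raw -> 1.71x effective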
    Last edited by coder; 25 November 2021, 04:57 AM.



  • coder
    replied
    Originally posted by smitty3268 View Post
    Nothing really, just looking at the power use on current CPUs. 225W (or 280W) 64-core Zen 3 servers divide out to a pretty low number per core. Way less than an Alder Lake P-core, obviously.
    They run at different clock speeds and the interconnect overheads are different.

    Originally posted by smitty3268 View Post
    Off the top of my head, I think the 8 e-cores in Alder Lake were taking up to 10W each in Anandtech's testing? I'd have to look.
    Edit: Looks like it's more like 6W each. Efficiency-wise, that'd work out to around an 8W Zen 3 core? About the same as a 32-core Zen 3 machine at 280W? Very rough spitballing there.
    The main way to measure efficiency is in perf/W. For that, you need a perf metric and the power figure for it.
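    The division itself is trivial; getting comparable inputs is the hard part. A minimal sketch, with both cores' figures invented purely to show the arithmetic:

    # perf/W = workload score / average package power while producing that score.
    # Scores and watts below are invented placeholders, not measurements.
    def perf_per_watt(score: float, watts: float) -> float:
        return score / watts

    core_a = perf_per_watt(score=6.4, watts=6.0)  # hypothetical small core at ~6W
    core_b = perf_per_watt(score=9.5, watts=8.0)  # hypothetical big core at ~8W
    print(f"A: {core_a:.2f} perf/W vs. B: {core_b:.2f} perf/W")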

    I did a little poking around, but didn't find one. It would be worth knowing how they compare. My guess is that Gracemont is more efficient than Zen 3 on SPEC int workloads, but possibly less so on vector/fp workloads.

    Originally posted by smitty3268 View Post
    They're already clocked reasonably low on Alder Lake, so it's possible there's not a ton more efficiency to pull out of them.
    Huh? You know efficiency gets worse as clock speed increases, right?


    Source: https://www.anandtech.com/show/16881...rchitectures/4
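    The reason is that dynamic power scales with C*V^2*f, and in the upper frequency range voltage has to rise roughly with frequency, so power grows close to cubically while performance grows only linearly. A simplified model (real V/f curves are table-driven, not a clean power law):

    # Crude f^3 power model: perf/W falls roughly as 1/f^2.
    base_f, base_power = 2.0, 5.0  # GHz, W: an assumed efficient operating point
    for f in [2.0, 3.0, 4.0, 5.0]:
        power = base_power * (f / base_f) ** 3  # power grows ~f^3
        perf = f / base_f                       # performance grows ~f
        rel_eff = (perf / power) * base_power   # perf/W relative to the baseline
        print(f"{f:.1f} GHz: ~{power:5.1f} W, efficiency {rel_eff:.2f}x of baseline")
    # 2.0 GHz: ~  5.0 W, efficiency 1.00x of baseline
    # 3.0 GHz: ~ 16.9 W, efficiency 0.44x of baseline
    # 4.0 GHz: ~ 40.0 W, efficiency 0.25x of baseline
    # 5.0 GHz: ~ 78.1 W, efficiency 0.16x of baseline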

    Originally posted by smitty3268 View Post
    I'd argue pretty heavily that was a matter of Piledriver being really bad, rather than Zen 1 being all that great.
    I think you've lost the plot. The original claim (not yours) was that AMD's resurgence was due to Intel's process failings. However, the relative performance of Piledriver vs. Zen 1 shows what AMD was able to achieve even on an almost certainly worse node than Intel's.

    Originally posted by smitty3268 View Post
    It was Zen 2 when AMD really became competitive with Intel, in my opinion, and that did happen when they had a manufacturing advantage.
    Yeah, TSMC's 7 nm node really helped, but so did a generation of micro-architecture refinement.
    Last edited by coder; 02 July 2022, 06:04 PM.



  • smitty3268
    replied
    Originally posted by coder View Post
    Source?
    Nothing really, just looking at the power use on current CPUs. 225W (or 280W) 64-core Zen 3 servers divide out to a pretty low number per core. Way less than an Alder Lake P-core, obviously.

    Off the top of my head, I think the 8 e-cores in Alder Lake were taking up to 10W each in Anandtech's testing? I'd have to look.
    Edit: Looks like it's more like 6W each. Efficiency-wise, that'd work out to around an 8W Zen 3 core? About the same as a 32-core Zen 3 machine at 280W? Very rough spitballing there.

    But obviously that's probably a worst-case scenario, and they'd be more efficient on a server chip. The question is exactly how much more efficient, and I don't think we know the answer to that. They're already clocked reasonably low on Alder Lake, so it's possible there's not a ton more efficiency to pull out of them.

    Going from Piledriver to Zen 1, AMD achieved something like a 50% IPC increase. That's almost unheard of. And there was no process advantage at that time, either. It was 14 nm GloFo vs. 14 nm Intel. And GloFo's was almost certainly worse.
    I'd argue pretty heavily that was a matter of Piledriver being really bad, rather than Zen 1 being all that great. Compared to Intel, Zen 1 was still behind quite a bit in single threaded tasks, and it was only the addition of more cores and low prices that made them competitive. That was something Intel largely chose to do for business reasons, rather than a technical limitation.

    It was Zen 2 when AMD really became competitive with Intel, in my opinion, and that did happen when they had a manufacturing advantage.
    Last edited by smitty3268; 25 November 2021, 03:55 AM.



  • smitty3268
    replied
    Originally posted by TemplarGR View Post
    Remember, for example, when the first Ryzen was introduced. It was bad in terms of IPC, it lacked proper fast AVX2, and it wasn't that power efficient, but AMD's approach was to offer "moar cores for the money".
    It's just good business to position yourself as the "good value" brand when you don't have the performance crown. I'm not sure why that would surprise anyone. As for trolls on the internet being stupid, well, there's plenty of that all over the place.

    Hell, even TODAY, for most people in the desktop/mobile space, 4 cores are enough.
    I would argue 6 cores is now the minimum acceptable count on the desktop, and I do give AMD a lot of credit for increasing that number. You're right that 4 was acceptable until very recently, though.

    But no one called them out on being greedy like they screamed at the top of their lungs about Intel a couple of years earlier....
    I think you have a very selective memory.
    I remember a huge outcry when Zen 3 launched, and pretty much everyone was pissed at the price. The whole Zen 3 launch article on Phoronix got bogged down in a troll war by birdie and a few others claiming that AMD was too greedy and they'd never pay $300 for a CPU. There was plenty of that going on elsewhere too. A lot more than when Intel raises prices, because everyone expects that.

    And now that Intel's fabrication issues are becoming a thing of the past, suddenly AMD "remembered" the "moar cores for the money" approach. It seems AMD ALWAYS attempts to offer moar cores for the money when they lose at everything else.
    I mean, attempting to provide any value to your customers that you can is probably a good thing, right? I'm not sure how it's bad.

    That explains the 12-core CCDs.
    Going to more and more cores on server chips is inevitable, and Intel is doing the same thing. You yourself pointed out that with their new e-cores they can start putting a bunch more cores on the same chip. That's all AMD is doing too.
    Last edited by smitty3268; 25 November 2021, 03:46 AM.



  • coder
    replied
    Originally posted by TemplarGR View Post
    Of course, they are a for-profit company. That was my point. Then why are there so many blatant AMD fanbois here on Phoronix who refuse to see reality just to suck up to AMD? No one owes anything to AMD.
    Linux has long been a sort of underdog, AMD is the underdog, so I dunno... kinda seems ripe for AMD support. FWIW, I have yet to buy a Zen-based machine, but there's probably one in my future.

    Originally posted by TemplarGR View Post
    Intel, simply put, has the best architecture, period. Their big.LITTLE approach is going to dominate all kinds of workloads in the future, and AMD is going to, again, have to copycat Intel in order to survive.
    Ahem. ARM was first to do big.LITTLE (in a big way, at least). They even coined the term. Sure, there aren't proper ARM-based desktop CPUs, so you can still say Intel brought it to the desktop.

    Originally posted by TemplarGR View Post
    As for Intel not having big.LITTLE in the server space, it is only a matter of time, pal.
    Could be. They haven't announced anything. That's all I know. I think it's more likely we'll see big cloud CPUs with only E-cores, long before we see a hybrid server CPU. Not even ARM is doing such a thing, and they're best positioned to. They could even have one on the market today, if they thought it made sense.

    Originally posted by TemplarGR View Post
    Also, I am pretty sure at some point Intel will introduce E-core-only CPUs to replace the Atoms.
    ???

    Atom is Intel's brand name for server CPUs that use only E-cores. To date, their latest versions have Tremont cores, which are still a generation behind the Gracemont cores in Alder Lake. Obviously, that refresh is in the pipeline.



  • coder
    replied
    Originally posted by smitty3268 View Post
    Intel's e-cores are really efficient compared to their p-cores. It's not nearly as impressive versus Zen 3 cores, though.
    Source?

    Originally posted by smitty3268 View Post
    AMD has done some genuinely interesting work on their architecture.
    Going from Piledriver to Zen 1, AMD achieved something like a 50% IPC increase. That's almost unheard of. And there was no process advantage at that time, either. It was 14 nm GloFo vs. 14 nm Intel. And GloFo's was almost certainly worse.

    Originally posted by smitty3268 View Post
    The 128 core Bergamo Ryzen system is also meant to be e-cores,
    Bergamo will use a version of Zen 4 cores with smaller caches. I don't know if they're going to change anything else, but it'll be 16 cores/chiplet instead of 12 for normal Zen 4. So, the cores are probably not a lot smaller.



  • coder
    replied
    Originally posted by TemplarGR View Post
    1) E cores don't have to have the best absolute performance in any kind of load. 4 e-cores equal 1 p-core in die area. That means that instead of a 128-core Ryzen you can get, in a theoretical scenario, 512 Intel e-cores.
    That ratio only holds for Gracemont vs. Golden Cove. Zen 3 cores are significantly smaller than Golden Cove.
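    Rough area arithmetic makes the point; the per-core figures here are my own placeholder estimates, not official numbers:

    # Per-core die areas are assumptions for illustration only.
    golden_cove = 7.0            # mm^2 per Golden Cove P-core (assumed, incl. L2)
    gracemont = golden_cove / 4  # the 4-E-cores-per-P-core area claim
    zen3 = 4.1                   # mm^2 per Zen 3 core (assumed)
    budget = 128 * zen3          # silicon spent on a 128-core Zen 3 part
    print(f"Same area buys ~{budget / gracemont:.0f} Gracemont"
          f" or ~{budget / golden_cove:.0f} Golden Cove cores")
    # Same area buys ~300 Gracemont or ~75 Golden Cove cores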

    Originally posted by TemplarGR View Post
    Sure, lower clocked and slightly lower IPC, but you get 4 times the cores. You need a heavily multi-core system, remember? Which is going to be more efficient?
    How IPC compares depends on the workload. For floating-point vector work, there's nothing slight about the difference between them and Golden Cove.

    But sure, perf/mm^2 or perf/W are going to be much better with E-cores, in a highly-scalable, mostly-scalar/integer workload. Like I said, you could view these as comparable to ARM's N-series server cores.



  • TemplarGR
    replied
    Originally posted by smitty3268 View Post
    I don't think AMD has ever said anything like that, sounds like you're more responding to fanboys than AMD itself. They really have no choice but to cash in whenever possible. Keeping prices low does nothing for them - they can't expand their marketshare any more, because they're already production limited, and being nice to people for the sake of being nice isn't very good business. Better for them to get money to fuel R&D while they can.
    AMD never officially said anything like that. But that has always been the unofficial stance of their fanbois/shills/trolls, or whatever you want to call them. Remember, for example, when the first Ryzen was introduced. It was bad in terms of IPC, it lacked proper fast AVX2, and it wasn't that power efficient, but AMD's approach was to offer "moar cores for the money". This led to the narrative that AMD was "punishing Intel" and was the "people's champion" for "breaking Intel's 4-core barrier". How easily people forget, I wonder.

    And it is not like Intel was actually greedy or "bad" or wanted to keep the desktop at 4 cores forever. Intel, simply put, didn't see any reason to stack the desktop with more cores at that point in time, when for the vast majority of desktop/gaming workloads 4 FAST cores were more than enough. Hell, even TODAY, for most people in the desktop/mobile space, 4 cores are enough.

    But that was the thrust of AMD's messaging, whether officially declared as such or not. Of course, when Intel stumbled into its 10 nm delays and Zen 2, and especially Zen 3, managed to close the IPC and efficiency gap with Intel, they changed their tune. Suddenly they didn't offer "moar cores for the money" anymore, and they kept the desktop at 8 cores instead of providing more. But no one called them out on being greedy like they screamed at the top of their lungs about Intel a couple of years earlier....

    And now that Intel's fabrication issues are becoming a thing of the past, suddenly AMD "remembered" the "moar cores for the money" approach. It seems AMD ALWAYS attempts to offer moar cores for the money when they lose at everything else. That explains the 12-core CCDs.



  • TemplarGR
    replied
    Originally posted by coder View Post
    U mad, bro?

    They gotta make money while they can. They've always offered decent value for money, but mo' cores gonna cost mo' money. It's not their fault Intel couldn't scale up to as many cores.

    P.S. when did they ever do "pretending to be the pro-consumer company"? Some people like them because underdog, but I think you're projecting.
    Of course, they are a for-profit company. That was my point. Then why are there so many blatant AMD fanbois here on Phoronix who refuse to see reality just to suck up to AMD? No one owes anything to AMD.

    Intel, simply put, has the best architecture, period. Their big.LITTLE approach is going to dominate all kinds of workloads in the future, and AMD is going to, again, have to copycat Intel in order to survive.

    As for Intel not having big.LITTLE in the server space, it is only a matter of time, pal. Of course it got introduced in the desktop/mobile space first; it is brand new, and even on Linux they can't fix the schedulers just yet. They wouldn't introduce something that needs so much testing and refinement on the server side just yet. But don't delude yourself that Intel is not targeting the server space with this move. Also, I am pretty sure at some point Intel will introduce E-core-only CPUs to replace the Atoms.

