Linux Prepares For Next-Gen AMD CPUs With Up To 12 CCDs


  • #81
    Originally posted by coder View Post
    Huh? You know efficiency gets worse as clock speed increases, right?
    Uh, yes? I think you must have misunderstood me, because that was my whole point.

Server chips are usually clocked a bit lower than their desktop equivalents, because power efficiency matters much more there. Desktop parts can afford to push further up the power/performance curve to get the fastest possible performance.

    My point was it's not clear whether that will apply to Intel's e-cores or not. They're already clocked low on the desktop chip in order to be efficient. Will they be even lower on a future server chip? I have no idea.
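    A quick way to see the curve being described: dynamic power scales roughly with C·V²·f, and voltage has to rise with frequency, so perf/W falls off fast at the top of the range. A toy Python sketch, with made-up illustrative constants rather than measured silicon data:

    ```python
    # Toy model: dynamic power ~ C * V^2 * f, where V must rise with f
    # near the top of the range. Constants are illustrative only.
    def package_power(freq_ghz, v_at_1ghz=0.7, v_per_ghz=0.1):
        v = v_at_1ghz + v_per_ghz * (freq_ghz - 1.0)  # crude voltage/frequency curve
        return v * v * freq_ghz                       # arbitrary power units

    for f in (2.0, 3.0, 4.0, 5.0):
        p = package_power(f)
        # performance assumed linear in frequency; perf/W drops as clocks rise
        print(f"{f:.1f} GHz: power {p:.2f}, perf/W {f / p:.2f}")
    ```

    With these numbers, perf/W nearly halves going from 2 GHz to 5 GHz even though raw performance only rises 2.5x, which is exactly why server parts sit lower on the curve.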
    Last edited by smitty3268; 27 November 2021, 07:04 PM.



    • #82
      Originally posted by smitty3268 View Post
      My point was it's not clear whether that will apply to Intel's e-cores or not. They're already clocked low on the desktop chip in order to be efficient. Will they be even lower on a future server chip? I have no idea.
      Although this is only one benchmark, it gives us a number to work with:
      [benchmark chart: package power vs. number of active cores; green bars = E-cores only]
      The green bars are E-cores only. With 8 E-cores, we get package power of 48 W. If you divide it evenly, that's 6 W per core. However, we could also subtract off some overhead to get a better idea about how actual core power would scale. Just taking the 1-core and 8-core data points, I get a per-core power of about 4.7 W + 10.3 W of overhead.

      If we scale that up to a 280 W package, which seems to be at the upper end of what server CPUs draw these days, it only nets us 57 cores. Obviously, that's not the sort of core count you'd target with a pure E-core server CPU. So, clocks would have to be scaled back by a nontrivial amount, at least during such a workload.
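      For anyone who wants to reproduce the arithmetic, a minimal Python sketch of that overhead-plus-per-core fit (the ~15 W 1-core package power is implied by the 4.7 W + 10.3 W split above):

      ```python
      # Linear model: package_W = overhead + per_core * n_cores,
      # fit through the 1-core and 8-core points as described above.
      n1, w1 = 1, 15.0   # implied 1-core package power (4.7 + 10.3)
      n8, w8 = 8, 48.0   # 8 E-cores -> 48 W package, from the chart

      per_core = (w8 - w1) / (n8 - n1)   # ~4.7 W per E-core
      overhead = w1 - per_core * n1      # ~10.3 W of package overhead

      budget = 280.0                     # upper-end server package power
      cores = int((budget - overhead) // per_core)
      print(f"per core: {per_core:.1f} W, overhead: {overhead:.1f} W, "
            f"cores in {budget:.0f} W: {cores}")   # -> 57 cores
      ```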



      • #83
        Originally posted by TemplarGR View Post
        Not that exciting. So apparently AMD will target more performance-heavy cores. But this is inefficient for many reasons. Use cases that can scale to many cores are typically better off with many efficient cores than with fewer, faster ones, and applications that don't scale, and instead benefit from a few very powerful cores, don't need many performance cores anyway. Adding more performance cores is not as effective as Intel's strategy with Alder Lake and especially its successors. It is better to keep the number of performance cores limited and just add many efficient cores. Intel gets four efficiency cores for every performance core they replace on the die, and it is not like the E-cores are that weak by themselves. This is an excellent approach, and once schedulers mature I believe it will dominate the market. I think we will also see Hyperthreading removed from Intel P-cores in the future, as it makes no sense when you have so many E-cores. This will simplify the P-cores and allow even more performance out of them.

        So what does AMD expect from this move to 12 CCDs? To move their product line to more cores per price point? To sell 12-core CPUs to the (mainstream) desktop? This won't make much difference, honestly. They are going to keep sacrificing per-core performance for a few more cores. It is not a bad improvement, but Intel is going to eat their lunch. I think we are witnessing a repeat of the AMD64 days. AMD dominated the market for a few years on the strength of AMD64, but Intel came back with Core/Core 2 and AMD had nothing but dual-core AMD64s to answer with. I think we are at the stage of the original Core Duo (= Alder Lake). Alder Lake's successor is going to be the Core 2 moment (and a reminder that that successor is what Zen 4 will face on the market, since Zen 4 won't come out any time soon).
        If Intel CPUs didn't have shit power efficiency as of late, you would have a point. For desktop use (where power efficiency doesn't really matter) this stance can be valid, but if we are talking about cloud/datacenter, assuming close to ~100% utilization, AMD is about twice as power efficient as Intel.

        Furthermore, that claim about scheduling is a big IF, because testing so far shows the Intel scheduler is quite shitty ATM (at least on Windows 11), as demonstrated by Steve from Gamers Nexus when he benchmarked Windows 11 vs. 10. Even from a technical programming perspective, it would seem you would get the best results by giving programmers explicit control over which cores execute which code, which isn't how the scheduler is currently designed. I frankly cannot see how Intel's scheduler can "automagically" solve this problem to a degree that makes a meaningful difference, because one thing is for sure: Alder Lake hasn't shown it.
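        To illustrate the kind of explicit placement being suggested, here is a minimal Linux-only Python sketch using os.sched_setaffinity; the core-ID sets are hypothetical and a real program would first have to discover which logical CPUs are P-cores vs. E-cores (e.g. from /sys/devices/cpu_core/cpus on hybrid Intel parts):

        ```python
        # Sketch of explicit core placement on Linux: pin the calling process
        # (or a worker) to a chosen set of logical CPUs. The ID sets below are
        # assumptions for illustration, not a real topology.
        import os

        P_CORES = {0, 1, 2, 3}     # assumed P-core logical CPU IDs
        E_CORES = {8, 9, 10, 11}   # assumed E-core logical CPU IDs

        def pin_to(cpus):
            os.sched_setaffinity(0, cpus)  # pid 0 = the calling process

        pin_to(P_CORES)   # run latency-critical code on P-cores
        # ... hot path ...
        pin_to(E_CORES)   # push background/batch work onto E-cores
        ```

        This is the level of control the OS scheduler currently has to guess at on its own.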

