Linux 5.16's New Cluster Scheduling Is Causing Regression, Further Hurting Alder Lake


  • middy
    replied
    Originally posted by arQon View Post

    Pretty much, yeah. There isn't really anything fundamentally "wrong" with the concept, but it's far from unreasonable to say that you have no interest at all in running gimped cores to save a minuscule amount of power, when all that silicon could have gone to larger caches etc instead.

    Even the now-ancient SpeedStep/EIST/etc already provides analogous behavior: idle cores are cut down to half speed or less, with a corresponding reduction in power draw, and completely idle cores can even power-gate - but they're ALL still *capable* of actually performing well, unlike the E-cores. On a *desktop* CPU I'd much rather see improvements in the gating etc than have half the cost of my CPU going to pay for garbage cores that are literally useless for half the things I want a PC to do.
    from seeing all the benchmarks for alder lake, i concluded intel is only pushing the e-core thing because they can't scale their full-size cores up to high core counts with sane power consumption. e-cores allow them to do that. i really don't think power consumption was their focus for it; i really think score scaling was the driving factor. e-cores let them slap bigger numbers on the box for core counts.

    i just hope this doesn't make intel feel content with not working hard to make their normal cores power efficient and use "e-cores" as an excuse to slack off.


  • Mike Frett
    replied
    Looks like one big cluster f*ck.


  • avem
    replied
    Originally posted by mSparks View Post

    reasonably sure p cores are waaay more efficient than e cores when run at the same frequency.
    it's a gimmick calling them 12 core cpus when they are basically a 6 core cpu with a raspberry pi or two strapped on.
    Multiple reviews show you're completely wrong but nowadays it's just fine to say whatever BS you want as long as it's called an opinion.


  • mSparks
    replied
    Originally posted by avem View Post

    They are not gimmicky for Christ's sake. Their power efficiency is a lot higher than that of the P-cores, which means they are an extremely good fit for heavy MT tasks. In fact it's been rumored that Intel wants to have more of them in Raptor Lake, and AMD has been rumored to be adding them in Zen 5.

    Of course if you don't care about MT performance you may not want them at all, and Intel has got you covered: the 12500, 12400 and other ADL CPUs won't have E-cores at all.
    reasonably sure p cores are waaay more efficient than e cores when run at the same frequency.
    it's a gimmick calling them 12 core cpus when they are basically a 6 core cpu with a raspberry pi or two strapped on.
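
    For anyone who would rather measure than argue: a rough perf-per-watt check is possible with RAPL. This is only a sketch, assuming a 12900K-style layout where CPUs 0-15 are the P-core threads and 16-23 the E-cores (confirm the actual numbering with lscpu -e first), and assuming stress-ng plus a perf build with RAPL events are available:

    Code:
    # optionally pin both clusters to the same frequency first (cpufreq sysfs / cpupower)
    # for an iso-frequency comparison, then run a fixed amount of work on each cluster.

    # fixed work pinned to the P-core threads (0-15 on the assumed layout)
    sudo perf stat -e power/energy-pkg/ -- taskset -c 0-15 stress-ng --cpu 8 --cpu-ops 200000

    # the same amount of work pinned to the E-cores (16-23 on the assumed layout)
    sudo perf stat -e power/energy-pkg/ -- taskset -c 16-23 stress-ng --cpu 8 --cpu-ops 200000

    # fewer Joules reported for the same bogo-op count = better efficiency for that cluster
    # (package energy includes the idle cores too, so treat the result as a rough comparison)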


  • grigi
    replied
    Do note that the die space of one Golden Cove core is estimated to be about equivalent to two Milan (full Zen 3) cores.
    And a Renoir core (cache-reduced Zen 2) is about half the size of a full Zen 3 core.

    Meaning one E-core is about the size of a Renoir core, which makes it not that small.


  • jaxa
    replied
    Originally posted by geearf View Post
    Why is that?
    Is it because of the cost on context switch and alike?
    If you run software that can use many cores, and the smaller cores offer more multi-threaded performance per unit of die area, then the answer to increasing performance is to keep adding small cores (rough numbers are sketched below). The big cores are there to handle tasks that can't be parallelized. 6-8 big cores are still plenty for most users.

    Alder Lake is 8+8 for the Core i9.
    Raptor Lake will have 8+16 for the Core i9.
    A future Lake (possibly Meteor or Arrow) is rumored to have 8+32 for the Core i9.

    The trend is clear. Maybe the big core count will also go up, so you could see 12+64 or 16+256 after a few generations.
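
    To put illustrative numbers on that per-area argument (the ratios here are assumptions for the sake of the arithmetic, not measurements: an E-core taken as ~1/4 the area of a P-core with ~1/2 its MT throughput):

    Code:
    awk 'BEGIN {
      p_area = 1.0;  p_perf = 1.0   # P-core baseline
      e_area = 0.25; e_perf = 0.5   # assumed E-core: quarter the area, half the throughput
      printf "P-core throughput per unit area: %.1f\n", p_perf / p_area
      printf "E-core throughput per unit area: %.1f\n", e_perf / e_area
      # the area budget of 2 extra P-cores instead buys 8 E-cores:
      printf "same area spent on 2 P-cores: +%.1f, on 8 E-cores: +%.1f\n", 2 * p_perf, 8 * e_perf
    }'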


  • jaxa
    replied
    Originally posted by arQon View Post

    Pretty much, yeah. There isn't really anything fundamentally "wrong" with the concept, but it's far from unreasonable to say that you have no interest at all in running gimped cores to save a minuscule amount of power, when all that silicon could have gone to larger caches etc instead.

    Even the now-ancient SpeedStep/EIST/etc already provides analogous behavior: idle cores are cut down to half speed or less, with a corresponding reduction in power draw, and completely idle cores can even power-gate - but they're ALL still *capable* of actually performing well, unlike the E-cores. On a *desktop* CPU I'd much rather see improvements in the gating etc than have half the cost of my CPU going to pay for garbage cores that are literally useless for half the things I want a PC to do.
    It's not just about power reduction. The die space of 1 P-core is about that of 4 E-cores, so you can either have an additional 2 P-cores, or 8 E-cores. Intel believes that you will get more multi-threaded performance from the 8 E-cores. If everything is working properly, an 8+8 i9-12900K would beat a hypothetical 10 P-core (Golden Cove) chip. This could actually be tested by disabling cores (a sketch follows below): disable two P-cores and test a 6+8 configuration, then disable eight E-cores and test 8+0. If 6+8 tends to beat 8+0, then it shows that the Alder Lake approach makes sense.
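
    A minimal sketch of that test using CPU hotplug, assuming the usual 12900K thread numbering (8 P-cores with HT as CPUs 0-15, 8 E-cores as CPUs 16-23; verify with lscpu -e before trusting the numbers):

    Code:
    # 6+8: take two P-cores (their four hardware threads) offline
    for c in 12 13 14 15; do echo 0 | sudo tee /sys/devices/system/cpu/cpu$c/online; done
    # ... run the benchmark ...

    # bring them back, then 8+0: offline all eight E-cores instead
    for c in 12 13 14 15; do echo 1 | sudo tee /sys/devices/system/cpu/cpu$c/online; done
    for c in $(seq 16 23); do echo 0 | sudo tee /sys/devices/system/cpu/cpu$c/online; done
    # ... run the benchmark again ...

    # restore the E-cores when done
    for c in $(seq 16 23); do echo 1 | sudo tee /sys/devices/system/cpu/cpu$c/online; done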

    If you don't want to pay for E-cores, Intel has got you covered (for this generation at least) with the smaller die that will be used in several CPUs including a 6-core i5-12400, quad-core i3-12300 and i3-12100, and apparently a dual-core Pentium G7400. If your PC is only doing lightly threaded tasks, you could be just fine with even the Pentium. Those CPUs will probably be announced in January at CES 2022.


  • geearf
    replied
    Originally posted by jaxa View Post
    the best way to improve multi-threaded performance is to add more small cores. Meaning dozens, hundreds, or even thousands of them (server).
    Why is that?
    Is it because of the cost on context switch and alike?


  • yump
    replied
    Can somebody (perhaps Michael) with access to an Alder Lake box and a fresh kernel run:

    Code:
    grep . /sys/devices/system/cpu/cpu*/cpu_capacity
    so we can see if Intel actually got around to hooking their shiny new CPU up to the proper infrastructure?
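
    Since the regression is specifically about the new cluster scheduling, it might also be worth dumping the cluster topology files that landed alongside it (assuming the kernel exposes them for Alder Lake), to see which CPUs the scheduler is actually grouping into clusters:

    Code:
    grep . /sys/devices/system/cpu/cpu*/topology/cluster_id \
           /sys/devices/system/cpu/cpu*/topology/cluster_cpus_list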


  • arQon
    replied
    Originally posted by lethalwp View Post
    i wonder, since it's a desktop, shouldn't we just ignore E-cores and go full perf?
    Pretty much, yeah. There isn't really anything fundamentally "wrong" with the concept, but it's far from unreasonable to say that you have no interest at all in running gimped cores to save a minuscule amount of power, when all that silicon could have gone to larger caches etc instead.

    Even the now-ancient SpeedStep/EIST/etc already provides analogous behavior: idle cores are cut down to half speed or less, with a corresponding reduction in power draw, and completely idle cores can even power-gate - but they're ALL still *capable* of actually performing well, unlike the E-cores. On a *desktop* CPU I'd much rather see improvements in the gating etc than have half the cost of my CPU going to pay for garbage cores that are literally useless for half the things I want a PC to do.
