AMD Announces Ryzen 7000 Series "Zen 4" Desktop CPUs - Linux Benchmarks To Come
Last edited by artivision; 30 August 2022, 08:37 PM.
Originally posted by coder View Post
Don't forget that Apple's cores were designed, from the ground up, with efficiency as the number 1 priority.
Also, Apple can afford to trade die size for energy efficiency more so than AMD can.
Originally posted by rabcor View Post
Too bad we're in the middle of a transition from x86_64 to RISC/ARM architectures, and my mid-range gaming laptop from 3 years ago is still completely solid for the latest games despite only having an 8th-gen Intel CPU in it and an RTX 2060. There's very little reason for anyone who bought a halfway decent PC in the last 6 years to upgrade unless it broke (maybe if you do CAD work, a lot of compiling, or other such intense work you could be persuaded, but in general not); so I'm kinda inclined to wait for RISC/ARM laptops to become properly mainstream. Apple set a precedent, and I'm honestly quite surprised that this transition isn't already in full swing yet (suspicious, even).
Last edited by Dukenukemx; 30 August 2022, 08:50 PM.
Originally posted by artivision View Post
Are you accepting new knowledge, or are you retired from learning? Apple's cores are cut on a lithography tuned two steps down. Just by lowering frequency 20% (and some voltage) from a core's maximum possible, you cut power consumption by 50% from the maximum possible. When you also use a step-down lithography that gives 20% less max frequency, you get that 20% cut in power consumption, making the core consume 60% less for just a 20% performance loss. If you go two steps down, you can have 64% of the frequency (80% * 80%) for 16% of the consumption (40% * 40%). That is a 4x performance-per-watt gain, and everyone can do it. The only problem is that you need to order the specific wafers from TSMC, and those wafers cannot be used for faster chips, just as the fast ones cannot be used for slower chips. If you keep that in mind, you will understand that AMD's chiplet design gives them another advantage, because they can solder together a 100F chiplet with an 80F one and a 64F one, consuming 100 W + 40 W + 16 W.
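A minimal sketch of the arithmetic in the claim above. Note the 20%-frequency-for-50%-power ratio is the poster's assumption, not a measured law:

```python
# Sketch of the scaling claim quoted above: each 20% cut in max frequency
# (with the accompanying voltage drop) is assumed to halve power.
# These ratios are the poster's assumption, not a physical law.

def scaled(steps: int) -> tuple[float, float]:
    """Return (relative frequency, relative power) after `steps` 20% cuts."""
    freq = 0.8 ** steps    # each step keeps 80% of the frequency
    power = 0.4 ** steps   # each step keeps 40% of the power (claimed)
    return freq, power

for n in range(3):
    f, p = scaled(n)
    print(f"{n} steps down: {f:.0%} frequency at {p:.0%} power "
          f"-> {f / p:.2f}x perf/watt")
```

Two steps reproduce the post's figures exactly: 64% frequency at 16% power, a 4x performance-per-watt gain.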
How is it that Apple's M1 and M2 series chips have such insanely higher performance per watt than all existing competition? If it's really as simple as you say, why isn't there an Intel- or AMD-powered laptop that can match the M1 in performance per watt, i.e. that can run at as low a power with as high a performance? I managed to dig this comparison up:
[Attached image: M1-Max-Chart-AT-640x464.jpg]
The M1 generally uses insanely less power with performance comparable to that high-end gaming laptop; in a lot of cases it uses less than half the power for the same results, against one of the best mobile chips Intel has ever made. 20% less clock speed would reduce power draw by 50%? Would it really, though? I don't believe it's that simple. I've been messing with this stuff myself to get my laptop to draw less power. In my testing I'm just limiting the frequency; I didn't go into the BIOS and I didn't change the voltage, but well...
(There's maybe a 1 W margin of error for these tests; I just had a wattmeter and ran the same kind of CPU stress test for each of them.)
800 MHz: 21.8 W
900 MHz: 22.2 W
1.0 GHz: 22.2 W
1.1 GHz: 23.2 W
1.2 GHz: missing (most likely ~24 W)
1.3 GHz: 25 W
1.5 GHz: 26 W
1.7 GHz: 27 W
1.9 GHz: 29 W
2.1 GHz: 31 W
2.2 GHz (core clock): 33 W
4.1 GHz (boost): 76 W
Now if you ask me, from the lowest setting I could use up to the core clock, the power-draw-to-clock-speed correlation looks mostly linear. If what you said were true, I would expect a much bigger power gap between the boost and core clock speeds. There is certainly a trend where each clock increase costs more and more as you go higher, but nowhere near "50% for the last 20%"; in fact, very very far from that.
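The "mostly linear" impression can be checked with a quick least-squares fit of the measured points (the missing 1.2 GHz reading is left out):

```python
# Least-squares line fit of the wattmeter readings quoted above, as
# (GHz, W) pairs; the missing 1.2 GHz point is omitted.
readings = [(0.8, 21.8), (0.9, 22.2), (1.0, 22.2), (1.1, 23.2),
            (1.3, 25.0), (1.5, 26.0), (1.7, 27.0), (1.9, 29.0),
            (2.1, 31.0), (2.2, 33.0)]

n = len(readings)
mean_f = sum(f for f, _ in readings) / n
mean_w = sum(w for _, w in readings) / n
slope = (sum((f - mean_f) * (w - mean_w) for f, w in readings)
         / sum((f - mean_f) ** 2 for f, _ in readings))
intercept = mean_w - slope * mean_f
print(f"fit: power ~ {slope:.1f} W/GHz * freq + {intercept:.1f} W")

# Extrapolating the linear trend to the 4.1 GHz boost point:
predicted_boost = slope * 4.1 + intercept
print(f"linear prediction at 4.1 GHz: {predicted_boost:.0f} W (measured: 76 W)")
```

On these readings the fit is indeed close to linear across the tested 0.8 to 2.2 GHz range, while the 76 W boost measurement lands far above the extrapolated line, consistent with power growing superlinearly once voltage has to rise at high clocks.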
Even at its highest performance levels my CPU doesn't match the M1's performance, though without turbo boost the power draw is certainly looking pretty decent (my CB23 multi-thread score would be in the range of 6000, maybe even a smidge under it; I didn't run it, but I checked scores for my CPU). So even when running at 70+ watts my Intel i7-8750H delivers only half the performance the M1 does at 39 W.
Now, the i9-11980HK happens to be a huge improvement, twice the performance for only 10 W more power draw; but that's still 2x the M1's power draw for relatively similar performance (a bit better, granted, but not greatly).
I am certainly accepting new knowledge, but I'm not so sure what you're telling me has much merit. The disparity between the M1 and its competition is just too huge. Surely if it were as simple as you say, Intel and AMD would be doing this too to compete with the M1 on power draw; they know how much it matters in the mobile device market.
Originally posted by rabcor View Post
If it's that simple, why aren't AMD and Intel doing it? Why does it seem like ARM and RISC-V manufacturers are the only ones doing this?
Originally posted by birdie View Post
Less evil AMD starts ripping off its customers a lot more than Intel ever has, as soon as it gains a competitive advantage.
From Sandy Bridge to Comet Lake, AMD had nothing even remotely close in terms of performance and power efficiency. Did Intel raise their pricing? Hell no. A few bucks' increase at most, here and there. Intel did it clandestinely; AMD does it openly, and apparently in this case that's totally fine.
AMD64 and Ryzen 5000 CPUs, on the other hand? Oh boy, AMD welcomes fat margins as soon as they can get them:
3600: $200 (Intel still competitive)
5600X: $300, a 50% price increase (Intel not really competitive)
3700X: $330 (Intel still competitive)
5800X: $450, a 36% price increase (Intel not really competitive)
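For what it's worth, the percentage jumps quoted above can be reproduced in a couple of lines:

```python
# Recomputing the generational price increases quoted above
# (launch MSRPs as given in the post).
pairs = {"3600 -> 5600X": (200, 300), "3700X -> 5800X": (330, 450)}
for label, (old, new) in pairs.items():
    increase = 100 * (new - old) / old
    print(f"{label}: {increase:.0f}% increase")
```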
The Athlon 64 FX-57 was released at a mind-bogglingly crazy $1,031! The Athlon 64 X2 4800+ went for $1,001. People seem to have such short memories about their favourite underdog. It's always only Intel that's bad. F it. I'm so fucking tired of it.
And of course you will come up with excuses for why only AMD can pull off such crap, and why Intel and NVIDIA are the worst companies in the world if they do it.
At the end of the day, people either buy products or they don't. From a purely economic point of view, if people willingly buy products at the raised price, it means the price was too low to begin with. No need to cry about capitalist reality.
Originally posted by Dukenukemx View Post
Thanks for this, you've opened my eyes a bit to the possibility that RISC architectures might actually not be taking over, and that Intel and AMD are catching up.
I still feel it's somewhat of a shocker that the M2's graphics are better than the Ryzen 7 6800U's; it certainly ain't no RTX 3090 like Apple claimed, but AMD is a long-time manufacturer of GPUs and should have an overwhelming advantage in this area.
Originally posted by atomsymbol
It is invalid to compare power consumption of CPU cores that are manufactured using different process nodes (such as 5nm and 7nm) or that are running at different frequencies (such as 2.2GHz and 3GHz).
As for measuring at ISO-frequency, that makes sense if you really care about "IPC". However, IPC is only relevant in context. If a core is designed to clock higher, it will tend to have lower IPC but might still achieve better single-thread performance. And maybe that's what someone really cares about.
Originally posted by atomsymbol
The µop cache, as well as the loop stream buffer, were introduced to Intel CPUs as a feature that (primarily) saves power and (secondarily) delivers higher IPC. I seriously doubt that ARM or RISC-V (with or without a µop cache) can beat the power-efficiency of x86's µop cache by more than a very small margin.
BTW, the A715 is the first A7x-series core to drop AArch32 support. Maybe that's the real reason they no longer need a MOP cache. It does illustrate how a winning microarchitectural feature for one ISA doesn't necessarily pull its weight for another.
Last edited by coder; 31 August 2022, 09:55 AM.
Originally posted by Dukenukemx View Post
I don't know how you can come to that conclusion, or how that even makes sense?
The other reason Apple can make larger cores is that they don't make CPUs with very many. If you're Intel or AMD, you need to worry about limiting the costs of your 56-core, 64-core, or 96-core CPUs. That puts downward pressure on core size, which means you need to clock them higher to deliver competitive performance. And that makes them less power-efficient.
Originally posted by Dukenukemx View Post
ARM is mostly for mobile phones and tablets. ARM is very old, like Panasonic 3DO old. If it hasn't displaced the x86 market by now, it's safe to say it never will. What about PowerPC, which was supposedly ready to displace x86?
Originally posted by Dukenukemx View Post
Nobody will ever transition towards ARM on desktops and laptops, for the same reason nobody on Android will use x86: the majority of software is not going to be compatible.
Originally posted by Dukenukemx View Post
As it stands right now, a lot of games on Mac OS X aren't being updated since the transition to 64-bit only and ARM.