Apple Announces Its New M2 Processor
Originally posted by sinepgib View Post"Need" is a very strong word. Nothing stops them from segmenting their products besides the binning stage.
Just as ARM (as a family of designs) is segmented with different designs for different use cases, neither AMD nor Intel has anything getting in their way. Heck, both have very decent people working on CPU, GPU and IO designs.
Intel has no obstacles to designing zillions of different dies, and they can produce them at their own fabs or competing fabs. And they can clearly target different segments with varying amounts of "Core" P-cores and "Atom" E-cores. Alder Lake-N will have no P-cores and 8 E-cores, for example.
Originally posted by Anux View PostExactly, AMD has a server chip that they use for desktops, workstations, servers and supercomputers and a second design for laptops and embedded. All those chips need to be able to work with different motherboards and other devices and provide high performance in many different workloads. Apple's chip is tailored to 3 different hardware configurations and a very limited use case.
Besides, that strategy failed Intel already when they tried to get into phones*, didn't it? Maybe they should start considering one size doesn't fit all?
Regarding embedded, from what I've seen neither Intel nor AMD is really that successful in that field. Haven't seen an x86 router, cellphone, or portable PoS (kiosks are pretty much underpowered desktops) in ages, and the only ones I've seen were from the 80186 family. I'm not sure their current designs can go low-power enough. Even the ARM families we see in cellphones are often too much for tiny embedded, so there's the Cortex-M family IIRC that acts more like microcontrollers.
*I don't know the details tho, so if anyone can enlighten me about why it flopped, I'd be thankful. I kinda wanted to have an Intel phone at the time, but by the time I could actually buy a smartphone it had died already.
Originally posted by piotrj3 View PostThis. Nvidia kind of is coming tho with Grace, considering it is supposed to have 72 ARMv9 cores on a single package.
I'd also be worried about how much RAM you would need for that many cores to be useful. Say, 8GB of RAM would mean you have about 100MB available per parallel process if you want full utilization, as a bit of a quick and dirty metric. If you end up waiting for disk because everything is paging out, your cores will be underutilized, i.e. pointless.
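To make that quick-and-dirty metric concrete, here is a minimal Python sketch. It only restates the arithmetic in the post: the 72-core count comes from the Grace mention above, the RAM sizes are arbitrary example figures, and mem_per_process_mb is a hypothetical helper name, not anything from a real library.

Code:
# Naive upper bound: split total RAM evenly across one busy process per core.
# 72 cores is the Grace figure quoted above; RAM sizes are example values.
def mem_per_process_mb(total_ram_gb, cores=72):
    return total_ram_gb * 1024 / cores

for ram_gb in (8, 64, 512):
    print(f"{ram_gb} GB / 72 cores = {mem_per_process_mb(ram_gb):.0f} MB per process")
# 8 GB works out to ~114 MB per process, the "about 100MB" ballpark above.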
Originally posted by WannaBeOCer View PostIf that was the case then Ampere, MediaTek, Qualcomm and Nvidia would have already made an ARM-based chip with better performance.
I'm not entirely skeptical about the difference being just packaging and process. But it's really not all that important: do we have that packaging and process today for the competition? Do they offer something in the same range of perf/watt at a decent perf? No? Call me when they do, I may be interested.
People here are too accustomed to thinking like IT people and should put on the consumer hat from time to time. The consumer gives exactly 0 shits about whether the ISA is elegant, the process is small or the system is packed in a single die; they care about what they can experience. One model is giving them a relatively fast laptop without noisy fans, with virtually no overheating despite that, and with a battery that lasts ages longer than anything they've used. Coulda shoulda is something that technical people care about, not consumers. Hypothetical ways for Intel and AMD to get there are something Intel's and AMD's engineers care about, not the user.
Originally posted by jaxa View PostAMD took the one-chiplet-fits-all approach for Ryzen/Threadripper/Epyc (Zen 2) partly because they were in dire financial straits coming out of the Bulldozer era, and stuck in an unfavorable agreement with GlobalFoundries (so they combined TSMC 7nm chiplets with GlobalFoundries 14nm). The great yields were a bonus. Today they are doing much better than 5 years ago and can afford to experiment with more monolithic dies, and different types of chiplets like Zen 4C. Now they can also use 3D packaging for segmentation. All Zen 3 chiplets can be connected to 3D cache using TSVs.
Intel has no obstacles to designing zillions of different dies, and they can produce them at their own fabs or competing fabs. And they can clearly target different segments with varying amounts of "Core" P-cores and "Atom" E-cores. Alder Lake-N will have no P-cores and 8 E-cores, for example.
Originally posted by Developer12 View PostNo, I mean literal performance. Instructions completed per second.
Originally posted by Developer12 View PostEven if you go all the way back to the Pentium Pro and a Sun Ultra 5, the Pentium has twice as many transistors to reach even close to the same level of performance. The difference between them, or between modern x86 chips and the M1, is that you burn too many transistors on pipeline control and instruction decoding and instruction caching when implementing the x86 ISA.
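To illustrate the decode part of that argument, here is a toy Python sketch under loudly stated assumptions: the encodings below are invented for illustration (a one-byte length prefix stands in for x86's variable-length encoding) and nothing here models real hardware. The point is only the serial dependency: with a fixed width every instruction boundary is known up front, while with variable widths boundary N+1 is only known after instruction N is decoded, which is part of why x86 decoders spend extra transistors predicting boundaries.

Code:
def fixed_width_boundaries(code, width=4):
    # Every start offset is known immediately, so a wide decoder can
    # split the byte stream across parallel decode slots trivially.
    return list(range(0, len(code), width))

def variable_width_boundaries(code):
    # Toy encoding: the first byte of each instruction is its length
    # (assumed >= 1). Offset N+1 is only known after instruction N is
    # decoded, so boundary finding is inherently serial.
    offsets, pos = [], 0
    while pos < len(code):
        offsets.append(pos)
        pos += code[pos]
    return offsets

print(fixed_width_boundaries(bytes(16)))                     # [0, 4, 8, 12]
print(variable_width_boundaries(bytes([2, 0, 3, 0, 0, 1])))  # [0, 2, 5]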
Originally posted by sinepgib View Post
I find it hard not to read this cope as "well AMD just doesn't give a fuck about this market segment", which is OK but why so much effort to believe it's better?
There's no law that says a company can only have one design for all their clients. Their consumer user base is much greater than that of Apple, so they probably have the cash to make two or three designs instead of a single one.
Selling CPUs to manufacturers gives much less profit than selling a finished product to the end user while tripling the price of every upgrade and accessory. Just look at the money that Apple and AMD get each year and tell me how AMD should be able to design an additional chip that would cost a few billion without knowing if they can sell it to anyone?
Besides, that strategy failed Intel already when they tried to get into phones*, didn't it?
Regarding embedded, from what I've seen neither Intel nor AMD is really that successful in that field. Haven't seen an x86 router, cellphone, or portable PoS (kiosks are pretty much underpowered desktops) in ages
Edit:
Originally posted by Weasel View PostBecause if it's the former, one ARM instruction is probably worth 1/5 of an x86 ISA instruction due to the stupid RISC ISA.Last edited by Anux; 09 June 2022, 10:04 AM.
Originally posted by sinepgib View PostBut theirs were aimed at devices with a fraction of the power, weren't they?
I'm not entirely skeptical about the difference being just packaging and process. But it's really not all that important: do we have that packaging and process today for the competition? Do they offer something in the same range of perf/watt at a decent perf? No? Call me when they do, I may be interested.
People here are too accustomed to thinking like IT people and should put on the consumer hat from time to time. The consumer gives exactly 0 shits about whether the ISA is elegant, the process is small or the system is packed in a single die; they care about what they can experience. One model is giving them a relatively fast laptop without noisy fans, with virtually no overheating despite that, and with a battery that lasts ages longer than anything they've used. Coulda shoulda is something that technical people care about, not consumers. Hypothetical ways for Intel and AMD to get there are something Intel's and AMD's engineers care about, not the user.
Qualcomm is the only ARM semiconductor company that’s trying to compete against Apple’s laptop/desktop line with their 8cx. As I mentioned earlier, Qualcomm can’t compete with their own architecture, which is why they purchased Nuvia. We won’t see competitive products from Intel/AMD until 2024.Last edited by WannaBeOCer; 09 June 2022, 10:36 AM.
Originally posted by Anux View PostNot sure where I said that AMD doesn't care for that segment because they offer low-power notebook chips. If someone wanted, they could probably build a custom SoC like the Apple one (see consoles). Also, I never said that one is better; which one, and how do you define better?
The "which one is better" is pretty much all the thread discusses, I didn't check if you specifically supported that thesis.
The definition of better I use is how well a product fulfills its mission, how it meets the requirements of the task at hand. This makes it intrinsically relative.
So, x86 seems to definitely be better for general purpose; indeed, the only place where it never really took off (not counting the first years, of course) is really low-power embedded. For everything else it's quite appropriate, even if not the best for some.
But for consumer laptops specifically, Apple seems quite a bit better, and ARM in general for embedded and phones. Why? Because your priorities there are to have a decent performance baseline (CHECKED), moderate heat dissipation (CHECKED) and long battery life, and in all those aspects it beats the current x86 chips on the market. Is it possible that it's not the CPU design per se that makes the difference? Yes, of course. But being pragmatic, if it's not in the market it's only a nice theory.
Originally posted by Anux View PostThere are the laws of physics and capitalism. AMD doesn't build smartphones or laptops; they build general-purpose CPUs and sell them to many different companies that build stuff with them.
Originally posted by Anux View PostSelling CPUs to manufacturers gives much less profit than selling a finished product to the end user while tripling the price of every upgrade and accessory. Just look at the money that Apple and AMD get each year and tell me how AMD should be able to design an additional chip that would cost a few billion without knowing if they can sell it to anyone?
Certainly Apple made much more than AMD; it even quadruples Intel's revenue. But then again, such a design takes a handful of engineers and wouldn't make a dent in that profit. They have the know-how; they're not stupid.
Intel already designs mainboards BTW; don't those count as (almost) complete devices?
Originally posted by Anux View PostJepp, like I mentioned, the Atom was built on old processes to reduce cost and therefore couldn't compete with ARMs on the newest process
Originally posted by Anux View Postalso there was a big ecosystem of ARM apps that didn't run on x86, which may have been the bigger problem (see Windows Phone).
Originally posted by Anux View PostWhat exactly is your point here? My friend had a smartphone with an Atom (it was garbage). I haven't seen an Apple chip in a router or other embedded devices either. And the chip in the Apple Watch is a completely different design and not at all comparable to the M1.
Apple chips aren't appropriate for that kind of embedded either, of course. But they don't aim for one-size-fits-all, and they don't intend to cover such spaces. They don't try to take a server-optimized chip and use it everywhere.
Originally posted by WannaBeOCer View PostI brought up those semiconductors since we haven’t seen any of them touch the single-threaded performance of even Apple’s A13.
Qualcomm is the only ARM semiconductor company that’s trying to compete against Apple’s laptop/desktop line with their 8cx. As I mentioned earlier, Qualcomm can’t compete with their own architecture, which is why they purchased Nuvia. We won’t see competitive products from Intel/AMD until 2024.
Originally posted by sinepgib View Post
I see. I sincerely hope AMD/Intel makes something comparable to the M1 at some point. While I do like the M1 (as I said, I had a brilliant experience), I don't like MacOS and, despite the valiant efforts from the people doing Asahi Linux, most Mac users are actually happy with MacOS, so it'll remain a less Linux-friendly alternative with a rather niche Linux user base (which means it's harder to find answers for hardware-specific issues), and it'll also take a long time for it to be fully supported. I wouldn't buy an M1/M2/whatever precisely for that reason (I bought a Lenovo last year anyway, and I generally don't change computers for about a decade).