Apple Announces Its New M2 Processor


  • Originally posted by mangeek View Post

    I'm not an expert, but Apple's M1 is purpose-built and somewhat limited to consumer/prosumer use cases.
    Exactly. AMD has a server chip that they use for desktops, workstations, servers and supercomputers, and a second design for laptops and embedded. All those chips need to be able to work with different motherboards and other devices and provide high performance in many different workloads. Apple's chip is tailored to 3 different hardware configurations and a very limited use case.

    • Originally posted by sinepgib View Post
      "Need" is a very strong word. Nothing stops them from segmenting their products besides the binning stage.
      Just as ARM (as a family of designs) is segmented with different designs for different use cases, neither AMD nor Intel have anything getting in their way. Heck, both have very decent people working on CPU, GPU and IO designs.
      AMD took the one-chiplet-fits-all approach for Ryzen/Threadripper/Epyc (Zen 2) partly because they were in dire financial straits coming out of the Bulldozer era, and stuck in an unfavorable agreement with GlobalFoundries (so they combined TSMC 7nm chiplets with GlobalFoundries 14nm). The great yields were a bonus. Today they are doing much better than 5 years ago and can afford to experiment with more monolithic dies, and different types of chiplets like Zen 4C. Now they can also use 3D packaging for segmentation. All Zen 3 chiplets can be connected to 3D cache using TSVs.

      Intel has no obstacles to designing zillions of different dies, and they can produce them at their own fabs or competing fabs. And they can clearly target different segments with varying amounts of "Core" P-cores and "Atom" E-cores. Alder Lake-N will have no P-cores and 8 E-cores, for example.

      • Originally posted by Anux View Post
        Exactly. AMD has a server chip that they use for desktops, workstations, servers and supercomputers, and a second design for laptops and embedded. All those chips need to be able to work with different motherboards and other devices and provide high performance in many different workloads. Apple's chip is tailored to 3 different hardware configurations and a very limited use case.
        I find it hard not to read this cope as "well AMD just doesn't give a fuck about this market segment", which is OK, but then why so much effort to argue it's better? There's no law that says a company can only have one design for all their clients. Their consumer user base is much greater than that of Apple, so they probably have the cash to make two or three designs instead of a single one.
        Besides, that strategy failed Intel already when they tried to get on phones*, didn't it? Maybe they should start considering that one size doesn't fit all?
        Regarding embedded, from what I've seen neither Intel nor AMD is really that successful in that field. I haven't seen an x86 router, cellphone, or portable PoS (kiosks are pretty much underpowered desktops) in ages, and the only ones I've seen were from the 80186 family. I'm not sure their current designs can go low-power enough. Even the ARM families we see in cellphones are often too much for tiny embedded, so there's the Cortex-M family, IIRC, that acts more like microcontrollers.

        *I don't know the details tho, so if anyone can enlighten me about why it flopped, I'd be thankful. I kinda wanted to have an Intel phone at the time, but by the time I could actually buy a smartphone it had died already.

        Originally posted by piotrj3 View Post
        This. Nvidia kind of is coming tho with Grace, considering it is supposed to have 72 ARMv9 cores on a single package.
        I don't think having 72 cores is what will make the difference for consumers. It could make a cool server tho, or maybe a workstation for certain demanding tasks. But for consumers you don't need that many general-purpose cores; you need fewer cores and more of the rest of the system on the die (controllers, GPU, and now that people put ML in their soup it wouldn't be an entirely bad idea to put in some TPUs, tho not a must IMO). SoCs have existed since forever after all, and the M1 is pretty much a well-executed SoC.
        I'd also be worried about how much RAM you would need for that many cores to be useful. Say, 8GB of RAM would mean you have about 100MB available per parallel process if you want full utilization, as a bit of a quick and dirty metric. If you end up waiting for disk because everything is paging out, your cores will be underutilized, i.e. pointless.
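
        As a quick back-of-the-envelope check of that metric (a sketch only; the 72-core count is Grace's quoted figure from above, and the 8GB is the hypothetical from the post):

            /* Memory available per fully-busy core, as a quick and dirty metric. */
            #include <stdio.h>

            int main(void) {
                const double ram_mib = 8.0 * 1024.0; /* hypothetical 8 GiB of total RAM */
                const int cores = 72;                /* Grace's quoted core count */
                printf("%.0f MiB per core\n", ram_mib / cores); /* prints: 114 MiB per core */
                return 0;
            }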

        Originally posted by WannaBeOCer View Post
        If that were the case, then Ampere, MediaTek, Qualcomm and Nvidia would have already made an ARM-based chip with better performance.
        But theirs were aimed at devices with a fraction of the power, weren't they?
        I'm not entirely skeptical of the difference being just packaging and process. But it's really not all that important: do we have that packaging and process today from the competition? Do they offer something in the same range of perf/watt at a decent perf? No? Call me when they do, I may be interested.
        People here are too accustomed to thinking like IT and should put on the consumer hat from time to time. The consumer gives exactly 0 shits about whether the ISA is elegant, the process is small or the system is packed in a single die; they care about what they can experience. One model is giving them a relatively fast laptop without noisy fans, with virtually no overheating despite that, and with a battery that lasts ages longer than anything they've used. Coulda shoulda is something that technical people care about, not consumers. Hypothetical ways for Intel and AMD to get there are something Intel's and AMD's engineers care about, not the user.

        • Originally posted by jaxa View Post
          AMD took the one-chiplet-fits-all approach for Ryzen/Threadripper/Epyc (Zen 2) partly because they were in dire financial straits coming out of the Bulldozer era, and stuck in an unfavorable agreement with GlobalFoundries (so they combined TSMC 7nm chiplets with GlobalFoundries 14nm). The great yields were a bonus. Today they are doing much better than 5 years ago and can afford to experiment with more monolithic dies, and different types of chiplets like Zen 4C. Now they can also use 3D packaging for segmentation. All Zen 3 chiplets can be connected to 3D cache using TSVs.

          Intel has no obstacles to designing zillions of different dies, and they can produce them at their own fabs or competing fabs. And they can clearly target different segments with varying amounts of "Core" P-cores and "Atom" E-cores. Alder Lake-N will have no P-cores and 8 E-cores, for example.
          Fair enough.

          • Originally posted by Developer12 View Post
            No, I mean literal performance. Instructions completed per second.
            Are you talking about ISA instructions or micro-ops? Because if it's the former, one ARM instruction is probably worth 1/5 of an x86 ISA instruction due to the stupid RISC ISA.

            Originally posted by Developer12 View Post
            Even if you go all the way back to the Pentium Pro and a Sun Ultra 5, the Pentium has twice as many transistors to reach even close to the same level of performance. The difference between them, or between modern x86 chips and the M1, is that you burn too many transistors on pipeline control and instruction decoding and instruction caching when implementing the x86 ISA.
            No, it's negligible, and offset by the fact that they each do more. Even the micro-ops are fused these days (even on ARM, which they have to do extra work for).
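
            To illustrate the direction of that claim (a minimal sketch; actual compiler output varies with flags and microarchitecture, and this says nothing about the exact 1/5 figure): a read-modify-write that x86 encodes as one ISA instruction takes three on a load/store ISA like AArch64.

                /* One C statement, two typical instruction sequences (illustrative only). */
                void bump(long *counter, long delta) {
                    *counter += delta;
                    /* x86-64: one ISA instruction, decoded internally into
                     * load + add + store micro-ops:
                     *     add [rdi], rsi
                     * AArch64: a load/store ISA needs three instructions:
                     *     ldr x2, [x0]
                     *     add x2, x2, x1
                     *     str x2, [x0]
                     */
                }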

            • Originally posted by sinepgib View Post

              I find it hard not to read this cope as "well AMD just doesn't give a fuck about this market segment", which is OK, but then why so much effort to argue it's better?
              Not sure where I said that AMD doesn't care for that segment, given that they offer low-power notebook chips. If someone wanted, they could probably build a custom SoC like the Apple one (see consoles). Also, I never said that one is better; which one, and how do you define better?

              There's no law that says a company can only have one design for all their clients. Their consumer user base is much greater than that of Apple, so they probably have the cash to make two or three designs instead of a single one.
              There are the laws of physics and capitalism. AMD doesn't build smartphones or laptops; they build general-purpose CPUs and sell them to many different companies that build stuff with them.
              Selling CPUs to manufacturers gives much less profit than selling a finished product to the end user while tripling the price of every upgrade and accessory. Just look at the money that Apple and AMD make each year and tell me how AMD should be able to design an additional chip that would cost a few billion without knowing if they can sell it to anyone.

              Besides, that strategy failed Intel already when they tried to get on phones*, didn't it?
              Yep, like I mentioned, the Atom was built on old processes to reduce cost and therefore couldn't compete with ARM chips on the newest process. Also, there was a big ecosystem of ARM apps that didn't run on x86, which may have been the bigger problem (see Windows Phone).

              Regarding embedded, from what I've seen neither Intel nor AMD is really that successful in that field. I haven't seen an x86 router, cellphone, or portable PoS (kiosks are pretty much underpowered desktops) in ages
              What exactly is your point here? My friend had a smartphone with an Atom (it was garbage). I haven't seen an Apple chip in a router or other embedded devices either. And the chip in the Apple Watch is a completely different design and not at all comparable to the M1.

              Edit:
              Originally posted by Weasel View Post
              Because if it's the former, one ARM instruction is probably worth 1/5 of an x86 ISA instruction due to the stupid RISC ISA.
              I call bullshit. What's your method of measuring the "worth" of an instruction?
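
              One concrete way to put a number on it would be to compile the same workload for x86-64 and AArch64 and compare retired-instruction counts on each machine, e.g. with perf stat -e instructions (a sketch of one possible method, not necessarily what either poster has in mind):

                  /* Build for each ISA and run under `perf stat -e instructions ./a.out`
                   * to compare dynamic (retired) instruction counts for the same work.
                   * Purely illustrative. */
                  #include <stdint.h>
                  #include <stdio.h>

                  int main(void) {
                      volatile uint64_t acc = 0; /* volatile keeps the loop from being optimized away */
                      for (uint64_t i = 0; i < 100000000ULL; i++)
                          acc += i;
                      printf("%llu\n", (unsigned long long)acc);
                      return 0;
                  }
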
              Last edited by Anux; 09 June 2022, 10:04 AM.

              • Originally posted by sinepgib View Post
                But theirs were aimed at devices with a fraction of the power, weren't they?
                I'm not entirely skeptical of the difference being just packaging and process. But it's really not all that important: do we have that packaging and process today from the competition? Do they offer something in the same range of perf/watt at a decent perf? No? Call me when they do, I may be interested.
                People here are too accustomed to thinking like IT and should put on the consumer hat from time to time. The consumer gives exactly 0 shits about whether the ISA is elegant, the process is small or the system is packed in a single die; they care about what they can experience. One model is giving them a relatively fast laptop without noisy fans, with virtually no overheating despite that, and with a battery that lasts ages longer than anything they've used. Coulda shoulda is something that technical people care about, not consumers. Hypothetical ways for Intel and AMD to get there are something Intel's and AMD's engineers care about, not the user.
                I brought up those semiconductor companies since we haven’t seen any of them touch the single-threaded performance of even Apple’s A13.

                Qualcomm is the only ARM semiconductor company that’s trying to compete against Apple’s laptop/desktop line with their 8cx. As I mentioned earlier, Qualcomm can’t compete with their own architecture, which is why they purchased Nuvia. We won’t see competitive products from Intel/AMD until 2024.
                Last edited by WannaBeOCer; 09 June 2022, 10:36 AM.

                • Originally posted by Anux View Post
                  Not sure where I said that AMD doesn't care for that segment, given that they offer low-power notebook chips. If someone wanted, they could probably build a custom SoC like the Apple one (see consoles). Also, I never said that one is better; which one, and how do you define better?
                  You didn't say it, but the fact that they do one-size-fits-all speaks for itself. It says, at least, "this market is not worth actively targeting". Maybe it's a matter of incentives: Apple makes only consumer devices (I'm counting the somewhat beefier Mac Minis tho), so it certainly is worth focusing on, as it is all the market they have.

                  The "which one is better" is pretty much all the thread discusses, I didn't check if you specifically supported that thesis.
                  The definition of better I use is how well a product fulfills its mission, how it meets the requirements of the task at hand. This makes it intrinsically relative.
                  So, x86 seems to definitely be better for general purpose, indeed the only place where it never really took off (not counting the first years of course) is really low power embedded. For everything else it's quite appropriate, even if not the best for some.
                  But for consumer laptops specifically Apple seems quite better, and ARM in general for embedded and phones. Why? Because your priorities there are to have a decent performance baseline (CHECKED), a moderate heat dissipation (CHECKED) and long battery life, and in all those aspects it beats the current x86 chips in the market. Is it possible that it's not the CPU design per se that makes the difference? Yes, of course. But being pragmatic, if it's not in the market it's only a nice theory.

                  Originally posted by Anux View Post
                  There are the laws of physics and capitalism. AMD doesn't build smartphones or laptops, they build general purpose CPUs and sell them to many different companies that build stuff with it.
                  The laws of physics and capitalism don't stop you from selling SoCs to laptop manufacturers and powerful but power-hungry chips for desktops and servers. The laws of physics clearly didn't stop Apple, and last time I checked the same laws apply to everyone. The laws of capitalism pretty much say "if you can profit from it, it can be done". And again, Apple sells well and makes a profit.

                  Originally posted by Anux View Post
                  Selling CPUs to manufacturers gives much less profit than selling a finished product to the end user while tripling the price of every upgrade and accessory. Just look at the money that Apple and AMD make each year and tell me how AMD should be able to design an additional chip that would cost a few billion without knowing if they can sell it to anyone.
                  While vertical integration definitely creates more revenue, isn't the fact that the M1 is a commercial success enough evidence that such a chip would have a market? Besides, most economic liberals will always claim that the reason capitalists get the most of the companies' earnings is because they take the risks.
                  Certainly Apple makes much more than AMD; it even quadruples Intel's revenue. But then again, an extra design takes a relatively small team of engineers and wouldn't make a dent in that profit. They have the know-how; they're not stupid.
                  Intel already designs mainboards, BTW; don't those count as (almost) complete devices?

                  Originally posted by Anux View Post
                  Yep, like I mentioned, the Atom was built on old processes to reduce cost and therefore couldn't compete with ARM chips on the newest process
                  Didn't Intel have the lead in terms of process at the time? Why use the old one? If it was cost-effective for ARM, what made it different for Intel?

                  Originally posted by Anux View Post
                  Also, there was a big ecosystem of ARM apps that didn't run on x86, which may have been the bigger problem (see Windows Phone).
                  Most of it was Java, wasn't it? Either Java's promise didn't hold (likely) or the ecosystem should have been (mostly) portable. But I agree on the bottom line: smartphones without apps are but fancy toys.

                  Originally posted by Anux View Post
                  What exactly is your point here? My friend had a smartphone with an Atom (it was garbage). I haven't seen an Apple chip in a router or other embedded devices either. And the chip in the Apple Watch is a completely different design and not at all comparable to the M1.
                  Not an Apple chip, of course, but ARM. Even MIPS. The point of the comparison was that evidently one size fits all didn't fit that market. In the particular case of ARM, what made it suitable is probably that the cores are just IP, which makes them flexible for use in higher-level designs built around them (i.e. the ability to make SoCs further down the chain), rather than discrete units as most x86 chips are.
                  Apple chips aren't appropriate for that kind of embedded either, of course. But Apple doesn't aim for one size fits all and doesn't intend to cover such spaces. They don't try to take a server-optimized chip and use it everywhere.

                  • Originally posted by WannaBeOCer View Post
                    I brought up those semiconductor companies since we haven’t seen any of them touch the single-threaded performance of even Apple’s A13.

                    Qualcomm is the only ARM semiconductor company that’s trying to compete against Apple’s laptop/desktop line with their 8cx. As I mentioned earlier, Qualcomm can’t compete with their own architecture, which is why they purchased Nuvia. We won’t see competitive products from Intel/AMD until 2024.
                    I see. I sincerely hope AMD/Intel make something comparable to the M1 at some point. While I do like the M1 (as I said, I had a brilliant experience), I don't like MacOS and, despite the valiant efforts from the people doing Asahi Linux, most Mac users are actually happy with MacOS, so it'll remain a less Linux-friendly alternative with a rather niche Linux user base (which means it's harder to find answers for hardware-specific issues), and it'll also take a long time for it to be fully supported. I wouldn't buy an M1/M2/whatever precisely for that reason (I bought a Lenovo last year anyway, and I generally don't change computers for about a decade).

                    • Originally posted by sinepgib View Post

                      I see. I sincerely hope AMD/Intel make something comparable to the M1 at some point. While I do like the M1 (as I said, I had a brilliant experience), I don't like MacOS and, despite the valiant efforts from the people doing Asahi Linux, most Mac users are actually happy with MacOS, so it'll remain a less Linux-friendly alternative with a rather niche Linux user base (which means it's harder to find answers for hardware-specific issues), and it'll also take a long time for it to be fully supported. I wouldn't buy an M1/M2/whatever precisely for that reason (I bought a Lenovo last year anyway, and I generally don't change computers for about a decade).
                      As ravyne explained earlier, you are unlikely to see something in the same ballpark as the M1 using x86/x64. AMD at one point was experimenting with an ARM processor (not sure what happened to it?), so maybe we will see something come out of that.
