Apple Announces Its New M2 Processor


  • #91
    Originally posted by tunnelblick View Post
    The M2 MBA is, again, fanless. What else is there to say? Wake me up when AMD or Intel come up with a chip that can be cooled with a heat pipe and delivers the same performance the M1/M2 offers.
    You can dislike Apple and their OS or their philosophy or whatever, but their CPUs are great.

    This. I hate Apple so much, but I still use a 13.3" M1 MacBook Pro and it is the best laptop I have ever bought. Not once have I heard its fan. Meanwhile, my Ryzen 3 laptop made me want to kill myself: I had to use it unplugged so the CPU would throttle, and even then the user experience was merely average.

    You can all throw your fancy copium numbers around, but the reality is that the M1 design is vastly superior from a perf/watt perspective. And that should be one of the main design goals for laptops, given how they are used.
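
    To make the perf/watt comparison concrete, here is a minimal sketch; every score and wattage below is a made-up placeholder, not a measurement from this thread:

    ```python
    # Hypothetical perf/watt comparison. All scores and package-power
    # figures are invented placeholders, not real benchmark data.
    chips = {
        "fanless ARM laptop": {"score": 7500, "watts": 15},
        "x86 laptop A":       {"score": 9000, "watts": 45},
        "x86 laptop B":       {"score": 7000, "watts": 35},
    }

    for name, c in chips.items():
        ppw = c["score"] / c["watts"]
        print(f"{name}: {c['score']} pts / {c['watts']} W = {ppw:.0f} pts/W")
    ```

    With these placeholder numbers the fanless chip lands at 500 pts/W versus 200 pts/W for the fastest x86 entry: a lower absolute score can still make for the better laptop chip.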



  • #92
    Originally posted by HEL88 View Post
    So ARM is a bad ISA too, because every high-performance ARM processor since 2019 (Cortex-A77, A78, X1, X2, Neoverse-V1, Neoverse-V2, Neoverse-N2) has a uOP cache like x86.
    The A78, X1, X2, V1, and V2 have a big ~3000-entry uOP cache like Zen 2/3; the others (A77, N2) have a smaller ~1500-entry one.

    There's a difference between a uop cache and an instruction decoding cache, much as Intel/AMD like to muddy the waters with the "but our CPUs are RISC on the inside!" tagline.

    The micro-op cache holds uops that have already been decoded and are waiting for reordering. The instruction decode cache stores translation mappings between instructions and the resulting uops. At any rate, for those chips that *do* feature some kind of instruction translation cache, you might actually want to check how much die area it consumes.
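
    A toy model of the distinction being drawn here, with every name invented for illustration and no claim about how any real core implements either structure:

    ```python
    # Toy sketch: uop cache vs. instruction-translation cache.
    # Nothing here models a real microarchitecture.

    def decode(instr_bytes):
        """Stand-in for the expensive decode step."""
        return ("uop_" + instr_bytes.hex(),)  # pretend: one uop per instruction

    uop_cache = {}          # fetch address -> already-decoded uops, ready to issue
    translation_cache = {}  # instruction bytes -> uop mapping, reusable across addresses

    def fetch_uops(addr, instr_bytes):
        if addr in uop_cache:        # hit: decode is skipped entirely
            return uop_cache[addr]
        uops = translation_cache.get(instr_bytes)
        if uops is None:             # miss in both: pay the full decode cost
            uops = decode(instr_bytes)
            translation_cache[instr_bytes] = uops
        uop_cache[addr] = uops
        return uops

    print(fetch_uops(0x1000, b"\x90"))  # first fetch: decode happens
    print(fetch_uops(0x1000, b"\x90"))  # uop-cache hit: no decode at all
    ```

    The die-area point maps onto the size of both dicts: every entry kept is silicon spent working around decode cost rather than on execution.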



  • #93
    Originally posted by Developer12 View Post
    I'm speaking in general. Even as early as the Pentium Pro and the Sun Ultra 5, the Pentium needed double the transistors for its instruction decoding and control. Its "RISC-like core" didn't save it from that embarrassment.

    Oh. So you are probably comparing the UltraSPARC IIi with the Pentium II, both from 1997. Hmm, all right, so the Pentium required a bit more energy for the same performance. But that is some 25 years ago. And where do you see UltraSPARC now? And what's its price/perf?

    So yes, I very much like SPARC, but honestly it lost. The question is whether x86/x64 is going to lose to ARM/RISC-V.



  • #94
    Originally posted by mangeek View Post
    I've been saying this for a while: if Intel made a similar consumer CPU package that had 16GB of very high-speed RAM next to the processor on one bus, and then had any additional user-added RAM hang off a CXL link, I think they'd sell like hotcakes and reduce their costs. They could make one part that covered 90% of desktop/laptop use cases, and maybe laser off half of the RAM or a few cores for the 'low end' models.

    I really don't think Intel is doing themselves any favors by making 80+ flavors of Alder Lake for every use case. Just make the one I mentioned above for 'casual computing' (with a handful of clock limits to stratify the market) and call them "Evo '22 [7/5/3]".

    They are making just three die types with ADL, not more.
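
    For a rough idea of what the quoted layout might look like to software, here is a two-tier allocator sketch; the pool sizes and class name are invented, and a real OS would express this as NUMA/tiered-memory policy rather than a Python class:

    ```python
    # Hypothetical two-tier memory pool: 16GB of fast on-package RAM,
    # spilling to slower CXL-attached RAM once the fast tier is full.
    FAST_POOL = 16 * 2**30   # on-package, one bus away from the cores
    CXL_POOL = 64 * 2**30    # user-added RAM behind a CXL link

    class TieredAllocator:
        def __init__(self):
            self.fast_free = FAST_POOL
            self.cxl_free = CXL_POOL

        def alloc(self, size):
            # Prefer the low-latency tier; fall back to CXL when it runs out.
            if size <= self.fast_free:
                self.fast_free -= size
                return "fast"
            if size <= self.cxl_free:
                self.cxl_free -= size
                return "cxl"
            raise MemoryError("both tiers exhausted")

    a = TieredAllocator()
    print(a.alloc(12 * 2**30))  # fast
    print(a.alloc(8 * 2**30))   # cxl: only 4GB left in the fast tier
    ```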



  • #95
    Originally posted by Developer12 View Post
    x86 chips still pay the price despite all the instruction caching they claim. There's no free lunch for having a bad ISA. That caching is of limited size, consumes massive amounts of die area in addition to the decoding circuitry, and the ISA still imposes a low limit on how quickly you can decode *new* code while following program execution. Since the dawn of the Pentium, x86 has always spent more than double the number of transistors to achieve the same performance.

    Yes, sure, x64 is paying the price indeed. And so what? ARM is paying the price too, although not as big a one. Every OoOE CPU pays a translation price. The question is price/energy/perf/availability, and on availability Apple loses. Let's see what their new Mac Pro will look like...
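
    On the "decoding *new* code" limit: a toy sketch of why variable-length decode is hard to parallelize. The instruction lengths below are fake; the point is only that boundaries in a variable-length ISA are discovered serially, while a fixed 4-byte ISA can hand each decoder an instruction immediately:

    ```python
    # Toy model with fake instruction lengths; not a real ISA.

    def variable_length_boundaries(code, length_of):
        """Each boundary depends on decoding the previous instruction,
        so the scan is inherently serial (the x86-style problem)."""
        offsets, pc = [], 0
        while pc < len(code):
            offsets.append(pc)
            pc += length_of(code[pc])  # must inspect byte at pc to learn the length
        return offsets

    def fixed_width_boundaries(code, width=4):
        """Fixed-length ISA: boundaries are known up front, so N decoders
        can work on N instructions in parallel."""
        return list(range(0, len(code), width))

    # fake encoding: the first byte of an instruction encodes its length
    lengths = {0x01: 1, 0x02: 2, 0x03: 3, 0x06: 6}
    code = bytes([0x02, 0, 0x01, 0x06, 0, 0, 0, 0, 0, 0x03, 0, 0])
    print(variable_length_boundaries(code, lambda b: lengths[b]))  # [0, 2, 3, 9]
    print(fixed_width_boundaries(bytes(12)))                       # [0, 4, 8]
    ```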



  • #96
    Originally posted by Developer12 View Post
    You can run a full copy of Debian on the M1, today, and the only thing that would affect benchmarks is the lack of power management. Yet the benchmarks largely come out the same between macOS and Linux.

    Well, that ultimately proves the point I was trying to make. While I wasn't aware you could daily-drive Linux on an M1 (part of me questions how usable it really is...), my point was that Linux isn't going to have such a distinct lead once you get a full-blown desktop running on it.



  • #97
    Originally posted by Anux View Post
    ...how was Apple able to close the performance gap of ARM in 7 years...

    I'm not an expert, but Apple's M1 is purpose-built and somewhat limited to consumer/prosumer use cases. Apple can ship basically one die every two years to cover their PC segment, plus a stripped-down cousin for their mobiles; Intel and AMD need their cores to span segments from embedded up to supercomputers. I don't think there's much magic to what Apple did with the M1/M2; I think it's what would happen if a CPU design were constrained to, and optimized strictly for, the PC use case.



  • #98
    Originally posted by mangeek View Post
    I'm not an expert, but Apple's M1 is purpose-built and somewhat limited to consumer/prosumer use cases. Apple can ship basically one die every two years to cover their PC segment, plus a stripped-down cousin for their mobiles; Intel and AMD need their cores to span segments from embedded up to supercomputers. I don't think there's much magic to what Apple did with the M1/M2; I think it's what would happen if a CPU design were constrained to, and optimized strictly for, the PC use case.

    "Need" is a very strong word. Nothing stops them from segmenting their products beyond just binning. Just as ARM (as a family of designs) is segmented with different designs for different use cases, neither AMD nor Intel has anything getting in their way. Heck, both have very decent people working on CPU, GPU, and I/O designs.



  • #99
    Originally posted by mangeek View Post
    I'm not an expert, but Apple's M1 is purpose-built and somewhat limited to consumer/prosumer use cases. Apple can ship basically one die every two years to cover their PC segment, plus a stripped-down cousin for their mobiles; Intel and AMD need their cores to span segments from embedded up to supercomputers. I don't think there's much magic to what Apple did with the M1/M2; I think it's what would happen if a CPU design were constrained to, and optimized strictly for, the PC use case.

    If that were the case, then Ampere, MediaTek, Qualcomm, and Nvidia would have already made an ARM-based chip with better performance. The only company that looked competitive was Nuvia, which was run by an ex-Apple engineer. Qualcomm bought them, so we should have some decent ARM laptops/desktops in 2023. Intel is aiming to compete with Apple with Lunar Lake in 2024; no firm word from AMD, though they have been hinting at releasing a new ARM chip.



  • #100
    Originally posted by WannaBeOCer View Post
    If that were the case, then Ampere, MediaTek, Qualcomm, and Nvidia would have already made an ARM-based chip with better performance. The only company that looked competitive was Nuvia, which was run by an ex-Apple engineer. Qualcomm bought them, so we should have some decent ARM laptops/desktops in 2023. Intel is aiming to compete with Apple with Lunar Lake in 2024; no firm word from AMD, though they have been hinting at releasing a new ARM chip.

    This. Nvidia kind of is coming, though, with Grace, considering it's supposed to have 72 ARMv9 cores on a single package.

