Apple Announces Its New M2 Processor


  • #61
    - Apple claims the M2 can deliver 87% of the performance of a 12-core PC (Windows) laptop chip at 25% of the power consumption.
    And maybe up to 60% of the performance of an unoptimized Linux distribution on the same hardware.
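    Apple's figures imply a large performance-per-watt gap; a quick back-of-the-envelope check in Python (a sketch using only the 87% and 25% numbers claimed above):

```python
# Apple's claim: 87% of a 12-core PC laptop chip's performance
# at 25% of that chip's power consumption.
pc_perf, pc_power = 1.0, 1.0      # normalize the PC chip to 1.0
m2_perf, m2_power = 0.87, 0.25    # Apple's claimed figures

advantage = (m2_perf / m2_power) / (pc_perf / pc_power)
print(round(advantage, 2))  # 3.48 -> ~3.5x perf/W, if the claim holds
```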

    Comment


    • #62
      Originally posted by luno View Post

      isn't their Kernel Open Source too ?
      It doesn't matter, because it's a broken POS.

      Comment


      • #63
        Originally posted by luno View Post
        isn't their Kernel Open Source too ?
        More or less. They have XNU, which is something like a kFreeBSD + Mach Frankenstein's monster, I think. It's BSD-licensed, but their release is closed source, and many of the features this depends on are probably part of whatever closed extensions they add. Besides, pretty much nobody else uses it, so it's still just whatever Apple puts into it, rather than the shared effort that the Linux kernel achieves in terms of performance. Yes, you're free to do whatever you want with the code in that repo, but there's not a lot you can do with it that is actually useful, and it's probably not enough to build a custom kernel that will boot your macOS.

        Comment


        • #64
          Originally posted by Developer12 View Post

          X86 chips still pay the price despite all the instruction caching they claim. There's no free lunch for having a bad ISA. That caching is of limited size, consumes massive amounts of die area in addition to the decoding circuitry, and the ISA still imposes a low limit on how quickly you can decode *new* code while following program execution. Since the dawn of the Pentium, x86 has always spent more than double the number of transistors to achieve the same performance.
          The x86 efficiency cost of being an old architecture is about 5%, according to some Intel engineers. Internally the processor is RISC-like anyway; the only extra cost is decoding CISC instructions into RISC-like micro-ops.

          In fact, AMD's 6xxx-series mobile CPUs trade blows very well with the M1 on efficiency: they lose a little in single-threaded scores, but actually beat the M1 in performance per watt in the 8-core configuration.

          By far the biggest factor is Apple's memory configuration (RAM chips soldered very close to the CPU), so Apple pays a smaller price for anything not in cache, plus probably a less complicated I/O part of the die thanks to the closed platform.

          And lastly, the number of transistors ... oh boy.

          The 12900K is around ~10 billion transistors (no official figure, and that is likely an overestimate).
          The RTX 3090 is 28.3 billion transistors (official figure).

          The M1 Ultra is 114 billion transistors. In a nutshell, in the silicon area of one M1 Ultra you can fit three 12900K + RTX 3090 combos.

          Now, yes, the M1 Ultra is faster than the 12900K in multithreaded workloads and consumes less power. But we are talking about an insane transistor difference. If the 12900K spent 5 times as many transistors on just E-cores, tuned towards, say, 3 GHz, that would bring huge efficiency improvements. And in single-threaded work the 12900K wins.

          At least in the CPU war Apple is competitive. But when we get to the RTX 3090 ... oh boy. In Blender/V-Ray etc. (and I am talking about CUDA/Vulkan vs Metal) we are talking about a 500% performance difference in favor of the RTX 3090. In fact, the stock RTX 3090 is more power-efficient per unit of work, measured on whole-computer consumption, than the M1 Ultra. And the RTX 3090 is seen as "the inefficient one", where dropping the TDP to 75% still retains 96% of the performance.
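          The transistor-budget and TDP claims above can be sanity-checked with simple arithmetic (figures exactly as cited in this post, not independently verified):

```python
# Transistor counts as cited above, in billions.
i12900k = 10.0     # unofficial estimate, possibly an overestimate
rtx3090 = 28.3     # official figure
m1_ultra = 114.0   # official figure

# How many 12900K + RTX 3090 combos fit in one M1 Ultra's budget?
combos = m1_ultra / (i12900k + rtx3090)
print(round(combos, 2))  # 2.98 -> roughly "three times 12900K + RTX 3090"

# The "75% TDP, 96% performance" claim implies this perf/W gain:
print(round(0.96 / 0.75, 2))  # 1.28 -> ~28% better perf per watt
```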

          Comment


          • #65
            Originally posted by Developer12 View Post
            Since the dawn of the Pentium, x86 has always spent more than double the number of transistors to achieve the same performance.
            Must be why x86 chips, despite having lower transistor density (because they are built on an inferior node), are still faster than ARM, right? Or why the fastest supercomputer is x86-based, huh?

            I don't think performance means what you think it does. If you mention power efficiency, please never touch the internet again.

            Apple fanboys are more delusional than clowns eating Russian propaganda.
            Last edited by Weasel; 07 June 2022, 07:47 AM.

            Comment


            • #66
              Originally posted by piotrj3 View Post

              The 12900K is around ~10 billion transistors (no official figure, and that is likely an overestimate).
              The RTX 3090 is 28.3 billion transistors (official figure).

              The M1 Ultra is 114 billion transistors. In a nutshell, in the silicon area of one M1 Ultra you can fit three 12900K + RTX 3090 combos.
              Doesn't the 114b number include memory and various I/O modules? I understand your point, but it's not an apples-to-apples comparison to begin with...

              Comment


              • #67
                Quick question for the people here claiming that the M1 is great and ARM is the future: do you have an M1 Mac?
                My experience with an M1 Pro with 32GB is not great, and I think it is an inferior machine CPU-wise; using x86 binaries is a horrible experience (for me).
                Every app startup (native or via Rosetta) reminds me of Firefox with snap.

                Comment


                • #68
                  Originally posted by kgardas View Post
                  - RAM integration -- fantastic choice for *common* case. Hmm, in comparison with my Xeon W with 256GB RAM, M1/2 is still just a toy right? -- but for *common* *consumer* workload, fantastic.
                  I've been saying this for a while, but if Intel made a similar consumer CPU package that had 16GB of very-high-speed RAM next to the processor on one bus, with any additional user-added RAM hanging off a CXL link, I think they'd sell like hotcakes and reduce their costs. They could make one part that covered 90% of desktop/laptop use cases, and maybe laser off half of the RAM or a few cores for the 'low-end' models.

                  I really don't think Intel is doing themselves any favors by making 80+ flavors of Alder Lake for every use case. Just make the one I described above for 'casual computing' (with a handful of clock limits to stratify the market) and call them "Evo '22 [7/5/3]".
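                  As a toy model of why such a tiered layout could work, here is a sketch of average access latency with a fast on-package tier plus CXL expansion; every number below is an illustrative assumption, not a measured value:

```python
# Hypothetical two-tier memory: fast on-package DRAM plus CXL-attached DRAM.
# All latencies and hit rates are made-up illustrative values.
on_package_ns = 70.0   # assumed load-to-use latency, on-package tier
cxl_ns = 170.0         # assumed latency over a CXL link
hot_fraction = 0.90    # assume 90% of accesses hit the 16GB hot tier

avg_ns = hot_fraction * on_package_ns + (1.0 - hot_fraction) * cxl_ns
print(avg_ns)  # 80.0 -> close to on-package latency despite the slow tier
```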

                  Comment


                  • #69
                    Originally posted by lilunxm12 View Post

                    Doesn't the 114b number include memory and various I/O modules? I understand your point, but it's not an apples-to-apples comparison to begin with...
                    As far as I know, it doesn't include memory. But I/O is included on the Intel 12900K too. I'd dare say more complicated I/O, in fact.

                    Comment


                    • #70
                      Originally posted by qarium View Post
                      this all has a logical error: Linux on the Apple M1 is faster in CPU tasks than macOS...

                      Apple can compensate for the inferior product "macOS" with their good hardware.
                      Originally posted by Developer12 View Post
                      your point about lock-in would almost be true, if it weren't that Linux seems to get the same performance on the M1.
                      You both realize that you can't yet daily-drive Linux on an M1 Mac, right? Running a few synthetic benchmarks through a primitive interface isn't exactly an apples-to-apples (pun intended) comparison.
                      This is the same sort of thinking as back in the day when games ran faster in WINE than on Windows, because WINE simply lacked some rendering capabilities: it was essentially running the games at a lower detail level.
                      Strip macOS down to just a command line (which, last time I checked, was actually possible; not sure if it still is) and I'm sure the benchmarks would turn out roughly the same.
                      Run Linux with GNOME or KDE with compositing effects on, benchmark graphical programs like Chrome or a game, and Linux isn't going to have such an obvious lead anymore.

                      I'm not favoring Apple here; I'm just saying that when all things are equal, they do a damn good job optimizing form and function. We can whine about their closed nature all day, but they know what they're doing. Apple might not have the most optimal solution for each individual program, but their forced homogeneity allows more complex software to run more efficiently. In the modern world, everything is layered with abstraction. Apple is effectively removing some of those layers.

                      but just think about this: what if Apple switched to the Linux kernel?...
                      It's loosely based on a BSD kernel. Some here would argue BSD is even faster than Linux.
                      Last edited by schmidtbag; 07 June 2022, 09:19 AM.

                      Comment
