Apple Announces Its New M2 Processor


  • #71
    Originally posted by piotrj3 View Post
    According to some Intel engineer, the efficiency cost of x86 being an old architecture is about 5%. Internally the processor is RISC-like anyway; the only extra work is decoding CISC instructions into RISC-like micro-ops.
    That doesn't make sense; x86 is from 1978 and ARM is from 1983. How can 5 years make such a difference when both are roughly 40 years old?
    Also, both architectures have changed dramatically over time and are no longer really compatible with their first iterations.
    The 12900K is around ~10 bln transistors (no official data, but that is likely an overestimate).
    The RTX 3090 is 28.3 bln transistors (official data).
    The M1 Ultra is 114 bln transistors. In a nutshell, in the silicon of one M1 Ultra you can fit three times a 12900K + RTX 3090.
    That's hardly comparable. It's also hard to find exact numbers; I took the M1 die shot and compared it to a Zen 2 compute die. The latter has 8 cores and 32 MB cache with 3,800,000,000 transistors, while the M1's CPU portion has 4+4 cores, 12 + 4 MB cache and roughly 2,100,000,000 transistors (just estimated from the die shots and the official 16 bln total).
    If you consider that the M1 doesn't have AVX and other expensive extensions, I would bet that the performance core uses more transistors than a Zen 2 core (without AVX), and it kind of has to, else you wouldn't get the same performance at lower clock speeds.
    Over half of the M1 die is just I/O stuff, controllers, NPU, DSP, ... and 1/4 is the GPU. I couldn't find exact data for the Pro/Max variants, but I guess that mainly the CPU and GPU got bigger and the rest stayed roughly the same.
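
    To put rough numbers on that bet, here is a quick back-of-the-envelope sketch in Python. The per-core split is my own simplification (it lumps the caches in on both sides and charges the whole M1 cluster to the four performance cores), so treat it as an illustration, not data:
    Code:
# Figures from the estimates above; only the AMD count is official.
zen2_ccd_transistors = 3_800_000_000        # 8 cores + 32 MB L3
zen2_cores = 8

m1_cpu_cluster_transistors = 2_100_000_000  # 4P + 4E cores, 12 + 4 MB cache (die-shot estimate)
m1_performance_cores = 4                    # crude: charge everything to the P-cores

print(f"Zen 2: ~{zen2_ccd_transistors / zen2_cores / 1e6:.0f}M transistors per core")
print(f"M1:    ~{m1_cpu_cluster_transistors / m1_performance_cores / 1e6:.0f}M per P-core (upper bound)")
    Even this crude split lands both in the same ballpark, which is the point: the M1's per-clock performance isn't free.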

    Comment


    • #72
      Originally posted by Anux View Post
      How can 5 years make such a difference when both are roughly 40 years old?
      Also, both architectures have changed dramatically over time and are no longer really compatible with their first iterations.
      I think these questions sort of answer themselves when put together. Intel spent 25+ years developing x86 into a performance platform, then the last 15 or so making it efficient. ARM was prioritizing efficiency the whole time, and hadn't seen a PC-class CPU until the M1.

      Comment


      • #73
        Originally posted by mangeek View Post

        I think these questions sort of answer themselves when put together. Intel spent 25+ years developing x86 into a performance platform, then the last 15 or so making it efficient. ARM was prioritizing efficiency the whole time, and hadn't seen a PC-class CPU until the M1.
        So how was Apple able to close ARM's performance gap in 7 years without sacrificing efficiency, while AMD and Intel couldn't optimize for efficiency in 20 years?
        There clearly were many times when Intel and AMD prioritized efficiency: first AMD with the Athlon XP, and later Intel with the Pentium M. I would love to see efficiency comparisons of those against the ARM11. Atom was also a big push in that direction, but it tried to compete on price with ARM and was therefore doomed to use old Intel manufacturing processes.

        My bet (I don't know for sure, else I would already work for one of those companies) is that the most efficiency is gained by shorter pipelines, using the newest process, and staying in the 2-3 GHz range, and that's something you can do with every architecture. You can see at least one of those three in every attempt at a low-power implementation of any arch.
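
        The 2-3 GHz sweet spot has a simple physical reason: dynamic power scales roughly as C * V^2 * f, and voltage has to rise with clock speed. A minimal sketch, with made-up but plausible voltage/frequency pairs rather than measurements from any real chip:
        Code:
# Dynamic power ~ C * V^2 * f; V must rise with f, so power grows
# much faster than performance at high clocks.
# The (GHz, volts) operating points are invented for illustration.
operating_points = [(2.0, 0.80), (3.0, 0.95), (4.0, 1.15), (5.0, 1.35)]

for freq_ghz, volts in operating_points:
    relative_power = volts ** 2 * freq_ghz  # drop the constant C
    perf_per_watt = freq_ghz / relative_power
    print(f"{freq_ghz:.1f} GHz @ {volts:.2f} V -> power x{relative_power:.2f}, perf/W x{perf_per_watt:.2f}")
        Performance per watt falls off quickly above ~3 GHz no matter the ISA, which is exactly the pattern in those low-power designs.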

        Comment


        • #74
          Let me get this straight: Apple keeps marketing at "unordinary people" who do not know the difference between hardware and software acceleration. Nvidia is moving away from its gaming ties toward enterprise (data centers and miners) and is targeting NVLink, CPUs and other designs. As usual, Nvidia is lying to investors and consumers, but this time they were caught doing it. It didn't make the news, meaning people are OK with Nvidia lying to them, so they will keep doing it. AMD still sucks at marketing and is targeting HPC and laptops while trying to improve desktop gaming with higher CPU clocks and improved GPUs. What the **** is Intel doing? Intel delayed CPUs over and over, and now they have gone into the GPU space just to delay GPUs over and over. I expected to see industrial chillers, but Intel didn't even show those off... disappointed!

          Did anyone else notice that the Apple marketing in the post said 6K external screen support, while Michael said the media engine supports 8K... is that for the unordinary people editing 8K footage on a 1080p screen? Also, why did Apple have to get someone who doesn't speak English to prove that gaming is good on macOS... ROFL!

          Seriously though... I am looking forward to the new hardware releases coming out in the next 12 months. I think fanboys all around will be very confused as to who they need to be nitpicking over.

          Somewhat off-topic: I put together a system with mostly 10-year-old parts. The system cost ~130 USD and it can play new games at high/ultra graphics settings at 1080p.

          Comment


          • #75
            I have only one question: does it need a fan?

            It might seem like a detail, but I sincerely think it's a massive element if it doesn't. Removing nearly all mechanical components of PCs, especially in laptops, is a serious step into the future.

            Comment


            • #76
              Originally posted by Anux View Post
              That doesn't make sense; x86 is from 1978 and ARM is from 1983. How can 5 years make such a difference when both are roughly 40 years old?
              Also, both architectures have changed dramatically over time and are no longer really compatible with their first iterations.

              That's hardly comparable. It's also hard to find exact numbers; I took the M1 die shot and compared it to a Zen 2 compute die. The latter has 8 cores and 32 MB cache with 3,800,000,000 transistors, while the M1's CPU portion has 4+4 cores, 12 + 4 MB cache and roughly 2,100,000,000 transistors (just estimated from the die shots and the official 16 bln total).
              If you consider that the M1 doesn't have AVX and other expensive extensions, I would bet that the performance core uses more transistors than a Zen 2 core (without AVX), and it kind of has to, else you wouldn't get the same performance at lower clock speeds.
              Over half of the M1 die is just I/O stuff, controllers, NPU, DSP, ... and 1/4 is the GPU. I couldn't find exact data for the Pro/Max variants, but I guess that mainly the CPU and GPU got bigger and the rest stayed roughly the same.
              You can't say it like that. For example, Zen 3 has 24 PCIe lanes, 4x USB 3.2 Gen 2 and a dual-channel memory controller directly on the CPU. Those things are not wired through the chipset but directly to the CPU, and that is on desktop. On laptops, Ryzen 6000 and Intel also have stuff like Wi-Fi 6E, Ryzen 6000 has the Pluton security core, fTPMs are on both, and there are tons more features I have probably forgotten. So saying that Apple has I/O on die ignores the fact that Intel and AMD also have tons of I/O on die. If you look at the cores of the 11900K (it has pretty good die shots), you will see the cores are not even half the die, more like a quarter. There is an onboard GPU, an interconnect, a memory controller and tons of I/O.

              Here is a die shot of Alder Lake:

              [attached image: lyhmdzo6c3w71.jpg]
              Yes, you could say there is a neural engine, but hint: the RTX 3090 has shittons of tensor cores and for AI it is by far the fastest general-purpose card outside of the A100. In that comparison we also have an integrated Intel GPU that is pretty much unused. So in that ~38 bln transistor count of the RTX 3090 + Intel 12900K, a lot of those transistors are practically never used. In general it feels like Apple is actually fairly wasteful with their transistors. Yes, the M1 has some impressive feats, like being able to connect high-performance devices to all of those USB ports without throttling, or a single M1 core having access to the full memory bandwidth. So there is stuff Apple benefits from. Still, it is mostly wasteful when you compare it to the 12900K + RTX 3090 transistor count.
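
              For what it's worth, the "three times a 12900K + RTX 3090" arithmetic from earlier in the thread does check out if you take the numbers at face value (keeping in mind the 12900K count is an unofficial guess):
              Code:
m1_ultra = 114e9    # official Apple figure
rtx_3090 = 28.3e9   # official Nvidia figure
i9_12900k = 10e9    # unofficial estimate from above, likely high

combo = rtx_3090 + i9_12900k
print(f"12900K + RTX 3090: {combo / 1e9:.1f} bln transistors")
print(f"M1 Ultra budget:   ~{m1_ultra / combo:.1f}x that")  # ~3.0x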

              Comment


              • #77
                Originally posted by Mahboi View Post
                I have only one question: does it need a fan?

                It might seem like a detail, but I sincerely think it's a massive element if it doesn't. Removing nearly all mechanical components of PCs, especially in laptops, is a serious step into the future.
                Ditching the fan leaves a lot of performance on the table.

                Comment


                • #78
                  Originally posted by Slartifartblast View Post
                  The last Apple product I ever owned was an Apple III (yes, a 3)
                  The last one I used was a Mac SE running System 7 that my father brought home from work and, aside from being unsettled by the lack of a command prompt (when my age was measured in single digits, for the record), I enjoyed it.

                  Originally posted by Slartifartblast View Post
                  and I can proudly say I've never owned one since.
                  I'm keeping my eye out for a good price on something that can run pre-OSX Mac OS to add to my retro-hobby corner. (Preferably something with no integrated monitor, so I can use an adapter cable to put it on the same KVM that already shares the keyboard, mouse, and screen between my DOS/Win311/Win98SE P133 and my Win98SE/WinXP Athlon64.) (I still need to save up for a good scan converter so I can add my trash-picked Atari ST to the mix.)

                  As a platform for work I wouldn't use a Mac, but as something to add to my non-console "wall o' consoles", classic Mac OS does have a timeless draw to its UI design.

                  Comment


                  • #79
                    Originally posted by piotrj3 View Post

                    You can't say it like that. For example, Zen 3 has 24 PCIe lanes, 4x USB 3.2 Gen 2 and a dual-channel memory controller directly on the CPU. Those things are not wired through the chipset but directly to the CPU, and that is on desktop.
                    No, that's only Zen 1; all later Zens have a separate I/O die, which is why I took a Zen 2 compute die.

                    So there is stuff Apple benefits from. Still, it is mostly wasteful when you compare it to the 12900K + RTX 3090 transistor count.
                    The M1 also has an SSD controller, an additional 8 MB of cache, a DSP, an image processor, ... and it fights with a 5 GHz CPU while running at 3 GHz; that is only possible with more transistors.
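
                    The rough arithmetic behind that last point, using the thread's round clock numbers (performance ~ IPC x clock, everything else hand-waved away):
                    Code:
# Matching per-core performance at a lower clock requires proportionally
# higher IPC -- and IPC costs transistors (wider decode, bigger reorder
# buffers, more cache). Clocks below are the thread's round numbers.
x86_boost_ghz = 5.0  # 12900K-class boost clock
m1_clock_ghz = 3.0   # M1 performance-core clock, roughly

ipc_ratio_needed = x86_boost_ghz / m1_clock_ghz
print(f"The M1 needs ~{ipc_ratio_needed:.2f}x the IPC to match per-core performance")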

                    Comment


                    • #80
                      Originally posted by grung View Post
                      Quick question to the people here claiming that the M1 is great and ARM is the future: do you have an M1 Mac?
                      My experience with an M1 Pro 32 GB is not great; I think it is an inferior machine CPU-wise, and using x86 binaries is a horrible experience (to me).
                      Every app (native/Rosetta) startup reminds me of Firefox with snap.
                      Due to temporary migration issues, my work issued me both an Intel Mac and an M1 Mac. My experience is that the M1 is vastly superior. Docker images build faster, tests run faster, Google Meet doesn't make the computer hot, and the battery lasts at least twice as long. I'd also mention that neither the builds nor Meet make the fans spin on the M1, but ermmmm, that would be cheating.
                      That said, the Intel one seems to be a lesser model. I'm not an expert in Macs, but both are 13" MacBook Pros; the M1 is from 2020 with 16 GB of RAM and a 512 GB SSD, and the Intel is from 2019 with 8 GB of RAM and a 128 GB SSD. So it may very well just be the extra RAM that makes the difference; I can't really tell (and the disk makes a huge difference in that I don't need to rebuild images as often because the disk gets full :^) ).
                      Also, I don't run x86 binaries, so I have no idea how those behave.

                      Comment
