Apple Announces The M1 Pro / M1 Max, Asahi Linux Starts Eyeing Their Bring-Up


  • #91
    Also, here's more evidence that Apple's advantage isn't simply a matter of process node. The Snapdragon 865 and Apple's A13 were both made on TSMC's N7P process:


    At the bottom of that page are the aggregates, where we see the A13 does burn more power, but also delivers a lot more performance. In SPECint 2006, it delivers 29% better efficiency (4.74 vs. 3.67 SPEC marks per kJ). In SPECfp 2006, it delivers 11% better efficiency (11.18 vs 10.09 SPEC marks per kJ). However, do note the disclaimer about their power figures, at the top of the page, which calls into question the accuracy of Snapdragon's energy usage metrics. There's no denying the A13's performance advantage (58.5% and 36.8% for int & fp, respectively), but its efficiency could actually be even better (or worse) than what these figures indicate.
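    Those efficiency deltas follow directly from the quoted marks-per-kJ figures. A quick sketch of the arithmetic (the helper name is mine, not from the benchmark pages):

```python
# Efficiency advantage of the A13 over the Snapdragon 865, computed from
# the SPEC-marks-per-kilojoule figures quoted above.
def pct_advantage(a13_per_kj, sd865_per_kj):
    """Relative advantage of the A13, as a percentage."""
    return (a13_per_kj / sd865_per_kj - 1) * 100

print(f"SPECint 2006: {pct_advantage(4.74, 3.67):.1f}%")    # ~29.2%
print(f"SPECfp 2006:  {pct_advantage(11.18, 10.09):.1f}%")  # ~10.8%
```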

    Of course, the cores in the M1-series chips are A14-derived. The only reason to look at the above benchmarks was to investigate the claim that Apple's advantage is simply due to process node.

    Of course, there are other factors in Apple's favor, which have been mentioned on preceding pages. They can afford to make larger chips with bigger caches, for instance. Perhaps we could independently investigate that question, if we knew how well individual SPEC tests respond to cache size and looked only at those which don't. However, I think it's generally well-established that the SPEC 2006 suite, overall, benefits rather little from increased cache sizes.
    Last edited by coder; 25 October 2021, 02:13 PM.

    • #92
      Originally posted by coder View Post
      Also, here's more evidence that Apple's advantage isn't simply a matter of process node. The Snapdragon 865 and Apple's A13 were both made on TSMC's N7P process:
      I think everyone agrees that Apple's silicon is way ahead of the competing ARM designs. The more interesting comparison at this point is against x86.

      Although it looks very impressive now, it sounds like Intel's Alder Lake-P will come out in a few months and be even faster, while still on a worse manufacturing node. It will be much less efficient, but it's probably too early to say exactly how much (after you adjust for TSMC 5nm).

      AMD is definitely a year behind at this point.

      • #93
        Originally posted by NateHubbard View Post
        OTOH, unsupported hardware makes it difficult for someone to switch operating systems.
        Fair enough!

        • #94
          Originally posted by spykes View Post
          I believe a big part (not the only one) of Apple technical advance is due to their exclusive access to the best manufacturing node at TSMC.
          I really hope Intel will manage to catch up with TSMC at some point; otherwise, x86 PCs will never catch up with Apple's ARM chips.
          Well then you’re just ignorant. And willfully so.
          I’ve written 300 pages of technical detail on how Apple get their performance, but you’d rather ignore all that?

          • #95
            Originally posted by zeealpal View Post

            This has always been the part I don't get: yes, Apple designs CPUs well, but a decent proportion of their advantage comes from the amount of silicon they use, and from using the latest nodes. That lets them make fatter, slower-clocked cores with higher IPC to get power savings, but if competitors used the same transistor count on the same process node, most of the advantage would probably not stick.

            Ryzen 58XX/59XX mobile APUs (8C/16T, with caches, IO, and an 8-CU GPU) total 10.7b transistors, while the M1 Pro is 37b and the M1 Max is 57b. AMD could literally triple the core count and GPU size and still have a smaller chip.

            Obviously Apple has a lead, but a lot of that is $$ spent on silicon area: in the price-vs-performance-vs-area tradeoffs of chip design, Apple has just said fuck it to area to win the rest, and absorbs the higher cost of the M1 into the product's price. As a consumer buying the products, that's great, but I don't think the advantage will last, especially as they still have a monolithic chip design.
            Again, massively ignorant.
            Apple’s chips are just not that large compared to alternatives in the same space. They use more transistors — but that’s because INTC and AMD *choose* to use high-speed (and very large) transistors rather than slower, smaller ones.

            Compare Apple area to the area of a high-end INTC desktop chip plus an equivalent dGPU…
            And you’d still be missing a lot of what Apple has. Not just the NPU and ISP stuff, of course, but also the security stuff and the DMA stuff. You’re starting from “I know what the truth must be, therefore Apple …” rather than “Let’s go out and find out about Apple, then make up my mind.”

            • #96
              Originally posted by sdack View Post
              Apple just seems to throw existing technologies together into single chips and call it a day, but does not innovate any actual new technologies. That they leave to other companies.

              So it is interesting to see that there is no word on persistent-memory technologies in the design presentations, while Intel is pushing them and will no longer put its faith in DDR alone, but support HBM as well.

              This just tells me that Intel understands the need for faster memory, but also that data needs to be stored somewhere, and how to avoid bottlenecks further down in the coming architectures. Or ask yourself: what is the point of 200 or 400GB/s memory transfer rates when you only have 64GB of RAM and need half a minute to load and save your work?

              These new M1 Apples are certainly interesting and a hot topic, but I am not much impressed by them. After following 40 years of changes in computer architecture, this is nothing more than a modern SoC design to me. I am still more often looking to Intel (and AMD) to see what technologies are coming next.
              What counts as an “actual new technology”?
              We (ie the human race) have a pretty good idea of the detailed micro-architecture of Apple cores and it is unlike anything I’ve seen anywhere else — not INTC or AMD, not POWER or Z, not ARM. So where exactly do you think they got it from?
              Like most commenting here, you’re certain you know exactly how an Apple CPU or SoC works, though you don’t have a clue. A year ago that was justifiable; but it no longer is.

              • #97
                Originally posted by name99 View Post
                I’ve written 300 pages of technical detail on how Apple get their performance,
                Hello. I've seen you post on Anandtech & realworldtech. Thanks for your scholarship and sharing it with all of us!

                A lot of people here won't have followed your writing, so I think it would be helpful to share your links before going on the offensive.
                : )

                • #98
                  Originally posted by coder View Post
                  Hello. I've seen you post on Anandtech & realworldtech. Thanks for your scholarship and sharing it with all of us!

                  A lot of people here won't have followed your writing, so I think it would be helpful to share your links before going on the offensive.
                  : )
                  Strange how I'm the guy who's in the wrong for making a correction, not the guy who speaks loudly from a position of ignorance...

                  I'll post the link here, but honestly I fully expect most people to refuse to read it, or to refuse to believe what they read. That's the world we live in -- tribalism trumps facts for 90% of humans.
                  Sorry to be so negative, but this is not my first rodeo.

                  https://drive.google.com/file/d/1WrM...L_bjuJSPt/view

                  • #99
                    Originally posted by name99 View Post
                    Strange how I'm the guy who's in the wrong for making a correction, not the guy who speaks loudly from a position of ignorance...
                    Nah, bro. It's not like that. I fully respect you and your work. I'm just trying to be helpful in sharing the insights you've made and publicized.

                    If you look at earlier pages, you'll see that I've tried to do the same & cited your work in the process. I'm sorry if it seemed like a reprimand. I was pleasantly surprised to see you posting here!
                    : )

                    Originally posted by name99 View Post
                    I'll post the link here, but honestly I fully expect most people to refuse to read it, or to refuse to believe what they read. That's the world we live in -- tribalism trumps facts for 90% of humans.
                    I understand your perspective, but I think the dividing line isn't that clear. There are definitely people who cling to their preconceived notions above all else, but there are some with a bias who are more easily enlightened.

                    A few years ago, I was an Apple skeptic, myself. It's only through repeated exposure to information that challenged my assumptions that I came around to see and accept what they've accomplished. Many here don't follow hardware development as much, and won't have had the exposure that I've experienced.

                    Yes!
