Apple M1 ARM Performance With A 2020 Mac Mini


  • Originally posted by blackshard View Post
    AnandTech is comparing against 10th gen Intel CPUs (which IMHO have the same IPC as 9th gen, but with more aggressive turbo thanks to higher power limits) and Zen 3; do you have any valid proof of what you say?
    I have direct proof through Geekbench 5 scores. I'd give you a link, but my last post just got blocked, apparently due to the inclusion of links. So PM me if you want to know more.

    Originally posted by blackshard View Post
    Also the AnandTech review says exactly the opposite of what you say: single-core performance is great on the M1 (https://www.anandtech.com/show/16252...le-m1-tested/4), multicore is less great. All of this without any turbo boost or the other crappy trickery Intel put in the field to mask the limits of x86, and that AMD needed to follow.
    You're not reading my post correctly. "When compared against top-line CPUs that can be had at the same cost, like the i7-10875H, the M1 mostly loses outside of single-core performance." See?

    Originally posted by blackshard View Post
    Power efficiency is not battery life matter...
    I used to run server farms with over a hundred nodes, so I understand the impact of power and thermal constraints. EDIT: I had originally estimated the SoC could approach 70W peak power, but I was directed to the analysis by AnandTech, which shows my estimate was too high. I now estimate that peak combined CPU + GPU power consumption probably would not exceed 39W, and AnandTech showed a peak draw of 31W. Thanks to PerformanceExpert for the guidance.

    EDIT: Forbes shows a battery life of just 4.5hrs in real life. That's an *average* power draw of 13W, which is worse than my i7-10875H laptop (12W), and with roughly 50% less runtime (7hrs). So it appears the *real* power draw may be higher than the hand-picked Apple press is reporting.
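    As a quick sanity check on that math, here is a back-of-the-envelope sketch. The battery capacities are my own assumptions (assuming Forbes tested the 13-inch M1 MacBook Pro with its 58.2 Wh battery, and that my i7 laptop has roughly an 84 Wh pack), not figures from the review:

    Code:
    # Back-of-the-envelope: average power implied by battery capacity and runtime.
    # The battery capacities below are assumptions, not measured values:
    #   58.2 Wh = 13-inch M1 MacBook Pro battery
    #   84.0 Wh = rough guess for the i7-10875H laptop (12 W x 7 h)
    def avg_power_w(battery_wh: float, runtime_h: float) -> float:
        """Average draw implied by draining a battery over a given runtime."""
        return battery_wh / runtime_h

    print(f"M1 MacBook Pro: {avg_power_w(58.2, 4.5):.1f} W")  # ~12.9 W, i.e. ~13 W
    print(f"i7-10875H:      {avg_power_w(84.0, 7.0):.1f} W")  # ~12.0 W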
    Last edited by deppman; 22 November 2020, 03:36 AM. Reason: Add real review results



    • I have been writing for a while now, and getting flamed like a Whopper hamburger for it, that the Age of ARM is upon us and that the FOSS and Linux world had better ball up and start acting like it.

      Now comes this very interesting article from someone who looks at tech monopolies but now sees a VERTICAL monopoly of sorts developing from within the Tech Giants. From this guy's perspective, the x86 world has failed to keep up with the growth in compute loads, especially those that demand AI, machine learning, or acceleration using FPGAs.

      He sees all the Tech Giants building out custom chips and SoCs for THEIR specific needs, increasingly bypassing the more generalized offerings from Intel and AMD that are not optimized for those workloads. I have been harping on this for years, and also telling folks you had better wake up to what Apple was going to do and what they have now done: permanently leave the x86 world, and Intel specifically, to go with ARM and their own custom version of it.

      Here are some snippets. The link to the entire article is below. Enjoy.



      There is a shifting relationship between the largest software companies in the world and their suppliers, and as the leading software companies have become ever-larger portions of the compute pie, it’s kind of become the problem of the tech companies, and not the semiconductor companies that service them to push forward the natural limits of hardware. Software ate the world so completely that now the large tech companies have to deal with the actual hardware that underlies their stack.

      Especially as some companies like Intel have fallen behind.

      So clearly this is top of mind at many of the tech companies around the world. I had wanted to write about this in February of this year, but just a few months later and we’ve seen the thesis play out in a big way. Apple’s M1/AX chips, AWS’s Graviton, Azure’s Catapult, and heck even Facebook is rumored to be starting their own chip platform. I don’t see it stopping any time soon, in fact, I think this will accelerate.

      I believe that in a few years, most of the large tech companies will have a much tighter level of integration and we will likely see much less “commoditized” platforms. Yes, they might run on partially open stacks (think open networking roadmap and Facebook) but their differentiation is going to be not only software but also hardware. We are going back to the old patterns of integration of both Software and Hardware.

      Microsoft’s silicon plays are a lot quieter than the others, and I only have really started to peel the onion back after learning about Inphi. In particular, they have a relatively novel datacenter peering strategy dependent on ColorZ, but I also believe that they will start to walk the way of custom silicon very soon. An example of this is the ARM-based surface that Microsoft has been designing. This is pretty striking if you remember the Wintel alliance, as it seems that Microsoft is willing and ready to give the marriage up.

      In particular, the platform they have talked the most on, and have the most progress for custom silicon is the edge. Azure Sphere in particular is a new platform that is anchored by their Pluton chip to improve security and is a highly opinionated ARM-based ecosystem promising security and performance.

      Just imagine now that you are an entrant, trying to sell IaaS, maybe like Digital Ocean (huge fan). If Intel and AMD chips are all that you can use, you better pray and hope their roadmaps are strong, because now that your competitors are able to create and expand their own roadmaps faster than the large semiconductor platforms, you may be forced to eventually buy from them or just be at a structural gross margin disadvantage. You could offer identical services but make worse profits, just on the basis that you don’t make your own chips. If they lower prices, you could even lose money! You cannot compete.

      But before we cry wolf, there is a company that is pretty well aware of this and is now the largest post-Intel semiconductor company around; Nvidia. Their acquisition of ARM is really important, and while it was expensive, ARM is going to inevitably be embedded into every single roadmap I mentioned above. In fact, the majority of the custom products are ARM-based, and Nvidia knows this. Nvidia is positioning itself as a large and independent silicon platform in the AI age. Like the Intel of yesteryear. Nvidia now will be a relevant company no matter what happens with the tech platforms pushing forward.


      https://mule.substack.com/p/the-tech...es-go-vertical



      • Originally posted by geearf View Post

        I am sorry if I misunderstand, but I thought the post I was replying to said there was already translation from CISC to RISC, happening directly on the CPU. Was that wrong?

        Thank you!
        The point I was making is that any translation adds overhead, so it's always faster to run native code. The performance difference is therefore purely due to the CPU being faster.

        No, strictly speaking there is no CISC to RISC translation happening. CPUs decode instructions into micro-ops. These are very similar to the original ISA, but fixed-width and more orthogonal. Most instructions are 1 micro-op, but complex instructions may take 2 or more. It's wrong to equate micro-ops with RISC since they still contain all of the properties of the original ISA. So it isn't possible to add an Arm decoder to an x86 core or an x86 decoder to an Arm core. The differences are not purely limited to the decoder (as often claimed) but all over the CPU.
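        To make the micro-op point a bit more concrete, here is a purely illustrative toy model (the instruction and micro-op spellings are invented; this is not any real core's decoder): simple instructions map to a single micro-op, while a read-modify-write instruction is cracked into a short fixed-format sequence, yet those micro-ops still carry the semantics of the original ISA.

        Code:
        # Toy illustration only: a decoder cracking instructions into fixed-format
        # micro-ops. Instruction and micro-op spellings are invented for the example.
        DECODE_TABLE = {
            "add rax, rbx":   ["uop.add rax, rax, rbx"],   # simple op -> 1 micro-op
            "mov rcx, [rdx]": ["uop.load rcx, [rdx]"],     # simple op -> 1 micro-op
            "add [rbx], rax": ["uop.load  t0, [rbx]",      # complex read-modify-write op
                               "uop.add   t0, t0, rax",    # -> cracked into a load/ALU/store
                               "uop.store [rbx], t0"],     #    sequence of 3 micro-ops
        }

        def decode(instruction: str) -> list[str]:
            """Return the micro-op sequence for one instruction."""
            return DECODE_TABLE[instruction]

        for insn, uops in DECODE_TABLE.items():
            print(f"{insn:16} -> {len(uops)} micro-op(s): {uops}")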



        • Originally posted by deppman View Post
          I used to run server farms with over a hundred nodes, so I understand the impact of power and thermal constraints. However, further analysis shows the "10-20W" claim being recited here can only apply to modest desktop usage. The power required under load will be much greater. The CPU idles at 4W, rises to 20W under load, and will probably peak at 25W. The GPU will likely add another 20-40W on top of that. So we could see the M1 approaching 70W peak power. Again, this is good, just not as revolutionary as Apple would have you believe. Yes, I have sources. PM me if you want to know more.
          Peak all-core system power is about 31W. Peak system power with the GPU loaded is about 22W according to AnandTech. So obviously nowhere near 70W! Clearly the CPU cores and GPU are very efficient, and the on-package DRAM helps reduce power as well.



          • Originally posted by artivision View Post
            For those who cannot count cores: Apple's benchmarks come from 4 cores, not 8. Big out-of-order cores cannot be used along with small in-order cores, and even if they could, those 4 small cores together would only be on par with one big core. Apple's processor is the best commercial processor SoC by far.
            You are right for Android, where it's *EITHER* 4 big or 4 small. However, Apple makes all 8 visible to the OS. If you look at AnandTech's SPEC 2017 FP numbers, there's an entry for using 4 cores and one for using 8.



            • Originally posted by BillBroadley View Post

              You are right for Android, where it's *EITHER* 4 big or 4 small. However, Apple makes all 8 visible to the OS. If you look at AnandTech's SPEC 2017 FP numbers, there's an entry for using 4 cores and one for using 8.
              Kinda OT: I'd be willing to bet Apple spent a huge effort optimizing the pipeline just for SPEC, which would mean the SPEC results say little about its true performance on any other workload. I could totally see Apple tweaking the pipeline specifically to get the highest score possible... Relying on a SPEC result in this case to make a judgement almost certainly won't reflect real-world results. Just look at this article as a perfect example: Apple's product totally dominates the SPEC benchmark, but then loses almost every single real benchmark that SPEC supposedly reflects... It's pretty obvious from this review alone that SPEC results for this product don't reflect actual performance.

              I feel the only way to get a true picture of its performance is to benchmark the real thing. If you want to know how GCC performs, benchmark it. If you need to know about x264, benchmark it. If you need to know about Blender, benchmark it... And this article proves it to me...
              Last edited by duby229; 22 November 2020, 01:57 AM.



              • Originally posted by PerformanceExpert View Post

                The point I was making is that any translation adds overhead, so it's always faster to run native code. The performance difference is therefore purely due to the CPU being faster.

                No, strictly speaking there is no CISC to RISC translation happening. CPUs decode instructions into micro-ops. These are very similar to the original ISA, but fixed-width and more orthogonal. Most instructions are 1 micro-op, but complex instructions may take 2 or more. It's wrong to equate micro-ops with RISC since they still contain all of the properties of the original ISA. So it isn't possible to add an Arm decoder to an x86 core or an x86 decoder to an Arm core. The differences are not purely limited to the decoder (as often claimed) but all over the CPU.
                Yes I understood that point, that's pretty much the one I was making in my original post.

                As for the explanation, I got everything but "ISA". I have a vague understanding of the term, but if you don't mind, I'd appreciate it if you could explain what you meant by "original ISA".

                Thanks!



                • Originally posted by BillBroadley View Post

                  You are right for Android, where it's *EITHER* 4 big or 4 small. However, Apple makes all 8 visible to the OS. If you look at AnandTech's SPEC 2017 FP numbers, there's an entry for using 4 cores and one for using 8.
                  Correct. Apple Silicon SoCs have been able to use all cores, high-performance and low-power, simultaneously since the A11, which debuted in 2017 in the iPhone 8, 8 Plus and X.

                  From the wiki

                  " The A11 uses a new second-generation performance controller, which permits the A11 to use all six cores simultaneously,[9] unlike its predecessor the A10. "

                  https://en.wikipedia.org/wiki/Apple_A11




                  • Originally posted by PerformanceExpert View Post
                    Peak all-core system power is about 31W. Peak system power with the GPU loaded is about 22W according to AnandTech. So obviously nowhere near 70W! Clearly the CPU cores and GPU are very efficient, and the on-package DRAM helps reduce power as well.
                    Sorry, I had not seen those numbers and had instead extrapolated from the Ars figure (22W, CPU only). In that case, the power efficiency does indeed appear more impressive than I estimated. However, I bet a demanding game would push those numbers higher than 31W (18 + 17 + 4, so perhaps 39W). I will edit my original post accordingly.

                    EDIT: Forbes shows a battery life of just 4.5hrs in real life. That's an *average* power draw of 13W, which is worse than my i7-10875H laptop in both average power draw (12W) and battery life (6hrs). So it appears the *real* power draw may be higher than the hand-picked Apple press is reporting.
                    Last edited by deppman; 22 November 2020, 11:31 PM.



                    • The Age of ARM is here...and not just with Apple.

                      Snippets from an article from one of the leading ChromeOS blogs, Chromeunboxed.

                      " This week, MediaTek announced a whole bunch of stuff at their annual summit, but buried right in the middle of everything they spoke of, there were 2 new chips officially announced that will be added to the current MediaTek SoC (the MT8183 currently on display in the Lenovo Chromebook Duet and a few others) to round out MediaTek’s Chromebook offerings. Gabriel posted about this announcement on Tuesday if you’d like the overall breakdown on those two new chips – the MT8192 and MT8195 – but for today, we want to talk about why these chips are important moving forward.

                      Though there are differences in chips with the same Cortex cores, they aren’t wildly different. In a side-by-side of Cortex-A76 chips from MediaTek and Qualcomm, there’s only a 7% performance gain in Qualcomm’s chip and some of that can be chalked up to the 7nm process in the Snapdragon 855 versus MediaTek’s 12nm in the Helio G90. With these new Chromebook chips from MediaTek being 7nm (MT8192) and 6nm (MT8195) processes, that won’t be nearly as much of a gap.

                      These new ARM chips from MediaTek will only make that experience better and better as seen with little updates to games like PUBG Mobile for Chromebooks with MediaTek ARM chips inside. Where we see this game still struggle on far more powerful Intel-powered Chromebooks, it runs quite well on even the under-powered Lenovo Chromebook Duet. There’s no question about it: Android apps work much better on ARM-powered Chromebooks.

                      Now that we have Qualcomm taking their first dive into the Chromebook waters and MediaTek clearly ready to step fully into the market with their best hardware foot forward, I think the shift to ARM is truly beginning for Chrome OS. Will Google – like Apple – go all in on ARM and leave Intel by the wayside? Not any time soon, if ever. Instead, Chrome OS is an operating system that is flexible enough to handle both ARM and x86 natively without emulators, allowing the user to choose what is more important to them in a device. If absolute power is the goal, you’re likely going to be in the Intel camp for some time. If thin, light, silent, long-lasting Chromebooks that run Android apps like a champ are more your speed, hold on just a bit longer. The ARM revolution is coming for Chromebooks in 2021, starting with MediaTek, and we’re in for a very interesting ride. "


                      https://chromeunboxed.com/mediatek-a...-mt8192-mt8195

