Apple M1 Performance On Linux: Benchmarks Better Than Expected For Its Alpha State


  • #31
    Processor Details - Asahi Linux Alpha: Scaling Governor: apple-cpufreq schedutil
    Imagine the floor-wiping Linux would have done to macOS if only it had been unleashed to show its true performance potential...



    • #32
      Originally posted by vladpetric View Post
      Apple M1 - best processor on the planet at this time. Can issue up to 12 instructions per cycle (+ 4 vector instructions ... ).
      I don't get it. So does the M1 support vector instructions or not? I haven't seen SVE/SVE2 support; does it use some custom extensions?
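      As an aside, here is a minimal sketch of my own (not from the thread) for checking this on an AArch64 Linux box: the kernel advertises the available vector extensions through hwcaps, so you can query them at runtime. It assumes arm64 headers that define these flags.

      /*
       * Query the AArch64 hwcaps exposed by the Linux kernel to see which
       * vector extensions are available on this CPU.
       */
      #include <stdio.h>
      #include <sys/auxv.h>
      #include <asm/hwcap.h>

      int main(void) {
          unsigned long caps = getauxval(AT_HWCAP);

          printf("ASIMD (NEON): %s\n", (caps & HWCAP_ASIMD) ? "yes" : "no");
      #ifdef HWCAP_SVE
          printf("SVE:          %s\n", (caps & HWCAP_SVE) ? "yes" : "no");
      #else
          printf("SVE:          not known to these headers\n");
      #endif
          return 0;
      }

      On Asahi Linux on an M1 this should report ASIMD (NEON) as present and SVE as absent, which matches the answer further down the thread.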



      • #33
        Apple fanboys vs. Apple haters. This is ridiculous, even more so on a Linux website. The M1 is never going to be fully supported by third-party operating systems, but this is a fun experiment. ARM isn't as open-source friendly as x86, even though the latter also has binary blobs and depends on proprietary firmware (coreboot is a minority).



        • #34
          Originally posted by andre30correia View Post
          macOS is really a piece of garbage OS; a lot of benchmarks are faster on Linux. And the story about it being the fastest CPU is another big lie, created only in Geekbench, a benchmark paid by Apple.
          For many years I was on the "Macs are glorified toys" bandwagon. After using Macs (to speedily get dev and managerial tasks done) for the last 5 years, in the words of Barry Kripke: you can go suck a wemon!



          • #35
            Originally posted by xhustler View Post
            For many years I was on the "Macs are glorified toys" bandwagon. After using Macs (to speedily get dev and managerial tasks done) for the last 5 years, in the words of Barry Kripke: you can go suck a wemon!


            I don't know what kind of "infection" you had, but I was forced to use Macs for years and am still forced to use them. I still think they are glorified toys.

            It may be fine for non-technical users who want an easy system, or for certain other kinds of users; I agree it can be quite easy to use for them.

            But for hardcore nerds and tinkerers, it's a walled garden with zillions of limitations.

            Anyway, Windows is the most popular and most used desktop operating system by an extreme margin. So I suppose nothing can be worse than that.
            Last edited by timofonic; 24 March 2022, 05:10 AM.



            • #36
              Looks like macOS is technically still stuck in ≤2010. Isn't it a shame that a third-party OS performs better, even with just basic drivers and no optimizations?

              It's a similar case with the only browser engine allowed on iOS, WebKit, which lags behind in supported standards and features and develops quite slowly. Blink on iOS would embarrass Apple by showing how poor their limited engine is.



              • #37
                Originally posted by timofonic View Post

                I don't know what kind of "infection" you had, but I was forced to use Macs for years and am still forced to use them. I still think they are glorified toys.

                It may be fine for non-technical users who want an easy system, or for certain other kinds of users; I agree it can be quite easy to use for them.

                But for hardcore nerds and tinkerers, it's a walled garden with zillions of limitations.

                Anyway, Windows is the most popular and most used desktop operating system by an extreme margin. So I suppose nothing can be worse than that.
                Hehehehehe... You remind me of my younger self: wallpaper with Tux swatting a Windows butterfly, "Windows sucks, Linux is king" chants and arguments, et cetera. I started playing with Solaris in 1998, then Linux a year later when I was gifted a Slackware Linux disc set. Since then, I have done most of the fanboy stuff plus a ton of Linux nerd stuff.

                There are a gazillion ways to skin a cat; the challenge with the Linux ecosystem in general is having to sift through all those gazillion ways, as hardcore nerds and tinkerers are bound to do, just to get simple tasks done. I'm pushing 45 and run an ISP. Using MBPs as my daily driver has saved me tons of hours while executing my daily tasks. On the other hand, 99.9% of my server infrastructure runs on Linux, and I don't see any other contender in this space.

                Am I elated that Alyssa and the Asahi Linux team are working on Panfrost and getting Linux to run on great hardware (anyone who doesn't think Apple hardware is kickass: suck a wemon!)? Yes, yes and YES.

                That doesn't change the fact that the Linux DE ecosystem is at best unpolished, with hundreds of crappy, UGLY apps which do so much but aren't great at any particular task, save for a handful.



                • #38
                  Why is it so hard to accept that Apple built a great chip? Seriously, it's already cringeworthy when I see AMD-versus-Intel flame wars, but now the M1 is bad too? Why?
                  I've seen enough people "in the know" (as in actual devs) praising the M1. For its TDP it's without rival; the MacBook Air is even passively cooled. Look at x86 laptops that are passively cooled and the performance they offer.
                  That doesn't mean I like Apple at all, but on a technical forum I would expect people to be able to set aside their bias and see that the M1 is a great technical achievement.



                  • #39
                    Originally posted by ldesnogu View Post
                    Just because there are 12 units doesn't mean they can all be fed at the same time. For instance, the rename engine could be limited to 8 instructions per cycle, thus limiting issue per cycle.
                    They can be. (Please read Hennessy & Patterson.)

                    The rename engine is in the frontend; issue is handled by the out-of-order scheduler. The rename engine feeds the instruction window, and the out-of-order scheduler takes instructions from the instruction window and issues them. The instruction window acts as a buffer between the two stages, so the rename width doesn't directly cap how many instructions can issue in a given cycle.

                    Now, you're not going to be able to feed 12 instructions very often (they have to be in the instruction window, with the right mix, and with their inputs ready that cycle). The sustained IPC for workloads without a lot of last-level cache misses or branch mispredicts is still going to be a fraction of the max. But, guess what, the same is true for the Cortex-A72 (the RPi 4). How many integer instructions can the Cortex-A72 issue per cycle? 6. How many vector instructions? 2. The Cortex-A72 also has much worse branch prediction and a much weaker cache hierarchy, though, so in practice I've seen differences in IPC (instructions per cycle, ignoring clock speed) much higher than 2x.
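                    To make the dependency-chain point concrete, here is a minimal C sketch of my own (the array name, sizes and iteration counts are arbitrary, not from the thread). Both loops do the same number of 64-bit additions over a small in-cache array; the first uses a single accumulator, so every add waits on the previous one, while the second spreads the work over four independent accumulators that an out-of-order core can overlap. Timings depend on compiler flags, and aggressive auto-vectorization can blur the difference.

                    #include <stdio.h>
                    #include <stdint.h>
                    #include <time.h>

                    #define N    4096      /* 32 KB of uint64_t, fits in L1/L2 */
                    #define REPS 100000

                    static uint64_t data[N];

                    static double now(void) {
                        struct timespec ts;
                        clock_gettime(CLOCK_MONOTONIC, &ts);
                        return ts.tv_sec + ts.tv_nsec * 1e-9;
                    }

                    int main(void) {
                        for (int i = 0; i < N; i++)
                            data[i] = i;

                        /* One accumulator: every add depends on the previous result. */
                        double t0 = now();
                        uint64_t a = 0;
                        for (int r = 0; r < REPS; r++)
                            for (int i = 0; i < N; i++)
                                a += data[i];
                        double t1 = now();

                        /* Four independent chains the scheduler can issue in parallel. */
                        uint64_t b0 = 0, b1 = 0, b2 = 0, b3 = 0;
                        for (int r = 0; r < REPS; r++)
                            for (int i = 0; i < N; i += 4) {
                                b0 += data[i];
                                b1 += data[i + 1];
                                b2 += data[i + 2];
                                b3 += data[i + 3];
                            }
                        double t2 = now();

                        printf("1 chain : %.3f s (sum %llu)\n", t1 - t0, (unsigned long long)a);
                        printf("4 chains: %.3f s (sum %llu)\n", t2 - t1,
                               (unsigned long long)(b0 + b1 + b2 + b3));
                        return 0;
                    }

                    Built with something like cc -O2, the four-chain loop usually finishes several times faster on a wide core even though both loops do the same arithmetic; that gap is the instruction-level parallelism the scheduler can actually extract.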



                    • #40
                      Originally posted by drakonas777 View Post

                      I don't get it. So does the M1 support vector instructions or not? I haven't seen SVE/SVE2 support; does it use some custom extensions?
                      It supports NEON (128-bit SIMD). While it doesn't support SVE, NEON code should still run fast because the M1 has a lot of issue width for it: it can issue as many as 4 SIMD operations in parallel per cycle (and each such instruction operates on multiple lanes, of course), so it can extract a lot of parallelism from NEON SIMD code.
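                      As a rough illustration (my own sketch, not from the thread; the array and its size are made up), NEON intrinsics with several independent 128-bit accumulators give a wide core like the M1 plenty of parallel SIMD work to issue each cycle:

                      #include <arm_neon.h>
                      #include <stdio.h>

                      #define N 4096   /* multiple of 16 so the unrolled loop divides evenly */

                      static float data[N];

                      int main(void) {
                          for (int i = 0; i < N; i++)
                              data[i] = (float)i;

                          /* Four independent 128-bit accumulators. */
                          float32x4_t acc0 = vdupq_n_f32(0.0f);
                          float32x4_t acc1 = vdupq_n_f32(0.0f);
                          float32x4_t acc2 = vdupq_n_f32(0.0f);
                          float32x4_t acc3 = vdupq_n_f32(0.0f);

                          /* 16 floats per iteration, spread over four independent chains. */
                          for (int i = 0; i < N; i += 16) {
                              acc0 = vaddq_f32(acc0, vld1q_f32(&data[i]));
                              acc1 = vaddq_f32(acc1, vld1q_f32(&data[i + 4]));
                              acc2 = vaddq_f32(acc2, vld1q_f32(&data[i + 8]));
                              acc3 = vaddq_f32(acc3, vld1q_f32(&data[i + 12]));
                          }

                          /* Fold the four vector accumulators down to one scalar. */
                          float32x4_t acc = vaddq_f32(vaddq_f32(acc0, acc1),
                                                      vaddq_f32(acc2, acc3));
                          printf("sum = %.1f\n", vaddvq_f32(acc));
                          return 0;
                      }

                      Using four separate accumulators instead of one avoids a single serial dependency chain through the vector adds, which is what lets the extra SIMD issue width actually be used.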

