Apple M2 vs. AMD Rembrandt vs. Intel Alder Lake Linux Benchmarks

  • Originally posted by Swifty Rust View Post
    Results are correct. Cross-compile time is nearly identical.
    It's hard to judge just by that benchmark, do you have another one?

    Also, nothing is wrong if the M2 has double the compile speed, all the better. I just want to make sure that both platforms are doing the same thing and we are not comparing apples and bananas.


    • Originally posted by Anux View Post
      > I should note here that for our Chromium compile, we benchmarked compiling the Windows version on Windows, and the Mac version on macOS
      Okaaayy, did they at least use the same compiler and settings? They are really light on info about the test setup there.

      Edit: I read that LLVM has fewer optimizations for ARM; that would explain it being much faster. A cross-compile on either platform would give an indication.
      What a pile of shit, really (I mean what you read). People have been claiming llvm is making Apple shine because llvm uses advanced optimizations and so cheats to make it look better than x86. Do you have a link to support that?


      • Originally posted by arQon View Post

        What part of "one in a thousand" confused you? Or was it all the words calling M1 "a 'meaningful' moment of progress" and such?

        ffs...
        Your argument doesn't make any sense then; you were saying before that the Apple M1/M2 isn't for "real work" even though (at least in the laptop space) it's better at real work than its competitors. There are of course some exceptions, but you are giving the impression that the Apple M1/M2 is like a Chromebook or something.

        Originally posted by ldesnogu View Post
        What a pile of shit, really (I mean what you read). People have been claiming llvm is making Apple shine because llvm uses advanced optimizations and so cheats to make it look better than x86. Do you have a link to support that?
        This whole proposition is hilarious considering that x86 has had a literal decades-long head start in terms of optimisation work and even man-hours, and as we see with Asahi Linux (which is compiled with gcc, not llvm) programs run even faster than the macOS-compiled equivalents.

        Even initially, LLVM was optimised primarily for x86-64 (back then MacBooks were Intel based), and LLVM is open source anyway, which means any such "magic" optimisations can also be implemented in GCC.

        You know that people are desperately scraping the bottom of the barrel when they start arguing that llvm has "magic" optimisations for Apple hardware.
        Last edited by mdedetrich; 11 August 2022, 06:48 AM.


        • Originally posted by Anux View Post
          Secondly, there are multiple reasons at the ISA/silicon level why x86 is worse; x86 is an old ISA
          As is ARM.
          AArch64 (the only ISA Apple Mx chips support) is much more recent. It was announced in 2011.

          No, ARMv8.5 (the base of Apple's M chips) is backwards compatible with ARMv5.
          No it isn't. Even ARMv7 was not compatible with ARMv5. See https://developer.arm.com/documentat...406/cd?lang=en


          • Originally posted by ldesnogu View Post
            AArch64 (the only ISA Apple Mx chips support) is much more recent. It was announced in 2011.


            No it isn't. Even ARMv7 was not compatible with ARMv5. See https://developer.arm.com/documentat...406/cd?lang=en
            Yup, another example of someone not knowing what they are talking about. The M1/M2 chips only support AArch64 (there is no 32-bit support). ARMv8 is backwards compatible with ARMv7, but since the Apple SoCs don't have the optional 32-bit support it's kind of a moot point.


            • Originally posted by mdedetrich View Post
              This whole proposition is hilarious considering that x86 has had a literal decades-long head start in terms of optimisation work and even man-hours, and as we see with Asahi Linux (which is compiled with gcc, not llvm) programs run even faster than the macOS-compiled equivalents.

              Even initially, LLVM was optimised primarily for x86-64 (back then MacBooks were Intel based), and LLVM is open source anyway, which means any such "magic" optimisations can also be implemented in GCC.

              You know that people are desperately scraping the bottom of the barrel when they start arguing that llvm has "magic" optimisations for Apple hardware.
              I think I did not make my point clear.

              Many people claim Apple has a cheating compiler (even if llvm is open source, the version Apple uses isn't, though they often give back code), and that's what made their chips look better than x86 (I saw these silly claims start with AnandTech's SPEC results on M1 chips). This has never been proven (contrary to Intel cheating with icc, for instance).

              But llvm/clang is already very well tuned for AArch64. Apple and Arm, among others, have worked on it for years. I have little doubt it's already using optimizations as expensive as those for x86 (after all, most of the optimizations it uses run on its intermediate SSA-based representation and hence are target independent). So compiling with llvm on x86-64 or on AArch64 should be about equally expensive.

              That being said, I agree with Anux that to make a fair comparison they should at least compile the same code base, and ideally cross-compile for the same target.

              Given all the results I've seen, I have little doubt that'd confirm the benchmark results: Apple Mx machines shine at compiling stuff.
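
              For what it's worth, a fair comparison along those lines could be as simple as timing an identical clang invocation, with an explicit target triple, on each machine. A minimal sketch in Python, assuming clang is installed on both hosts; the source file, triple, and flags below are placeholders for illustration, not the article's actual benchmark setup:

              ```python
              # cross_compile_timer.py -- hedged sketch: run this unchanged on each machine
              # so both hosts compile the SAME code for the SAME target and only the host
              # CPU differs. Assumes bench.c needs no target sysroot headers (otherwise add
              # a --sysroot flag to CMD).
              import subprocess
              import time

              SOURCE = "bench.c"              # placeholder: any fixed C source file
              TARGET = "aarch64-linux-gnu"    # same target triple on every host
              CMD = ["clang", f"--target={TARGET}", "-O2", "-c", SOURCE, "-o", "bench.o"]

              def timed_compile(runs: int = 5) -> float:
                  """Return the best wall-clock time over `runs` identical compiles."""
                  best = float("inf")
                  for _ in range(runs):
                      start = time.perf_counter()
                      subprocess.run(CMD, check=True)
                      best = min(best, time.perf_counter() - start)
                  return best

              if __name__ == "__main__":
                  print(f"best of 5 compiles for {TARGET}: {timed_compile():.3f} s")
              ```
              Because the target triple is fixed, both machines do the same front-end and optimization work, and only the host CPU differs.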


              • Originally posted by ldesnogu View Post
                I think I did not make my point clear.

                Many people claim Apple has a cheating compiler (even if llvm is open source, the version Apple uses isn't, though they often give back code), and that's what made their chips look better than x86 (I saw these silly claims start with AnandTech's SPEC results on M1 chips). This has never been proven (contrary to Intel cheating with icc, for instance).

                But llvm/clang is already very well tuned for AArch64. Apple and Arm, among others, have worked on it for years. I have little doubt it's already using optimizations as expensive as those for x86 (after all, most of the optimizations it uses run on its intermediate SSA-based representation and hence are target independent). So compiling with llvm on x86-64 or on AArch64 should be about equally expensive.
                Oh, your point was crystal clear. When I said "This whole proposition is hilarious" I wasn't aiming it at you but rather at other people who are somehow claiming there are magic optimisations in Apple's LLVM fork that only apply to Apple's silicon, which is not the case (as evidenced by the fact that gcc also performs incredibly well on Apple's M1/M2).

                As you pointed out, the only company that did this was, ironically, Intel with x86 ICC, which was a big deal with AMD. Apple's silicon is standard AArch64 ARM, and while in the past there may have been optimizations that Apple added to their forked LLVM, that was like a decade ago, and pretty much every AArch64-compatible mainstream compiler is within a percent of the others.

                Originally posted by ldesnogu View Post
                That being said, I agree with Anux that to make a fair comparison they should at least compile the same code base, and ideally cross-compile for the same target.

                Given all the results I've seen, I have little doubt that'd confirm the benchmark results: Apple Mx machines shine at compiling stuff.
                Exactly, but Anux is still wrong in this case. There are benchmarks with the exact same targets and Apple's M1/M2 still blows the competition out of the water. And in my case I am dealing with platform-independent bytecode (JVM) being compiled by scalac/javac with OpenJDK, and the difference is night and day; nothing can compete.
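
                To make that concrete, a minimal sketch (Python just to drive the build; Hello.java and the output directory are placeholders) of how such a comparison can be sanity-checked: time a javac build and hash the emitted bytecode. With the same OpenJDK version on both machines the .class files should come out byte-identical, so only the wall-clock time differs:

                ```python
                # jvm_compile_check.py -- hedged sketch: times a javac build and hashes the
                # resulting .class files, so two machines can confirm they produced the same
                # platform-independent bytecode and only the compile time differs.
                import hashlib
                import pathlib
                import subprocess
                import time

                SOURCES = ["Hello.java"]   # placeholder: any fixed set of .java sources
                OUT_DIR = "out"

                start = time.perf_counter()
                subprocess.run(["javac", "-d", OUT_DIR, *SOURCES], check=True)
                elapsed = time.perf_counter() - start

                digest = hashlib.sha256()
                for cls in sorted(pathlib.Path(OUT_DIR).rglob("*.class")):
                    digest.update(cls.read_bytes())

                print(f"compiled in {elapsed:.3f} s, bytecode sha256 {digest.hexdigest()[:16]}")
                ```
                If the hashes match across machines, the two runs really did the same work and the wall-clock difference comes down to the hardware and the JDK build.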
                Last edited by mdedetrich; 11 August 2022, 07:21 AM.


                • Originally posted by ldesnogu View Post
                  What a pile of shit, really (I mean what you read).
                  How about you read what I wrote before making a comment, instead of arguing against me with things that someone else on the internet said? Maybe also try not to insult but to use real arguments.
                  People have been claiming llvm is making Apple shine because llvm uses advanced optimizations and so cheats to make it look better than x86. Do you have a link to support that?
                  And why should I care what other people are claiming? And prove it with a link? What's wrong with you?

                  Let me try it really slowly and only with basic vocabulary. I read somewhere on the net that LLVM has fewer optimizations for ARM than for x86. Fewer is a different word and has nothing to do with magic.
                  Is it true? I don't know, but it would explain what we saw in that benchmark. Is it because ARM code needs fewer optimizations to get the same performance? It could very well be.

                  This would be easily shown with a benchmark that compiles the same code for the same platform. But somehow no one can provide a link to one.

                  Originally posted by ldesnogu View Post
                  No it isn't. Even ARMv7 was not compatible with ARMv5. See https://developer.arm.com/documentat...406/cd?lang=en
                  Dude, your document has 3000 pages; give at least a page number, I won't read that whole thing only to see that you're wrong. Also, ARM officials say otherwise: https://community.arm.com/support-fo...te-and-armv7-a

                  But maybe a simple logical thought is enough to convince you:
                  If v8 is compatible with v7 and v7 is compatible with v6 and v5, what implication lies therein?

                  Originally posted by mdedetrich View Post
                  Yup, another example of someone not knowing what they are talking about.
                  Exactly, and like the many other times it was easily disputed with a short Google search. I don't know why those "ARM is the best!!!" guys want to disgrace themselves in public.

                  The M1/M2 chips only support AArch64 (there is no 32-bit support). ARMv8 is backwards compatible with ARMv7, but since the Apple SoCs don't have the optional 32-bit support it's kind of a moot point.
                  That's why I said the M1/M2 is not really an ARM-compatible chip; it's just a proprietary implementation of AArch64.

                  There are benchmarks with the exact same targets and Apple's M1/M2 still blows the competition out of the water.
                  And again, a simple link to just one of those would stop the whole discussion about this point, but you don't seem to be interested in that?


                  • Originally posted by Anux View Post
                    But maybe a simple logical thought is enough to convince you:
                    If v8 is compatible with v7 and v7 is compatible with v6 and v5, what implication lies therein?
                    Dude, just stop. This is completely wrong, and you already indirectly admitted that you don't have a lot of knowledge on the matter.

                    You cannot run ARMv5/v6 code on an Apple M1/M2, full stop. The binary will just not run; the silicon doesn't support it whatsoever. The only way to run such a binary is with software emulation, but that doesn't prove anything (that's like saying an x86 chip supports PowerPC/Cell because you can run a PlayStation 3 emulator on it).
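
                    To illustrate what "will just not run" means at the file level, here is a minimal sketch that only covers Linux/ELF binaries (e.g. under Asahi; Mach-O on macOS would need a different parser, and the paths are whatever you pass in): the loader checks the binary's machine type before anything else, and a 32-bit ARM binary carries a different e_machine value than an AArch64 one.

                    ```python
                    # elf_arch_check.py -- hedged sketch: reports whether an ELF binary targets
                    # 32-bit ARM or AArch64 by reading the e_machine field of its header.
                    import struct
                    import sys

                    EM_ARM = 40        # 32-bit ARM (ARMv5/v6/v7 user-space code)
                    EM_AARCH64 = 183   # 64-bit ARM (what the M1/M2 cores actually execute)

                    def elf_machine(path: str) -> str:
                        with open(path, "rb") as f:
                            header = f.read(20)
                        if header[:4] != b"\x7fELF":
                            return "not an ELF file"
                        # EI_DATA (byte 5) gives the byte order; e_machine is a u16 at offset 18.
                        endian = "<" if header[5] == 1 else ">"
                        (machine,) = struct.unpack_from(endian + "H", header, 18)
                        return {EM_ARM: "32-bit ARM (no M1/M2 hardware support)",
                                EM_AARCH64: "AArch64"}.get(machine, f"other machine type ({machine})")

                    if __name__ == "__main__":
                        for path in sys.argv[1:]:
                            print(path, "->", elf_machine(path))
                    ```
                    An ARMv5 binary reports EM_ARM there, and the M1/M2 cores have no AArch32 execution state to run it natively.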

                    Originally posted by Anux View Post
                    That's why I said the M1/M2 is not really an ARM-compatible chip; it's just a proprietary implementation of AArch64.
                    This statement makes no sense. Firstly, ARM doesn't make chips; they are in the business of designing ISAs, and Apple's M1/M2 is fully conformant to the AArch64/ARMv8 ISA. Apple had to pay ARM for a license to make such a chip, and if they deviated from the ISA (as you are implying when you say "proprietary") they would have massive legal problems. If you have an ARM license and you make a chip that claims to be ARM, it's quite black and white: it must be compliant.

                    I also don't know what you are trying to say with "proprietary"; the whole point of ARM is that it's just an ISA and that other companies build chips that implement that ISA.

                    Originally posted by Anux View Post
                    And again, a simple link to just one of those would stop the whole discussion about this point, but you don't seem to be interested in that?
                    Considering you don't know what you are talking about and are not arguing in an intellectually honest manner, I am not surprised that no one has bothered.
                    Last edited by mdedetrich; 11 August 2022, 09:05 AM.


                    • Originally posted by mdedetrich View Post
                      You know that people are desperately scraping the bottom of the barrel when they start arguing that llvm has "magic" optimisations for Apple hardware.
                      Nobody cares about general-purpose optimizations. We all know Apple (and ARM) are dogshit there, except Apple overloaded it with cache to "magically" do better on memory-intensive tasks because the ISA is so bad.

                      But no, that was about using their stupid accelerators, which have nothing to do with the ISA, but idiots will come in and say it's ARM doing the whole thing, etc. Like the neural processing accelerator, for instance. Retards everywhere.
