Apple M2 vs. AMD Rembrandt vs. Intel Alder Lake Linux Benchmarks

  • Originally posted by mdedetrich View Post
    This is completely wrong
    So an official ARM dev is lying about compatibility? Why? Anything to back that up apart from insults?

    You cannot run ARM 5/6 on Apple M1/M2 full stop.
    What are you on about? No one ever said that. Are my comments that hard to read/understand? Is my English that bad?

    This statement makes no sense.
    Didn't you just write a few lines above that the M1/M2 can't execute ARMv7? Make up your mind, please; you look really ridiculous if you contradict yourself in the same comment.

    Firstly ARM doesn't make chips
    Yes, and? No one ever said that. What's wrong with you? I get the feeling you're just a troll who is deliberately misreading what I write.

    Apple's M1/M2 is fully conformant to the AArch64/ARMv8 ISA.
    Dude, you're hopeless; you jump between contradictory positions like you're on a trampoline.

    ARMv8:
    optional: AArch64
    mandatory: ARMv7 compat. (implying v6, v5 and AArch32)

    ARMv9:
    optional: AArch32
    mandatory: AArch64

    It's all documented on Wikipedia and arm.com, but I guess they are all Apple and ARM haters who are lying about it or have no clue. https://developer.arm.com/Architectu...20Architecture

    Apple had to pay ARM a license to make such a chip, and if they deviated in a different way (as you are implying when you say "proprietary") they would have massive legal problems. If you have an ARM license and you make a chip that claims to be ARM, it's quite black and white: it must be compliant.
    Aren't you the one claiming I don't know what I'm talking about? There is more than one way of licensing; which kind of license Apple has is now for you to research.

    I also don't know what you are trying to say when you mean "proprietary"
    Exactly what it means: Apple has implemented an ISA that is not 100% compatible with ARMv8 and is therefore not adhering to the ARMv8 standard.

    Considering you don't know what you are talking about and are not arguing in an intellectually honest manner, I am not surprised that no one has bothered.
    Strange that this comes from someone who has made false claims multiple times, which I easily disputed with facts and links. What about showing what's wrong in my links and backing it up with better links? But yeah, I guess "that's not how science works"; you have to ignore arguments for real science.

    The trolling is getting a little too much; from now on I will only respond to reasonable and backed-up arguments. And the rest will remain on that site to disgrace you guys.



    • Originally posted by Anux View Post
      So an official ARM dev is lying about compatibility? Why? Anything to back that up apart from insults?
      No, you just couldn't read what he said correctly. A physical ARM chip can support as many ARM variants as it wants, just as an x86-64 processor can also support 32-bit x86 if it wants. It just so happens that the Apple M1/M2 only supports ARMv8/v7.

      Originally posted by Anux View Post
      What are you on about? No one ever said that. Are my comments that hard to read/understand? Is my English that bad?
      Yes

      Originally posted by Anux View Post
      Didn't you just write a few lines above that the M1/M2 can't execute ARMv7? Make up your mind, please; you look really ridiculous if you contradict yourself in the same comment.
      The ARMv8 ISA is broadly backwards compatible with ARMv7, but the M1/M2 won't run ARM binaries from before v7.

      Originally posted by Anux View Post
      ARMv8:
      optional: AArch64
      mandatory: ARMv7 compat. (implying v6, v5 and AArch32)
      M1/M2 cannot run AArch32 or v6 or earlier


      Originally posted by Anux View Post
      Exactly what it means: Apple has implemented an ISA that is not 100% compatible with ARMv8 and is therefore not adhering to the ARMv8 standard.
      Wrong, otherwise they wouldn't even have an ARM license. AArch64 doesn't mandate 32-bit support; it's optional.

      Originally posted by Anux View Post
      Strange that this comes from someone that has made false claims multiple times and I easily disputed them with facts and links. What about showing whats wrong in my links and backing it up with better links? But yeah I guess "that's not how science works" you have to ignore arguments for real science.
      Step 1: Write hello world.
      Step 2: Compile it with a C compiler that supports ARM, setting the target to ARMv6 (or earlier) in the architecture triple. Since this version of ARM is deprecated you will probably have to find some old compiler/GCC version.
      Step 3: Try to run it on the M1/M2 (a minimal sketch is below).
      Step 4: Observe how it won't run.
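
      For illustration only, a minimal sketch of that experiment; the toolchain name, the flags and the idea of running it under a 64-bit ARM Linux environment on the M1/M2 are assumptions for the sketch, not something Apple documents:

      /* hello32.c - sketch of steps 1-4 above.
       * Assumed build (Debian-style cross toolchain; names/flags may differ):
       *   arm-linux-gnueabi-gcc -march=armv6 -static -o hello32 hello32.c
       * Assumed run: a 64-bit ARM Linux environment on the M1/M2 (e.g. a VM).
       * If the cores do not implement AArch32 at EL0, the kernel cannot run
       * the 32-bit binary and execve() is expected to fail with
       * "Exec format error". */
      #include <stdio.h>

      int main(void)
      {
          printf("hello from 32-bit ARM\n");
          return 0;
      }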

      Originally posted by Anux View Post
      The trolling is getting a little too much; from now on I will only respond to reasonable and backed-up arguments. And the rest will remain on that site to disgrace you guys.
      You should just stop responding in general.
      Last edited by mdedetrich; 11 August 2022, 10:40 AM.



      • Originally posted by Anux View Post
        What about reading what I wrote before making a comment, and not arguing against me with things that someone else on the internet said? Maybe also try not to insult, but use real arguments.
        You obviously have a reading comprehension problem: I made it explicit that what I called shit is what you read, not what you think. So either you don't understand what you read, or you're here to argue without any argument.

        The rest of the posts prove one thing: it's you who is here to insult others. Nowhere did I insult you.

        And why should I care what people are claiming? And prove it with a link? What's wrong with you?
        You are the one saying that other people say this or that, and then you say you don't care?

        Let me try it really slowly and only with basic vocabulary. I have read somewhere on the net that LLVM has less optimizations for ARM than for x86. Less is a different word and has nothing to do with magic.
        Is it true? I don't know, but it would explain what we saw in that benchmark. Is it because ARM code needs less optimization to get the same performance? Could very well be.
        Where did you read that? Or don't you care?

        Dude, your document has 3000 pages; give at least a page number. I won't read that whole thing only to see that you're wrong. Also, ARM officials say otherwise: https://community.arm.com/support-fo...te-and-armv7-a
        You're not able to read the table of contents? Or to use the search function of a PDF reader? So I will give you a hint: unaligned memory accesses. I will even give you a section number, D15.3.1.
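
        A rough illustration of the kind of access that section is about (a sketch only; the buffer contents are made up for the example):

        /* unaligned.c - illustrative sketch of the unaligned-access issue
         * referenced above (Arm ARM, section D15.3.1). */
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            unsigned char buf[8] = {0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88};
            uint32_t v;

            /* Alignment-safe: memcpy behaves the same on every ARM revision. */
            memcpy(&v, buf + 1, sizeof v);
            printf("0x%08x\n", (unsigned int)v);

            /* An unaligned word load like the one below is where revisions
             * historically differed: ARMv5-era code could rely on the rotated
             * load, while ARMv6/v7/v8 either perform the access or fault,
             * depending on configuration (SCTLR.A). It is also undefined
             * behaviour in C, so it stays commented out:
             *   uint32_t bad = *(uint32_t *)(buf + 1);
             */
            return 0;
        }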

        But maybe a simple logical thought is enough to convince you:
        If v8 is compatible with v7 and v7 is compatible with v6 and v5, what implication lies therein?
        Given ARMv7 is not compatible with ARMv5, your logic falls down.



        • Originally posted by Anux View Post
          So an official ARM dev is lying about compatibility? Why? Anything to back that up apart from insults?
          A support guy from Arm got it wrong, it happens.

          ARMv8:
          optional: AArch64
          mandatory: ARMv7 compat. (implying v6, v5 and AArch32)
          That's wrong, cf. my previous answer.

          ARMv9:
          optional: AArch32
          mandatory: AArch64
          You got it wrong: ARMv9 only has optional support for user mode AArch32 (EL0). It can never have full AArch32 system support (read: you won't run a 32-bit kernel on an ARMv9 CPU).

          EDIT: I forgot to say ARMv8-A Cortex-A77 and A78 don't support AArch32 beyond EL0. So no 32-bit kernel either. And no they are not ARMv9.

          The Arm Cortex-A77 CPU is the third-generation premium core built on DynamIQ technology.

          The Arm Cortex-A78 CPU is the fourth-generation premium core built on DynamIQ technology.
          Last edited by ldesnogu; 11 August 2022, 11:01 AM.



          • Originally posted by ldesnogu View Post
            Where did you read that? Or don't you care?
            I think it was Stack Overflow but I can't find it right now. Which is not the end of the world, because I never claimed it to be true. I made a hypothetical guess and mentioned it to explain my way of thinking.

            You're not able to read the table of contents?
            You mean the one that has nothing about missing backwards compatibility in it?

            unaligned memory accesses. I will even give you a section number, D15.3.1.
            So a section describing how v4/v5 and v6 legacy behave differently from v6 and v7 on an ARMv7 chip proves it's not backwards compatible?

            Originally posted by ldesnogu View Post
            A support guy from Arm got it wrong, it happens.
            Stop being ridiculous; that guy worked in Arm's application engineering group. He writes actual ARM code, he's not just a call-center dude.

            I guess another link from the net won't convince you either? http://landley.net/aboriginal/architectures.html#arm

            Or maybe an example: do you have a RasPi?
            I've got the Pi 3 (v8 with 64-bit) and I can install an ARMv6-compiled Raspbian. Michael did a test here maybe 2 years ago, so you don't have to take my word for it.

            Edit: Sorry, the test was v7 vs. 64-bit https://www.phoronix.com/review/raspberrypi-32bit-64bit but if you have a RasPi >= 3 or know someone who does, you can download the old Raspbian for v6 and run it (a quick way to check what it is running is sketched below).
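
            Only as a sketch (nothing Pi-specific is assumed beyond a Linux userland), a quick way to see which architecture the booted kernel reports:

            /* archcheck.c - prints the machine architecture reported by uname(2).
             * On a 32-bit Raspbian image this typically shows armv6l or armv7l
             * (depending on which kernel the Pi boots); a 64-bit image shows
             * aarch64. */
            #include <stdio.h>
            #include <sys/utsname.h>

            int main(void)
            {
                struct utsname u;
                if (uname(&u) != 0) {
                    perror("uname");
                    return 1;
                }
                printf("machine: %s\n", u.machine);
                return 0;
            }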

            You got it wrong: ARMv9 only has optional support for user mode AArch32 (EL0). It can never have full AArch32 system support (read: you won't run a 32-bit kernel on an ARMv9 CPU).

            EDIT: I forgot to say ARMv8-A Cortex-A77 and A78 don't support AArch32 beyond EL0. So no 32-bit kernel either. And no they are not ARMv9.

            https://developer.arm.com/Processors...Specifications
            https://developer.arm.com/Processors/Cortex-A78
            You're right, so my claim that Apple's M1/M2 is not standard ARMv8 only holds true if they didn't implement A32 and T32 at EL0. Some guys on the net say no: https://news.ycombinator.com/item?id=27277351 but I can't find a definitive answer on Apple's dev sites.
            Last edited by Anux; 11 August 2022, 05:58 PM.



            • Originally posted by mdedetrich View Post
              Again wrong, I am compiling Java programs where the output is platform-independent Java bytecode (i.e. .class files), and my M1 Pro is around 3-11 times faster compared to a ThinkPad T14s Gen 2/Carbon X1 Gen 9.
              What CPUs are in those ThinkPads? One of the options is an i7-1280P, which is a 14-core CPU. That would be a closer match to the M1 Pro, particularly the 16" model.
              If you actually have the machine (evidently you don't) you can see how ridiculous the margin is when it comes to compiling code.
              I don't buy mistakes like Apple products.



              • Originally posted by Dukenukemx View Post
                What CPUs are in those ThinkPads? One of the options is an i7-1280P, which is a 14-core CPU. That would be a closer match to the M1 Pro, particularly the 16" model.
                The T14s is an 8-core/16-thread AMD CPU; still no match. Also, my M1 Pro is the 14", not the 16", and its fans almost never turn on (even when compiling code).

                To be blunt, in terms of compiling code my M1 Pro is competing against a 5950X desktop CPU.

                Originally posted by Dukenukemx View Post
                I don't buy mistakes like Apple products.
                If you classify the M1/M2 as a mistake then Intel/AMD products are a catastrophe.



                • Originally posted by mdedetrich View Post

                  The T14s is an 8-core/16-thread AMD CPU; still no match. Also, my M1 Pro is the 14", not the 16", and its fans almost never turn on (even when compiling code).

                  To be blunt, in terms of compiling code my M1 Pro is competing against a 5950X desktop CPU.
                  So what are you compiling for exactly?
                  If you classify the M1/M2 as a mistake then Intel/AMD products are a catastrophe.
                  Sure.



                  • Originally posted by drakonas777 View Post
                    I am not saying that the ARM ISA is not a contributing factor in the M1/M2's properties. I'm saying it's not the "practically single", most important contributing factor.

                    Listen, guys, instead of arguing I suggest we do an experiment in the future. Hear me out. Let's wait for the Intel 4 / TSMC N5/N4 x86 SoCs to emerge, so that we have a comparable node. Then let's wait for the last CPU SKUs within these nodes, to have the latest and most advanced x86 cores in them. I guess it will be Zen 4+/Zen 5 for AMD and Arrow/Lunar Lake for Intel. OK, after this happens, let's take one of those Intel/AMD APUs whose die space/transistor count/etc. (matter of discussion) comes as close as possible to, say, the M2. After that, let's install Linux distros on them that are as close as possible, and let's disable all the accelerators, if any. And then let's do some testing on the pure general-purpose cores, with a multi-threaded workload at the same power limit. And then we will see how the M2's ARM is destroying those x86 chips, OK? LOL
                    You will not be able to compare AMD 5nm chips with the Apple SoC... why? Simple: the AMD 5nm CPUs will have a 6nm IO die... and then again you can claim it is not the ARM ISA, it is instead the 6nm node of the IO die.

                    I say this: the nm node does not matter at all; all that matters is the products out in the hands of the people.



                    • Originally posted by Dukenukemx View Post
                      Because that's how we have made chips faster and more efficient by making the transistors smaller.
                      This is plain and simply wrong. Modern nodes do not make the transistors smaller; instead they build 3D structures to put more transistors in the same 2D area, meaning the density goes up without the transistors getting smaller.

                      If you want a classic 2D transistor node you maybe get some 28nm or 22nm; the very best is 14nm, or 12nm if you are lucky...

                      Intel's 10nm node is so bad because Intel started the 3D stuff a long time ago. It was AMD Bulldozer on the 32nm node that was 2D transistors... and at the same time Intel only had 45nm in 2D terms, but in the 3D world it had the density of a 28/22nm node.

                      From that time on, over 10 years ago, all high-tech nodes DID NOT MAKE THE TRANSISTORS SMALLER; instead they built 3D structures.

                      Now let's take the most extreme variant known today in 2022... it is the IBM 2nm node... if you put it under a microscope, no element is smaller than 6-7nm... if you measure it with classic 2D logic...
                      So how do they get 2nm out of 7nm structures? It's simple: it has three 3D layers... every transistor position has 3 transistors stacked on top of each other.

                      This 3D-stacked transistor stuff is very complicated and very expensive; that's why projects like libre-soc, for example, more likely aim for classic 2D nodes at 12nm/14nm/22nm/28nm...

                      These 3D-stacked high-density nodes have a problem with dark silicon...


                      In the classic 2D design world there were chips without any dark silicon...

                      But the more 3D the nodes go, the more dark silicon there is, meaning you cannot fire up all the transistors at the same time in the same area, because that would destroy the chip due to too high an electrical current ("amperes").

                      Because of this dark silicon problem it is very expensive to develop for these 3D high-tech nodes like 2nm, 3nm, 4nm or 5nm...

                      And we will see many projects like libre-soc stay on these outdated 2D nodes because they avoid the cost-explosion disaster of this dark silicon problem...

