Apple M2 vs. AMD Rembrandt vs. Intel Alder Lake Linux Benchmarks


  • Originally posted by Dukenukemx View Post
    That sounds like 3D stacking.
    I'm not gonna pretend I fully understand what goes on in chip manufacturing but this is the first I've ever heard this. As far as I know a lower nm means a smaller transistor.
    Stop talking nonsense: "As far as I know a lower nm means a smaller transistor."

    Just go back to the time when all chips were still 2D: AMD was on its 32nm SOI node and Intel on its 45nm node.

    "this is the first I've ever heard this."

    Well, not many people are this deep into the topic, and I am a person who does not copy other people, which means talking with me is unique.

    At 45nm Intel discovered that the price of making transistors smaller rises very fast, to the point that a node becomes too expensive for any product. So they went in a different direction: similar to 100GB BDXL Blu-ray discs, they started to use multiple layers. It is like the 3D stacking of the 5800X3D, except they do not stack finished chips; instead they build the die so that transistor structure sits above transistor structure in 3D.

    As you know, at the time of AMD's Bulldozer on 32nm, Intel could easily beat AMD because they managed to make their 45nm node look like a 22/28nm node if you judge it by 2D density.

    Intel had two problems: a larger real feature size means lower clock speed, and bigger transistors mean more heat.

    But they fixed both through the ISA war and, plain and simple, bigger chips with more transistors at lower clock speeds.

    Remember, an overclocked Bulldozer on 32nm could easily beat any Intel CPU in clock speed.

    Originally posted by Dukenukemx View Post
    Though there are many ways to measure the size of a transistor, and it also doesn't mean that going from 14nm to 7nm gets you a 50% smaller chip.
    All the companies, Intel included, did in fact stop naming their nodes after any real measurement you could make under a microscope. You could still do that with AMD's 32nm SOI and Intel's 45nm; after that, nothing you could measure had anything to do with the name they put on a node.

    Today they all do it the same way: they count the number of transistors per 1mm² and then work out which 2D node that density would correspond to. This means IBM's 2nm node has the density of a hypothetical 2nm planar node, but because it is a 3D design, nothing on that node will measure 2nm under a microscope.
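    To make the density-based naming concrete, here is a rough sketch in Python. The density figures are approximate numbers that circulate publicly (not official vendor data), and the "equivalent planar node" formula is only an illustration of the 1/node² scaling idea, not an industry-standard calculation:

```python
import math

# Approximate logic densities in millions of transistors per mm^2.
# Rough public figures, used here only for illustration.
densities_mtr_per_mm2 = {
    "Intel 14nm":          37.5,
    "TSMC N7":             91.2,
    "TSMC N5":            138.2,
    "IBM '2nm' test chip": 333.0,
}

# If feature sizes really shrank in 2D, density would scale roughly as 1/node^2,
# so working backwards from density gives a hypothetical "equivalent planar node".
REF_NODE_NM = 14.0
REF_DENSITY = densities_mtr_per_mm2["Intel 14nm"]

for name, density in densities_mtr_per_mm2.items():
    equivalent_nm = REF_NODE_NM * math.sqrt(REF_DENSITY / density)
    print(f"{name:20s} ~{density:6.1f} MTr/mm^2 -> ~{equivalent_nm:4.1f} nm equivalent planar node")
```

    The point is simply that the marketing name tracks density, not anything you could put a ruler on under a microscope.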

    Originally posted by Dukenukemx View Post
    Going by TSMC, they said the 3nm process can reduce power consumption by up to 45%, improve performance by 23% and reduce area by 16% compared to 5nm. They said this, which is probably why Apple and Intel bought a ton of 3nm for the next couple of years. Though now Intel may drop 3nm due to delays. It seems TSMC's 3nm is yet again delayed, and Intel can't afford to wait. The article did say that Meteor Lake without 3nm would be slower, but this also means no Apple Silicon running on 3nm next year. Intel has decided to go with TSMC's 5nm instead. This might also explain why AMD had no plans to use 3nm next year, but instead stuck with 5nm and 4nm.
    This all has nothing to do with the point I want to make. Look up the cost explosion of designing 2nm/3nm/4nm/5nm chips; it comes from the fact that these are 3D designs with stacked gate-all-around transistors.
    Dark silicon is such a big problem on these modern nodes that the companies have to produce and sell products on them for longer, just to make sure the huge costs are earned back through sales.

    For example, Apple dropped 3nm/4nm for the iPhone chips because the costs are so high that the price of a smartphone would get out of control...

    I found some links for you:
    " Intel reinvents transistors using new 3-D structure for 22 nanometer chips"
    Intel Corporation today announced a significant breakthrough in the evolution of the transistor, the microscopic building block of modern electronics. For the first time since the invention of silicon transistors over 50 years ago, transistors using a three-dimensional structure will be put into high-volume manufacturing.


    This is an article from 2011; it is the history of how Intel defeated AMD.
    In terms of 2D planar transistors AMD had the better node, but Intel managed to make their inferior 45nm node look like a 22/28nm node.

    3D requires radical changes in the way devices are designed, necessitating new deposition and etch approaches.


    And another article about building transistors on wafers in 3D to get more density:

    file:///home/q/Downloads/applsci-07-01047.pdf

    This is a PDF about CMOS transistors going from 2D to 3D to get maximum density.

    Phantom circuit Sequence Reducer Dyslexia

    Comment


    • Originally posted by Dukenukemx View Post
      Did you know Apple will void the warranty if you smoke? I don't blame them because smoker's dust is awful to clean. I have to literally wash the motherboard with soap and water. You can't blow it clean.
      Well yes, it's awful... but if you are a smoker you'd better buy a passively cooled system.
      Also, that warranty problem with Apple was on actively cooled hardware, not passively cooled.
      For smokers it is still best to just buy a passively cooled device.

      Originally posted by Dukenukemx View Post
      You think so but that's until you actually run a game on them.
      Well, I have run my smartphones for years and never had a problem with that.
      But of course, technically speaking, you would need a water-cooled smartphone to avoid thermal problems entirely.

      Originally posted by Dukenukemx View Post
      I didn't say I can't fix Apple products, and there are a lot of stupid things that go wrong in Apple products. I don't do board repairs like Louis Rossmann, though I will do it if I know which component failed. Most of the time it's a broken screen or a bad battery, but there are times when I gotta ask, "WTF Apple?" That usually results in me having to find a new motherboard, which usually ends up costing so much the customer doesn't want to fix it. No liquid damage, just a blown motherboard. This is why I watch a lot of Louis Rossmann videos, so I can try to diagnose a bad board and fix it.
      Well yes, I understand. I try to avoid such products: I buy a cheap €100 smartphone and use it for many years.
      My PC is built from easily replaceable hardware parts.
      Notebooks that cannot be fixed, I simply avoid.
      I have learned that the best repair you can have is to avoid such products in the first place.
      I have professional training, one year in electronics and three years in electrical work, and I tell you something: I could not repair any of these products the way your Louis Rossmann example does...
      That does not mean I learned nothing in my training, but if you can avoid it you should avoid it. And I know you cannot avoid it, because you earn your money repairing this stuff.

      Originally posted by Dukenukemx View Post
      That's not to say that Apple is the only bad manufacturer. HP/Compaq laptops had a number of bad solder joints in the early 2010's just like the Xbox 360. Lenovo recently had a slew of problems that resulted in dead motherboards in their laptops. My guess would be a lack of cooling as the VRMs on the boards have a thin plate of aluminum to draw heat.
      My last x86 HP notebook had a trojan horse in the UEFI BIOS, and even a complete reinstall of Fedora did not remove it. So you talk about "bad solder joints"... but that laptop ran for years, and in the end I gave it away to poor people, just to get rid of the trojan horse inside the device.

      Do you remember the GeForce 9500 notebook disaster? I had a laptop with an Nvidia GPU and it died because of bad solder under the GPU...

      To be honest, I do not fear bad solder joints; if that happens I just buy a new one.
      What I really fear is a trojan horse inside the UEFI BIOS...

      Originally posted by Dukenukemx View Post
      What games have you been playing? Cyberpunk 2077 eats some serious ram. That depends on the game but some games don't while some do. Elden Ring just loading the game up will use 4GB, though it does recommend 12GB of system memory.
      I have 280 games in my Steam account... most of the time I do not have time to play them.
      If you want an example of a game that uses a lot of CPU power, any RTS with many units does, like Homeworld 2.
      Or if you want a multicore example: Ashes of the Singularity...

      But I like the technology...

      Originally posted by Dukenukemx View Post
      The PS4 OS reserved 2.5GB of ram, so it's probably safe to assume the PS5 will also reserve at least 2.5GB for the OS. Also, the PS5 uses 1GB of ram for 4K textures just for the UI. Also also, it reserves another 1GB of ram for apps. Whatever is left over for game textures isn't 12.5GB. Assuming 16GB minus 4GB of OS+reserves results in 12GB for the game. If you were playing Elden Ring then 4GB is used for the game engine, resulting in 8GB for textures.
      It does not matter whether it is 1.5GB, 2GB or 2.5GB; you always end up with the fact that the PS5 has more memory available for graphics than a 6GB card.

      "If you were playing Elden Ring then 4GB is used for the game engine, resulting in 8GB for textures."

      And this proves my point exactly... your 6GB GTX 1060 cannot hold those textures, and because of that the result will always be different.

      Now you can say a 2GB VRAM difference is not a big deal, and yes, many people can live with it.
      But to claim the 1060 can deliver the same visual result as a PS5 is plain and simple wrong.
      We could say that about a Vega 64 with 8GB of VRAM, but even that is not true because of ray tracing.
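      As a back-of-the-envelope sketch of the memory arithmetic we are both quoting (the numbers are the ones from this thread, not official figures):

```python
# PS5: 16 GB of unified GDDR6 shared between the OS, the game engine and assets.
# All figures below are the ones quoted in this thread, not official Sony numbers.
total_unified_gb   = 16.0
os_and_reserves_gb = 4.0   # ~2.5 GB OS + ~1 GB UI textures + reserved apps
engine_gb          = 4.0   # the Elden Ring engine figure quoted above

textures_budget_gb = total_unified_gb - os_and_reserves_gb - engine_gb
print(f"PS5 budget left for textures/assets: ~{textures_budget_gb:.1f} GB")

# A discrete card keeps system RAM separate, so the comparison is only about
# how much of that asset data fits in video memory at once.
gtx1060_vram_gb = 6.0
print(f"GTX 1060 VRAM for everything GPU-side: {gtx1060_vram_gb:.1f} GB")
```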

      Originally posted by Dukenukemx View Post
      The Xbox Series X which is arguably more powerful has 10GB of RAM at high speed and 3.5GB at standard speed. Which means most likely 10GB is reserved for textures, and also Microsoft does a better job with relieving their ram than Sony.
      Yes, I know the PS5 is better at desktop work like browsing the internet because more RAM can be used... but the Xbox looks better in games.

      But I have read multiple times that the PS5 is easier to develop for compared to the Xbox, which looks more like a standard PC with RAM for the CPU and VRAM for the GPU.

      I think the PS5 wins in the end because of the texture-streaming co-processor... which loads textures directly from the SSD, bypassing the CPU.

      Originally posted by Dukenukemx View Post
      Also remember that the CPU and GPU have to fight over that ram and bandwidth, which means it's even slower. This is also the same problem with Apple's Silicon, and probably why it isn't at RTX 3090 levels of performance despite the insane amount of bandwidth
      I think in the near future SoCs will win big, even against RTX 3090-class hardware.

      The SoCs of today are not there yet... but remember the AMD 5800X3D:
      as soon as they 3D-stack memory/cache and do more transistor-level 3D stacking at 2nm,
      and so on and so on, classic hardware like a 12900K + RTX 3090 will lose in the end.

      The Apple M2 SoC will go from 20 billion transistors to 60 billion transistors in the next two years...

      Apple will also start to 3D-stack cache directly on the SoC.

      Originally posted by Dukenukemx View Post
      You have the same GPU as me, but with 8 more streaming cores. Also, what makes you think I'm using a Vega 56 forever? I'm waiting for the GPU market to crash to pickup a 6600XT.
      The 6600 XT is only about 7% faster than the Vega 64... and for ray tracing the 6600 XT is too slow.

      Don't waste your money like that... buy an AMD Radeon 7600 XT instead.

      Originally posted by Dukenukemx View Post
      No it won't. The GPU is faster but that won't make a difference in gaming. The CPU you have is first-gen Ryzen while I have second gen, also known as Zen+.
      I'll tell you why the CPU is NEVER the bottleneck and the GPU always is:
      I have a 65" 4K display. In every game I play, the Vega 64 is what limits the FPS and the CPU never touches 100%.
      That is why your 7% faster single-core performance does not matter at all for 4K gaming.
      It is always the bottleneck that is the problem, and at 4K it is always the Vega 64 that is too slow.

      I think with your Vega 56 you are not a 4K gamer... you probably game at 2K.

      Originally posted by Dukenukemx View Post
      How's your ram speed? I have 3000MHz ram and I tweaked the memory settings in the bios. I also have a Ryzen 7 1700 which is first gen, and getting 3000MHz ram to work is difficult. Much worse if you have 4 sticks of ram. The 2700X has only 2 sticks, which makes reaching 3000MHz easy. The 1700 I have is overclocked to 3.8GHz, but the ram is only at 2600MHz. I also put a huge copper heatsink on the VRMs to keep them extra cool.
      I have 3200MHz RAM, but I'm sure it runs slower than that if I check in the BIOS. It does not matter at all, though:
      at 4K the GPU is the bottleneck, not the RAM and not the CPU.

      "Much worse if you have 4 sticks of ram."

      Man, you are funny... I have 8 sticks of RAM, and they are only single-sided modules; with the full double-sided version it would be 256GB of RAM in my system...

      "The 1700 I have is overclocked to 3.8GHz"

      My system is water-cooled, has a 2000W PSU and an expensive overclocking mainboard, so I could easily overclock it.
      But the joke is: I have no reason to, because the system is already faster than I need.

      Originally posted by Dukenukemx View Post
      Games now use as much as 6 cores, and even then that's not super needed. Most games will run fine on 4 cores. My 8 core is overkill. Your 12 core is super overkill.
      Again, it depends on the game... with Ashes of the Singularity you can bring down a 32-core Threadripper if you want.
      The reason is simple: it is a Mantle/Vulkan game and you can put as many units on the map as you like.
      At 5,000 or 20,000 units the CPU will give out, and of course the game is very well multithreaded.

      Originally posted by Dukenukemx View Post
      You're using ECC which is slower. Good for servers, not so much for gaming. Also I doubt you have 2 sticks of 64GB ECC ram. What clock speed do you even run the ram at?
      ECC is slower, yes, but again, at 4K gaming the CPU and RAM are not the bottleneck.

      Also, if you want to use hardware for a long time: non-ECC RAM may fail sooner, while ECC RAM will keep running for a long time even when the modules start to produce errors.

      "Good for servers, not so much for gaming."

      You should rethink that deeply. If you overclock normal RAM without ECC, you get no error reports when the RAM starts to fail because of too much overclocking.
      Only ECC RAM can be overclocked safely, because as soon as you clock it too high there will be error reports, and even then the RAM will usually keep working without problems thanks to ECC.
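      If you want to actually watch those corrected errors on Linux, here is a minimal sketch. It assumes your platform's EDAC driver is loaded and exposes the usual counters under /sys/devices/system/edac/mc; the exact layout can differ between kernels and boards:

```python
#!/usr/bin/env python3
"""Print the ECC error counts reported by the kernel's EDAC layer."""
from pathlib import Path

EDAC_ROOT = Path("/sys/devices/system/edac/mc")

def read_count(path: Path) -> int:
    try:
        return int(path.read_text().strip())
    except (OSError, ValueError):
        return 0  # counter missing or unreadable on this platform

for mc in sorted(EDAC_ROOT.glob("mc[0-9]*")):
    ce = read_count(mc / "ce_count")  # corrected errors (ECC fixed them)
    ue = read_count(mc / "ue_count")  # uncorrected errors (data was lost)
    print(f"{mc.name}: corrected={ce} uncorrected={ue}")
```

      A rising corrected-error count while you tighten memory clocks or timings is exactly the early warning that non-ECC RAM never gives you.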

      "Also I doubt you have 2 sticks of 64GB ECC ram"

      it is 8 dimms every dimm has 16gb ...

      "What clock speed do you even run the ram at?"

      i need to go to bios to check this. but the possibility is high that the ram do not operate at 3200mhz.

      Originally posted by Dukenukemx View Post
      Who cares, my motherboard was only $50, remember? B550 motherboards are cheap and plentiful, but I have no intention to upgrade anytime soon. ASRock B550 Phantom Gaming for $95, and that's with current pricing. Besides, AMD is gonna release new CPUs, which means AM5 is the future. Patience is a virtue.
      Doesn't matter to me since I use Linux Mint. Which means my games run on top of Vulkan through DXVK.
      Well, of course my system and your system are not on the same level in terms of money.
      My systems are already 5 years old and I am sure I will use them for many more years,
      so their cost is relative over such a long time.
      Phantom circuit Sequence Reducer Dyslexia

      Comment


      • Originally posted by Anux View Post
        I think it was stackoverflow but can't find it right now. Which is not the end, because I never claimed it to be true. I made a hypothetical guess and mentioned it to explain my way of thinking.
        And I agreed to your conclusion that a fair comparison would be to cross-compile the same code base (though I'd add you would have to compile the cross-compiler itself on both machines, not all distributions use the same flags when building software).

        You mean the one that has nothing about missing backwards compatibility in it?

        So a section describing how v4/5 and v6 legacy behave differently from v6 and v7 on an ARMv7 chip proves it's not backwards compatible?
        I'm speechless. I show you a section that explicitly explains how ARMv4/v5 instructions behave differently from ARMv7 and you still claim compatibility is ensured?

        Note that ARMv6 had a compatibility bit to enable the same behavior as ARMv5, but it was removed from ARMv7 thus making some ARMv5 software not portable to ARMv7 and hence ARMv8.

        Here is an interesting document describing this difference (and others): https://www.riscosopen.org/wiki/docu...ility%20primer
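        For anyone wondering what that difference looks like in practice: the classic example is an unaligned LDR. As a toy illustration (a Python simulation of the two documented behaviours, not real ARM code), the legacy ARMv5 alignment model returns the aligned word rotated by the misalignment, while ARMv7 with unaligned access support performs a true unaligned load:

```python
import struct

def ldr_legacy(mem: bytes, addr: int) -> int:
    """ARMv5-style LDR from an unaligned address (little-endian):
    load the word at the aligned address, then rotate it right
    by 8 bits per byte of misalignment."""
    word = struct.unpack_from("<I", mem, addr & ~3)[0]
    rot = 8 * (addr & 3)
    return ((word >> rot) | (word << (32 - rot))) & 0xFFFFFFFF if rot else word

def ldr_v7_unaligned(mem: bytes, addr: int) -> int:
    """ARMv7-style LDR with unaligned access enabled: a true unaligned load."""
    return struct.unpack_from("<I", mem, addr)[0]

mem = bytes([0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88])
print(hex(ldr_legacy(mem, 1)))        # 0x11443322 -- rotated aligned word
print(hex(ldr_v7_unaligned(mem, 1)))  # 0x55443322 -- the bytes at addresses 1..4
```

        Old code that relied on the rotate-and-mask behaviour reads different values once that compatibility bit is gone, which is exactly the kind of breakage being described.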

        Stop being ridiculous, that guy worked for Arm's Application engineering group. He writes actual ARM code, not just a call center dude.
        I hate to use that kind of argument because it proves nothing, but since you started with the credential stupidity: I've been working for Arm for 17 years, including some years in the design team of AArch64. And given how thick-headed you are, here you go: https://sourceware.org/pipermail/bin...st/077788.html
        From that you can find me on LinkedIn and see what I do. And that doesn't prove I'm always right, I'm not. It just proves your argument is stupid, or, given my work, you'll have to admit that I'm always right.

        (And I'm not speaking for Arm, all of my posts are personal opinions.)

        I guess another link from the net won't convince you either? http://landley.net/aboriginal/architectures.html#arm
        No, it won't convince me. You can link whatever random posts on the net you want, the ARM Architecture Manual clearly describes an incompatibility between ARMv5 and ARMv7+ architectures.

        Or maybe an example, do you have a RasPi?
        I've got the Pi3 (v8 with 64 bit) and I can install a ARMv6 compiled Raspbian, Michael did a test here maybe 2 years ago so you don't have to take my word for it.

        Edit: Sorry, the test was v7 vs 64-bit https://www.phoronix.com/review/raspberrypi-32bit-64bit but if you have a RasPi >= 3 or know someone who does, you can download the old Raspbian for v6 and run it.
        I didn't say v6 is not compatible with v7. You were claiming ARMv8 is compatible with ARMv5 and I said that's wrong. And it is wrong.

        You're right, so my claim that Apple's M1/2 is not standard ARMv8 will only hold true if they didn't implement A32 and T32 at EL0. Some guys on the net say no: https://news.ycombinator.com/item?id=27277351 but I can't find a definitive answer on Apple's dev sites.
        I won't dispute your argument that Apple M1/2 are not fully ARMv8 compliant as I'm not sure of two things: AArch32 is not available at EL0 (I don't think it's the case but I might be wrong), and I'm not sure Arm disallows the removal of AArch32 from ARMv8.

        Comment


        • Originally posted by qarium View Post
          dude you earned your fool status the hard way.
          credit is given to the people who earn it...
          I would refrain from calling people names on forums. Not only can you get banned, but it makes you look like you don't have a proper response. It also makes me look bad when you look bad commenting on me.
          No, I am 100% sure I do not want any mobile device like a smartphone, tablet or notebook with a fan.
          I have seen so many devices die because of dust alone; no thank you.
          When a device has dust it essentially has no fan. CPUs have been able to lower their speeds to keep themselves from burning up and destroying themselves for over a decade. The difference here is that Apple willingly doesn't include a fan, which causes throttling. If dust is bad because heat is bad, then no fan is stupid. Also, these aren't mobile phones or tablets.
          So really, no thank you... I do not want a mobile device with a fan.
          Keep in mind only the Macbook Air models are fanless. The rest like the M1 Pro and Max always come with a fan. Also, lots of laptop manufacturers choose to let the fans run slow despite temperature, so as to appeal to the consumer. Running 95C+ is not ideal, and the fan should run faster. I have an old Dell XPS M140. The GPU died because the fans are set up to run slow. This is also from when Nvidia put shit solder in their chips and you have to reflow the GPU to fix the issue. But the fans ran slow regardless of temp, and there is software called i8kutils, which is even on Linux, to control these fans because Dell is stupid.

          This is why I got big into cooling, because too often computers die from heat. Heat is the #1 killer of computers. It's not always the CPU or GPU either. You don't want anything to get hot enough to burn your finger. Running a CPU to 95C+ is not good, and it doesn't matter if Apple says it's OK. Every manufacturer says it's OK, and it's always wrong. The only difference here is that Apple purposely lets them get hot because Apple cares more about aesthetics than functionality.

          The fact that the M2 Macbook Air slows down means it needed a fan. This was even an issue on the M1 Airs, and people found creative ways to make it better.

          Dude, I do not buy these M1/M2 Apple devices, and the reason is the bad Linux support.

          But Apple can always hire people and fix the Linux drivers for this hardware.
          You live in some sort of fantasy if you think Apple will ever work on Linux. They have Mac OSX, and if you don't like Mac OSX then they'll fix it to make you like it. That is how Apple does things. Apple has never contributed to Linux in any sort of way. You see Apple on this list? I see Intel nearly on top, but no Apple. The reason Linux is being ported is because people want it and it pisses off Apple, just like it pissed off Sony to get to see Linux on the PS4 and we all know how well Sony likes Linux on their consoles. Historically Apple created its software from open-source projects, but Apple's developers rarely contribute much code back. Microsoft contributes more code to Linux than Apple ever has. Apple is a shit company.

          Last edited by Dukenukemx; 12 August 2022, 02:22 AM.

          Comment


          • Originally posted by Dukenukemx View Post
            The reason Linux is being ported is because people want it and it pisses off Apple, just like it pissed off Sony to get to see Linux on the PS4 and we all know how well Sony likes Linux on their consoles. Historically Apple created its software from open-source projects, but Apple's developers rarely contribute much code back. Microsoft contributes more code to Linux than Apple ever has. Apple is a shit company.
            If that really pissed Apple off, don't you think they would have done as Sony did with the PS3? Just lock the bootloader and the machine is tied to signed OS images. That doesn't mean they'll help porting Linux, but I don't think that irritates them.

            Regarding MBA and the lack of fan, a huge proportion of MBA users will never notice. They just browse, listen to music, write mails and so on. As a developer, a fanless Mac is not an option and I got myself an MBP M1; fans almost never turn on even when I compile large codebases. I nonetheless need those fans as I sometimes run heavy SIMD multi-threaded code and that pushes the CPUs enough that active cooling is needed.

            Don't forget, most people here are not average users and what we want is not what the vast majority of users want

            Comment


            • Originally posted by arQon View Post

              Then I phrased it poorly, sorry. What I was saying is that the vast majority of Mn laptops are not going to developers, they're going to users for whom a low-end laptop (not Chromebook-tier, but certainly not a 5950 with a 3080 and 64GB) would be exactly as viable other than in at best battery life, and that they're being bought not because their users have any need for that level of portable power, but because they're middle managers who "have to" have better equipment than the peons, that sort of thing. That is, that for the majority they're status symbols / rewards / etc first, and pieces of "necessary" technology either second or not at all.

              IOW, you may be one of the exceptions, but you *are* the exception, not the common case.
              I disagree.

              With Macbook Air you are right, with Macbook Pro you are dead wrong, that entire demographic is mainly creatives/programmers.

              If you go to a Facebook/Google campus or office you will see that 80%+ of programmers have a Mac. Even if you just look at a project like Homebrew (a macOS-specific package manager) and the ridiculous number of installs it has, this is quite obvious.

              Comment


              • Originally posted by ldesnogu View Post
                If that really pissed Apple off, don't you think they would have done as Sony did with the PS3? Just lock the bootloader and the machine is tied to signed OS images. That doesn't mean they'll help porting Linux, but I don't think that irritates them.

                Regarding MBA and the lack of fan, a huge proportion of MBA users will never notice. They just browse, listen to music, write mails and so on. As a developer, a fanless Mac is not an option and I got myself an MBP M1; fans almost never turn on even when I compile large codebases. I nonetheless need those fans as I sometimes run heavy SIMD multi-threaded code and that pushes the CPUs enough that active cooling is needed.

                Don't forget, most people here are not average users and what we want is not what the vast majority of users want
                Yeah this is quite hilarious considering that Apple intentionally left the bootloader for the new Macs completely open.

                Source: Asahi Linux, the guys porting Linux to M1/M2.

                I also have an M1 pro 14" and fans only go on when stressing CPU for around an hour. Also a developer

                Comment


                • Originally posted by mdedetrich View Post
                  I also have an M1 pro 14" and fans only go on when stressing CPU for around an hour. Also a developer
                  The SIMD code I ran made them turn on quickly (within one or two minutes IIRC). Turning on the 4 Neon units on the high perf cores is stressing the beast! That being said I never had any throttling, the fans do their job. In fact I find that machine too cold to put it on my lap.

                  Comment


                  • Originally posted by ldesnogu View Post
                    I won't dispute your argument that Apple M1/2 are not fully ARMv8 compliant as I'm not sure of two things: AArch32 is not available at EL0 (I don't think it's the case but I might be wrong), and I'm not sure Arm disallows the removal of AArch32 from ARMv8.
                    AArch32 on ARMv8 is in practice definitely optional (and by practice I mean outside of Apple's chips). Remember, we are talking about an ISA that historically was used in a cut-throat IoT/embedded industry where it's completely normal to save 50 cents per chip rather than put unnecessary silicon on it, since most users of ARM compile from source anyway.

                    I haven't read the developer docs in question, but in summary it's definitely optional: not only from the reality of how the chips are built, but also because pretty much all compilers treat it as optional. Most critically, it's nothing specific to Apple's ARM chips.

                    Comment


                    • Originally posted by ldesnogu View Post
                      The SIMD code I ran made them turn on quickly (within one or two minutes IIRC). Turning on the 4 Neon units on the high perf cores is stressing the beast! That being said I never had any throttling, the fans do their job. In fact I find that machine too cold to put it on my lap.
                      In my case they turned on when running JDK 8 under Rosetta (I couldn't get a native ARM version of JDK 8 working on the Mac).

                      Comment
