Apple M2 vs. AMD Rembrandt vs. Intel Alder Lake Linux Benchmarks


  • Originally posted by qarium View Post
    simple: the AMD 5nm CPUs will have a 6nm IO die
    Sure? I read that Phoenix is still monolithic.

    i say this: the nm node does not matter at all; all that matters are the products in the hands of the people.
    For the customer buying it, sure; but if you want to compare efficiency between architectures, it matters a lot.



    • Originally posted by Dukenukemx View Post
      AMD's mistake was the Bulldozer architecture, where their IPC was poor compared to Sandy Bridge. Sandy Bridge was such a huge leap in performance that it was basically 50% faster than the previous generation, including AMD's Bulldozer. That's also why x86 saw very little improvement for nearly 10 years: AMD couldn't just go out and make a new CPU architecture overnight to replace Bulldozer, and Intel had no reason to engineer a new, faster, better CPU. It took AMD until 2017 to release Ryzen, which was the performance equivalent of Haswell. Intel was busy convincing the world that 14nm was good enough for everyone.
      Believe it or not, Bulldozer was the last CPU with a 2D planar design, on a futuristic 34nm SOI node.

      At the same time Intel was at 45nm... yeah, you will shout "liar" at me; I know it was 45nm in transistor size, but because it was a 3D design, the density was more like 28nm, and later the same node reached 22nm-class density.

      AMD's 34nm SOI had smaller transistors and less power consumption per transistor, and it also had more potential for clock speed.

      Intel used its monopoly to sabotage AMD's FMA4 by making an incompatible FMA3, and Intel also sabotaged SSE4.0 by shipping no CPU with SSE4.0; instead they made incompatible CPUs with SSE4.1.

      They also used the 3D node to lower the price of their CPUs, meaning a bigger transistor count for less money.

      Technically, if you used FMA4, Bulldozer-era AMD CPUs were much, much faster than Intel...

      But Intel cheated its way out of that situation: FMA4 vs. the inferior FMA3, and SSE4.0 vs. the inferior SSE4.1...

      Also, low core counts favored Intel's hyperthreading; high core counts like 16-32 cores would have favored AMD's Bulldozer architecture... but AMD lost so fast that they never made dies bigger than 8 cores.

      In conclusion: Intel was not better, they just cheated their way out of the situation.
      Phantom circuit Sequence Reducer Dyslexia



      • Originally posted by mdedetrich View Post
        Your argument doesn't make any sense then; you were saying before that the Apple M1/M2 isn't for "real work" even though (at least in the laptop space) it's better at real work than its competitors. There are of course some exceptions, but you are giving the impression that the Apple M1/M2 is like a Chromebook or something.
        Then I phrased it poorly, sorry. What I was saying is that the vast majority of Mn laptops are not going to developers. They're going to users for whom a low-end laptop (not Chromebook-tier, but certainly not a 5950 with a 3080 and 64GB) would be exactly as viable, other than at best in battery life. They're being bought not because their users have any need for that level of portable power, but because they're middle managers who "have to" have better equipment than the peons, that sort of thing. That is, for the majority they're status symbols / rewards / etc. first, and pieces of "necessary" technology second or not at all.

        IOW, you may be one of the exceptions, but you *are* the exception, not the common case.



        • Originally posted by qarium View Post

          You will not be able to compare AMD's 5nm chips with the Apple SoC... why? Simple: the AMD 5nm CPUs will have a 6nm IO die... and then again you can claim it is not the ARM ISA, it is the 6nm node of the IO die.
          The IO die generally doesn't need to be made with the best manufacturing, since it isn't very demanding. The main reason AMD does this is to cut costs. The cores do need it, and by making the cores 5nm, AMD can produce many core chiplets and avoid defects that may occur. Basically, AMD's chiplet design is mostly about saving money. It can also result in better performance, since AMD can pick the best chiplets and send them to servers or Threadrippers.
          I say this: the nm node does not matter at all; all that matters are the products in the hands of the people.
          We'll find out later this year.



          • Originally posted by Dukenukemx View Post
            Just because it doesn't have a fan doesn't mean it won't collect dust. Just getting the heatsink hot is enough to create circulation. Yes, no fan means less dust over time, but it also means you're heat-cycling the hardware more often, which means it'll fail sooner than the fan-equipped MacBooks. This is Apple we're talking about, where the earlier M1 models were writing frequently to the SSD and wearing it out, and where USB-C PD hubs will kill them. You know why I know Apple products are bad? Because I repair them. The same reason a mechanic will tell you to avoid BMWs.
            You are not the only one in the world who repairs computers and supports people with them.
            I had a company in this field for 8 years, and the #1 problem people faced was in fact dust, or even worse, tobacco smoke + dust.

            Some of them, by pure luck, bought laptops from a professional line; they were the only ones who could successfully fix this problem with a single tool: open it up, clean the fins of the cooler, close it again, done.

            Most of them bought non-professional laptops whose design in fact sabotaged cleaning the cooler.
            The dust-only people could regenerate the cooler with compressed air, but that only worked 1-2 times, because the dust got blown into corners from which it never left the laptop.

            The tobacco+dust people were not so lucky: compressed air does not help at all, and only liquid electronics cleaner helps... but that comes with high risk. I know people who lost the device this way, plain and simple, because the liquid got into the monitor, and then the dirt was there and no one could remove it.

            So believe it or not, dust and tobacco smoke + dust are the #1 reason why people lose their devices.

            This means if I were to buy a laptop, I would of course choose the passively cooled version, because it in fact removes the #1 problem in laptops.

            Of course this is only a laptop problem, because my PC is easy to clean; for a PC there is no dust problem.

            My smartphone is also passively cooled, and that device never has any dust problem.

            You claim you cannot fix Apple notebooks, but Apple removed the #1 reason why people lose their device over time: dust from the active cooling airflow...

            Why do you want to fix stuff if Apple builds stuff you do not NEED to fix?

            Originally posted by Dukenukemx View Post
            They disabled it in 120fps mode, and so does the PS5. Gotta pay attention.
            God, 16GB of RAM is not 16GB of textures. There's the OS, background programs, and of course the game itself. The 6GB on the 1060 is just for textures.
            Please pay attention, as Gamers Nexus even showed you that they matched the PS5's settings, and demonstrated it. Don't tell me it isn't the same when he clearly showed it was.
            Of course we know what operating systems need in RAM: Fedora 36 sits at 1.5GB and Windows 11 at 2.5GB. The PS5 OS uses less RAM, but that does not matter.
            Most game engines and games use 1-2GB for the game itself.
            This means you have 16 − 1.5 − 2 = 12.5GB for textures...
            Your 1060 is at 6GB... this means the visual result is not the same; we all know how a game looks when the VRAM is not enough: you walk into a new area and placeholder textures show up until the right textures are loaded.
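That back-of-the-envelope memory budget can be written out explicitly. A minimal sketch in Python, using the thread's own rough figures (the OS and engine sizes are the posters' estimates, not measured values):

```python
# Rough unified-memory budget for a 16 GB console.
def texture_budget(total_gb, os_gb, engine_gb):
    """Memory left for textures/assets after OS and engine reservations."""
    return total_gb - os_gb - engine_gb

# Optimistic figures from the post above: 16 - 1.5 - 2 = 12.5 GB.
print(texture_budget(16, 1.5, 2))   # 12.5
# More conservative figures (PS4-like 2.5 GB OS, Elden-Ring-like 4 GB engine).
print(texture_budget(16, 2.5, 4))   # 9.5
```

Either way the remainder is well above a 6 GB card, which is the point being argued.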

            "pay attention"

            Of course I pay attention: people play this game with raytracing. And I told you, most games wait until AMD FSR 1.0 and FSR 2.0 are ready. In this game they use raytracing in cutscenes to avoid the massive FPS drop during gameplay. But in the future, with FSR 2.0, games will include raytracing to make playable framerates possible.

            If you want to stay on outdated technology only because you want to save money, sure, you can do that.

            But blaming others for doing the same is: insanity...


            Originally posted by Dukenukemx View Post
            Firstly, you pointed out my system's age without thinking about your system's age, because you didn't have foresight. That's a common theme with you. Secondly, yes, your 1920X is faster than my 2700X in multithreaded workloads. My 2700X, with higher IPC and clock speeds, will be faster in games, because games generally only care about IPC. You painted yourself into this corner and you ain't coming out. Stop ad hominem'ing people because you don't have a good argument.
            I am sure my system will age much better than your system.
            You claim games only want IPC and clock speed, but the recent past with Metal/Vulkan shows that game engines over the last 10 years have learned how to multithread...
            This means future games will have much better multithreading support, and my CPU will shine while you go and buy a new CPU because the game demands more cores.
            I also told you I do not only game on my PC.
            I am 100% sure that my system (Threadripper 1920X + Vega 64 + 128GB ECC RAM) will age much better for my needs than your system.
            If I remember correctly, you cannot even upgrade your AM4 system to a newer class of AMD CPUs, because your 3xx chipset is outdated and all the modern CPUs from the 3950X to the 5950X need a 5xx or 4xx chipset.

            I will upgrade my system to a 2970WX or 2990WX... now you'll say "bad CPU for gaming", but as I told you, future games will be massively multithreaded... bad for old games, but good for future games.


            Originally posted by Dukenukemx View Post
            You mined on 6 Vega 64 cards and you complained about the price of electricity in Germany? No wonder your rig cost 8000€.
            It cost 8000€ because it had 6 Vega 64s... and it was not 1 rig, it was 2 rigs, i.e. 4000€ per rig...

            Also, the price of electricity in Germany is only a problem if you are a consumer, not if you have 2 power plants for your own needs; I produce my own energy.

            Originally posted by Dukenukemx View Post
            Good for you. I'll get the better frame rate in games as my system was built for gaming, because core count isn't everything.
            DirectX 11 was single-core technology... DX12 and Vulkan learned how to multithread...

            Your rig is only faster in games from the past, and maybe games of today (that's not certain),
            but my rig is future-proof, because future games will learn to perform better on many cores.

            I know games, for example Arma 3: if you add 5000 AI players, it will load a 32-core CPU on all threads...
            and Arma 3 is not even a DirectX 12 game, it is a DX11 game... but the AI of the NPCs consumes all the cores you have.

            Also, according to UserBenchmark, your 2700X is only 7% faster in single-thread... meaning games only need minimal multicore optimization and then my system is faster.

            +47% in multicore is better than +7% in single-core, because you can optimize software for multicore.
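The +47% multicore vs. +7% single-core trade-off can be made concrete with an Amdahl's-law-style estimate. A small sketch (the percentages are the UserBenchmark figures quoted above; the parallel fractions are illustrative assumptions):

```python
def speedup(parallel_fraction, multicore_gain, singlecore_gain=1.0):
    """Overall speedup vs. a baseline chip when a fraction of the workload
    scales with multicore throughput and the rest with single-thread
    throughput (Amdahl's-law style)."""
    serial_time = (1 - parallel_fraction) / singlecore_gain
    parallel_time = parallel_fraction / multicore_gain
    return 1 / (serial_time + parallel_time)

# Barely-threaded game (10% parallel): the +7% single-core chip is ahead.
print(round(speedup(0.1, 1.0, 1.07), 3))   # 1.063
print(round(speedup(0.1, 1.47, 1.0), 3))   # 1.033
# Well-threaded engine (80% parallel): the +47% multicore chip wins big.
print(round(speedup(0.8, 1.47, 1.0), 3))   # 1.344
print(round(speedup(0.8, 1.0, 1.07), 3))   # 1.013
```

With gains this lopsided, the crossover sits at a fairly small parallel fraction, so the argument hinges entirely on how well future engines actually thread.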





            • Originally posted by qarium View Post

              This is plain and simple wrong. Modern nodes do not make the transistors smaller; instead they build 3D structures to put more transistors on the same 2D area, meaning the density goes up without the transistors getting smaller.
              That sounds like 3D stacking.
              If you want a classic 2D planar transistor node, you maybe get 28nm or 22nm, or at the very best 14nm, or if you are lucky, 12nm...

              Intel's 10nm node is so bad because Intel started the 3D stuff a long time ago; it was AMD's Bulldozer on the 34nm node that used 2D planar transistors... and at the same time Intel only had 45nm in 2D terms, but in the 3D world it had the density of a 28/22nm node.

              From that time on, over 10 years ago, all high-tech nodes DID NOT MAKE THE TRANSISTORS SMALLER; instead they built 3D structures.

              Now let's take the most extreme variant known today in 2022: the IBM 2nm node... if you put it under a microscope, no element is smaller than 6-7nm if you measure it with classic 2D logic...
              So how do they get 2nm out of 7nm structures? Simple: it has three 3D layers; every "transistor" is 3 transistors stacked over each other.

              This 3D-stacked transistor stuff is very complicated and very expensive; that's why projects like Libre-SOC, for example, more likely aim for classic 2D nodes at 12nm/14nm/22nm/28nm...

              These 3D-stacked high-density nodes have a problem with dark silicon...
              https://en.wikipedia.org/wiki/Dark_silicon

              In the classic 2D design world there were chips without any dark silicon...

              But the more 3D the nodes go, the more dark silicon there is, meaning you cannot fire up all the transistors at the same time in the same area, because the resulting current would destroy the chip.

              Because of this dark silicon problem, it is very expensive to develop for these 3D high-tech nodes like 2nm, 3nm, 4nm or 5nm...

              And we will see many projects like Libre-SOC stay on these outdated 2D nodes, because they avoid the cost-explosion disaster of this dark silicon problem...
              I'm not gonna pretend I fully understand what goes on in chip manufacturing, but this is the first I've ever heard of this. As far as I know, a lower nm means a smaller transistor. There are many ways to measure the size of a transistor, though, and going from 14nm to 7nm also doesn't mean you get a 50% smaller chip. Going by TSMC, they said the 3nm process can reduce power consumption by up to 45%, improve performance by 23%, and reduce area by 16% compared to 5nm. They said this, which is probably why Apple and Intel bought a ton of 3nm capacity for the next couple of years.

              Though now Intel may drop 3nm due to delays. It seems TSMC's 3nm is yet again delayed, and Intel can't afford to wait. The article did say that Meteor Lake without 3nm would be slower, but this also means no Apple Silicon running on 3nm next year. Intel has decided to go with TSMC's 5nm instead. This might also explain why AMD had no plans to use 3nm next year, and instead stuck with 5nm and 4nm.
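Those TSMC 3nm-vs-5nm figures compound if you chain node hops. A quick illustrative calculation (the per-hop ratios are TSMC's quoted numbers; treating them as uniform per-generation factors is a simplifying assumption):

```python
def compound(per_generation_ratio, generations):
    """Compound a per-node-hop scaling ratio over several hops."""
    return per_generation_ratio ** generations

# Area: two -16% hops keep ~71% of the original area.
print(compound(0.84, 2))   # ~0.71
# Power: two -45% hops keep ~30% of the original power.
print(compound(0.55, 2))   # ~0.30
# Performance: two +23% hops give ~1.51x.
print(compound(1.23, 2))   # ~1.51
```

Note how far this is from the naive "14nm to 7nm = half the chip" intuition: the quoted area gain per hop is 16%, not 50%.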





              • Originally posted by Dukenukemx View Post
                Careful, your ad hominem is showing.
                Dude, you earned your fool status the hard way.
                Credit is given to the people who earn it...

                Originally posted by Dukenukemx View Post
                It lost in most cases to the Ryzen 7 6850U, and sometimes to Intel; in some cases very badly. So I'm sure you want it with a fan. You know, you could attribute that to the fact that this is a Linux distro in its early stages.
                No, I am 100% sure I do not want any mobile device (smartphone, tablet, or notebook) with a fan.
                I have seen so many devices go down because of dust alone; no thank you.
                With my PC I do not have a dust problem; it is easy to clean.

                I had a company in this field for 8 years, and the #1 defect in all notebooks was caused by dust and tobacco smoke + dust alone.

                So really, no thank you... I do not want a mobile device with a fan.

                Originally posted by Dukenukemx View Post
                By the time Linux on M1/M2 gets 3D acceleration working, AMD and Intel will have faster and more efficient CPUs out.
                Linux guys, sure, but not the majority of people. We have trouble getting x86 guys onto Linux.
                Dude, I do not buy these M1/M2 Apple devices, and the reason is the bad Linux support.

                But Apple can always hire people and fix the Linux drivers for this hardware.



                • Originally posted by Anux View Post
                  Sure? I read that Phoenix is still monolithic.
                  For the customer buying it, sure; but if you want to compare efficiency between architectures, it matters a lot.
                  CPUs like the 5950X-class parts will have two 5nm chiplets and one 6nm IO die; the GPU will also be in the IO die.
                  AMD will bring these 5nm desktop-class chips to notebooks/laptops too.

                  For notebooks there will also be monolithic versions, but then you maybe only get 8 cores; the 16-core version will have two 5nm chiplets and one 6nm die.



                  • Originally posted by Dukenukemx View Post
                    The IO die generally doesn't need to be made with the best manufacturing since it isn't very demanding.
                    And again, WRONG... this IO die is very demanding, and why? Simple: the GPU is inside the IO die.
                    That's why they use 6nm instead of 14/12nm...

                    You're thinking of the old days of the 3950X and 5950X; back then there was no GPU inside the IO die...

                    Originally posted by Dukenukemx View Post
                    The main reason AMD does this is to cut costs.
                    Sure, but maybe that is not the whole story, because the analog parts of the IO chip work better on larger nodes...

                    Originally posted by Dukenukemx View Post
                    The cores do need it, and by making the cores 5nm it allows AMD to produce many chiplet cores and avoid defects that may occur. Basically AMD's chiplet design is mostly to save money. It can result in better performance since AMD can pick the best ones and send them to servers or Threadrippers.
                    We'll find out later this year.
                    Modern EUV and high-NA EUV nodes have a limit on how big a single die can be; in the past these limits were much larger. This means it is sometimes maybe not about saving money: it could be that they simply have limits on how big a single chip can be, because the ASML EUV machines (and high-NA EUV even more so) limit how big a chip they can print.
                    This limitation forces you into a chiplet design; you cannot choose to put 64 cores and cache and everything in one CPU die...
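Both the cost argument and the size argument above come down to yield. The standard Poisson die-yield model shows why several small chiplets salvage more good silicon than one huge die; a minimal sketch (the defect density and die areas are illustrative assumptions, not AMD's numbers):

```python
import math

def poisson_yield(area_cm2, defects_per_cm2):
    """Fraction of dies that come out defect-free (Poisson yield model)."""
    return math.exp(-area_cm2 * defects_per_cm2)

D0 = 0.1  # assumed defect density, defects per cm^2

# One big 8 cm^2 monolithic die vs. eight 1 cm^2 chiplets:
print(poisson_yield(8.0, D0))   # ~0.45 of the big dies are good
print(poisson_yield(1.0, D0))   # ~0.90 of each small chiplet is good
```

Yield falls exponentially with die area, so even before any reticle limit kicks in, chiplets win on cost per good core.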




                    • Originally posted by qarium View Post
                      The tobacco+dust people were not so lucky: compressed air does not help at all, and only liquid electronics cleaner helps... but that comes with high risk. I know people who lost the device this way, because the liquid got into the monitor and the dirt could not be removed.

                      So believe it or not, dust and tobacco smoke + dust are the #1 reason why people lose their devices.
                      Did you know Apple will void your warranty if you smoke? I don't blame them, because smoker's dust is awful to clean. I have to literally wash the motherboard with soap and water. You can't blow it clean.
                      My smartphone is also passively cooled, and that device never has any dust problem.
                      You think so, but that's only until you actually run a game on it.

                      You claim you cannot fix Apple notebooks, but Apple removed the #1 reason why people lose their device over time: dust from the active cooling airflow...

                      Why do you want to fix stuff if Apple builds stuff you do not NEED to fix?
                      I didn't say I can't fix Apple products, and there are a lot of stupid things that go wrong in Apple products. I don't do board repairs like Louis Rossmann, though I would do it if I knew which component failed. Most of the time it's a broken screen or a bad battery, but there are times when I gotta ask, "WTF, Apple?" Usually that results in me having to find a new motherboard, which usually ends up costing so much that the customer doesn't want to fix it. No liquid damage, just a blown motherboard. This is why I watch a lot of Louis Rossmann videos, so I can try to diagnose a bad board and fix it.

                      That's not to say that Apple is the only bad manufacturer. HP/Compaq laptops had a number of bad solder joints in the early 2010s, just like the Xbox 360. Lenovo recently had a slew of problems that resulted in dead motherboards in their laptops. My guess would be a lack of cooling, as the VRMs on those boards have only a thin plate of aluminum to draw heat.
                      Of course we know what operating systems need in RAM: Fedora 36 sits at 1.5GB and Windows 11 at 2.5GB. The PS5 OS uses less RAM, but that does not matter.
                      Most game engines and games use 1-2GB for the game itself.
                      What games have you been playing? Cyberpunk 2077 eats some serious RAM. It depends on the game; some don't and some do. Elden Ring uses 4GB just loading up, though it recommends 12GB of system memory.
                      This means you have 16 − 1.5 − 2 = 12.5GB for textures...
                      The PS4 OS reserved 2.5GB of RAM, so it's probably safe to assume the PS5 will also reserve at least 2.5GB for the OS. Also, the PS5 uses 1GB of RAM for 4K textures just for the UI, and it reserves another 1GB of RAM for apps. Whatever is left over for game textures isn't 12.5GB. Assuming 16GB minus roughly 4GB of OS + reserves, that results in 12GB for the game. If you were playing Elden Ring, then 4GB is used for the game engine, leaving 8GB for textures.
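Tallying that breakdown explicitly (all figures are this post's estimates; the OS reservation is assumed PS4-like):

```python
# Itemized PS5-style memory reservations quoted above (GB, estimates).
total = 16.0
os_and_reserves = 2.5 + 1.0 + 1.0   # OS + 4K UI textures + reserved apps
engine = 4.0                         # e.g. Elden Ring's reported engine usage

for_game = total - os_and_reserves   # 11.5 GB (the post rounds reserves to ~4 GB)
for_textures = for_game - engine     # ~7.5-8 GB actually left for textures
print(for_game, for_textures)
```

The itemized reserves sum to 4.5 GB, not 4 GB, so the texture budget lands closer to 7.5 GB than 8 GB — either way well below the 12.5 GB claimed earlier in the thread.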

                      The Xbox Series X, which is arguably more powerful, has 10GB of RAM at high speed and 3.5GB at standard speed. That most likely means 10GB is reserved for textures, and Microsoft also does a better job of freeing up RAM than Sony.

                      Your 1060 is at 6GB... this means the visual result is not the same; we all know how a game looks when the VRAM is not enough: you walk into a new area and placeholder textures show up until the right textures are loaded.
                      Also remember that the CPU and GPU have to fight over that RAM and bandwidth, which means it's even slower. This is also the same problem with Apple Silicon, and probably why it isn't at RTX 3090 levels of performance despite the insane amount of bandwidth.
                      If you want to stay on outdated technology only because you want to save money, sure, you can do that.
                      You have the same GPU as me, but with 8 more compute units. Also, what makes you think I'm using a Vega 56 forever? I'm waiting for the GPU market to crash to pick up a 6600 XT.
                      I am sure my system will age much better than your system.
                      No it won't. Your GPU is faster, but that won't make a difference in gaming. The CPU you have is first-gen Ryzen, while I have second gen, known as Zen+. How's your RAM speed? I have 3000MHz RAM, and I tweaked the memory settings in the BIOS. I also have a Ryzen 7 1700, which is first gen, and getting 3000MHz RAM to work on it is difficult; much worse if you have 4 sticks of RAM. The 2700X has only 2 sticks, which makes reaching 3000MHz easy. The 1700 I have is overclocked to 3.8GHz, but its RAM runs at 2600MHz. I also put a huge copper heatsink on the VRMs to keep them extra cool.


                      You claim games only want IPC and clock speed, but the recent past with Metal/Vulkan shows that game engines over the last 10 years have learned how to multithread...
                      This means future games will have much better multithreading support, and my CPU will shine while you go and buy a new CPU because the game demands more cores.
                      Games now use as many as 6 cores, and even then that's not strictly needed. Most games will run fine on 4 cores. My 8-core is overkill. Your 12-core is super overkill.
                      I am 100% sure that my system (Threadripper 1920X + Vega 64 + 128GB ECC RAM) will age much better for my needs than your system.
                      You're using ECC, which is slower. Good for servers, not so much for gaming. Also, I doubt you have 2 sticks of 64GB ECC RAM. What clock speed do you even run the RAM at?
                      If I remember correctly, you cannot even upgrade your AM4 system to a newer class of AMD CPUs, because your 3xx chipset is outdated and all the modern CPUs from the 3950X to the 5950X need a 5xx or 4xx chipset.
                      Who cares? My motherboard was only $50, remember? B550 motherboards are cheap and plentiful, but I have no intention of upgrading anytime soon. An ASRock B550 Phantom Gaming is $95, and that's at current pricing. Besides, AMD is about to release new CPUs, which means AM5 is the future. Patience is a virtue.
                      DirectX 11 was single-core technology... DX12 and Vulkan learned how to multithread...
                      Doesn't matter to me since I use Linux Mint. Which means my games run on top of Vulkan through DXVK.
                      Last edited by Dukenukemx; 12 August 2022, 12:14 AM.

