
Apple M2 vs. AMD Rembrandt vs. Intel Alder Lake Linux Benchmarks


  • Originally posted by Dukenukemx View Post
    I keep linking Louis Rossmann, who lost his mind over how Apple didn't put a fan on the CPU but at the other end of the laptop, essentially making the fan useless. Not an M1 or M2, but an Intel laptop. Apple hasn't been cooling their Intel chips properly for a while now, and it ended up burning out a number of them.
    The latest generations of Intel MacBooks were trash and everybody knew that. It's also completely irrelevant, because the M1/M2 is *not* an Intel MacBook.

    Originally posted by Dukenukemx View Post
    No it isn't, and none of you pro-Apple supporters look at benchmarks. You all take this rhetoric and run away with it without knowing anything. The M1 uses the same amount of power in a multithreaded workload as competitive x86 laptops, and is only more efficient in single-threaded workloads. Granted, it's 4x more efficient in single-threaded, but still. If you include the GPU for gaming then it's horrible. People have posted videos here showing this.
    I will link this Techspot article to both you and Anux because it's relevant:
    https://www.techspot.com/review/2499-apple-m2/


    The test there shows an M2 MBP which has a fan, and the comparison against the 6800U is quite revealing. 6800U configured to 15W TDP consumes about as much power as M2 MBP but the performance lags behind. When boosted to 25W TDP, the performance is mostly on par but it draws much more power, peaking at 37 W. It also seems that an actively cooled M2 can keep running at full throttle all day long; that's something x86 chips have not been capable of for years unless you put a heatsink from a nuclear reactor on top of them.



    • At the end of the day, Intel/AMD won't have anything to compete with Apple Silicon until 2024. Intel will compete with Lunar Lake. AMD also announced they'll release ultra-low-power performance CPUs in 2024 as well, while Qualcomm's Nuvia-based chips are due in 2023.

      You can hate on MacBooks all you want, but there's no denying the screens are a big part of what makes them expensive. Can someone show me a competing laptop, not sold by Apple, with 2000+ local dimming zones? The best I could find was 512 local dimming zones on an Asus.



      • Originally posted by mdedetrich View Post
        This already exists with TSMC 5nm. We have the M1 which uses TSMC 5nm and AMD's Zen3+ which is TSMC 5nm, which laptops are starting to use.
        AMD desktop chips use 7nm while their new Zen3-based laptop SoCs are 6nm. Soon we'll see AMD desktops using Zen4 on 5nm. Currently only Apple is on 5nm.
        But why, this isn't going to change anything?
        Because that's how we've made chips faster and more efficient: by making the transistors smaller.
        I mean unless you are expecting Intel's node to leapfrog TSMC's (which is not likely for the near future) it's going to be the same story. And if this does magically happen, it's because of Intel's better node technology and not the ISA.
        Intel is no longer exclusively using their own manufacturing. Intel bought a crap ton of 3nm capacity from TSMC and will be releasing 3nm-based chips by next year, just like Apple. In fact Intel has even bought a place in line for IBM's 2nm.
        In fact you can argue that historically the reason why x86 was "better" is not the ISA but the fact that Intel just brute-forced performance with far superior node manufacturing compared to its competitors.
        x86 got faster for many reasons historically. When Intel released the Pentium Pro they introduced out-of-order execution along with branch prediction. AMD created x86-64, which brought x86 to 64-bit. There's a laundry list of improvements, including the fact that it's no longer truly CISC internally.
        (this is also evidenced by the fact that AMD for most of its history had terrible performance with x86 due to its deficiencies in node technology).
        The Athlon XP and Athlon 64 were well ahead of Intel for nearly a decade, and that was because the Pentium 4 traded instructions per clock for higher clock speeds. AMD's real mistake was the Bulldozer architecture, whose IPC was terrible compared to Sandy Bridge; Sandy Bridge was such a huge leap that it was roughly 50% faster than Intel's previous generation, let alone Bulldozer. That's also why x86 saw so little improvement for nearly 10 years: AMD couldn't just replace Bulldozer with a new architecture overnight, and Intel had no reason to engineer anything faster. It took AMD until 2017 to release Ryzen, which was roughly the performance equivalent of Haswell, while Intel was busy convincing the world that 14nm was good enough for everyone.
        You can do that right now with Asahi Linux. You can install it on an existing Mac, and at least if you don't care about GPU performance, the CPU side of things is what you would expect from the M1 (in fact, in some cases programs running on Asahi Linux on the M1 are faster than macOS on the M1). So if anything this is widening the gap, not making it smaller as you are hoping.
        Without GPU acceleration you'll use more battery and lose performance. Try benchmarking video encoding with the GPU and see how well Asahi Linux on the M1 performs compared to macOS or x86/Linux.
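        For reference, a rough way to set up that comparison on the Linux side is sketched below. This is a minimal sketch, not anything from the thread: it assumes ffmpeg is installed, that "input.mp4" is whatever sample clip you pick, and that /dev/dri/renderD128 is a VAAPI-capable GPU on the x86 box. On an Asahi Linux M1 the hardware video encoder isn't exposed, so only the software run is meaningful there.

```python
# Rough timing harness for a software vs. GPU video-encode comparison.
# Assumptions (not from the thread): ffmpeg is installed, "input.mp4" is a
# sample clip, and /dev/dri/renderD128 is a VAAPI-capable GPU (x86 box only).
import subprocess
import time

CLIP = "input.mp4"  # hypothetical sample clip

RUNS = {
    "software encode (libx264)": [
        "ffmpeg", "-y", "-i", CLIP,
        "-c:v", "libx264", "-preset", "medium",
        "-f", "null", "/dev/null",
    ],
    "GPU encode (VAAPI, x86 only)": [
        "ffmpeg", "-y", "-vaapi_device", "/dev/dri/renderD128", "-i", CLIP,
        "-vf", "format=nv12,hwupload", "-c:v", "h264_vaapi",
        "-f", "null", "/dev/null",
    ],
}

for name, cmd in RUNS.items():
    start = time.perf_counter()
    result = subprocess.run(cmd, capture_output=True)
    elapsed = time.perf_counter() - start
    status = "ok" if result.returncode == 0 else "failed (encoder not available?)"
    print(f"{name}: {elapsed:.1f}s ({status})")
```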



        • Originally posted by WannaBeOCer View Post
          At the end of the day, Intel/AMD won't have anything to compete with Apple Silicon until 2024. Intel will compete with Lunar Lake. AMD also announced they'll release ultra-low-power performance CPUs in 2024 as well, while Qualcomm's Nuvia-based chips are due in 2023.
          Apple's stay of execution only lasts until next year. AMD will release Zen4 on 5nm this year. Next year AMD will release Zen4 on 4nm for laptops, and Intel will release chips on 3nm, just like Apple.
          You can hate on MacBooks all you want, but there's no denying the screens are a big part of what makes them expensive. Can someone show me a competing laptop, not sold by Apple, with 2000+ local dimming zones? The best I could find was 512 local dimming zones on an Asus.
          I thought it was this reason?



          • Originally posted by Dukenukemx View Post
            Same people who complain when the passively cooled product loses and then ask for it to be benchmarked against the actively cooled product. You didn't want a fan, you get fanless results.
            You claim this: "I build and repair PC's so I'm gonna know"
            Then you know the fanless results will be the same 5 years from now...
            At the same time, the actively cooled product will suck in so much dust that it will be slower in 5 years.
            And dust is the best-case scenario, because what if the person who buys this laptop is a smoker?
            Then it's not only dust inside the actively cooled product...

            I do have an actively cooled PC, but it is easy to open and clean.
            But for mobile, meaning a smartphone or notebook, I would buy the passively cooled version because of the dust problem.

            Why would you want to repair something that would never have broken in the first place because you chose the passively cooled version?

            Originally posted by Dukenukemx View Post
            Oh yeah, hey guys, check it out: a GTX 1060 6GB is as fast or faster than the PS5, so says Tech Jesus.
            Gamers Nexus said they matched the features of the PS5. You would know this if you watched the video. Also, Apple doesn't even have ray tracing, so why do you care?
            I did watch the video, and Gamers Nexus did in fact disable ray tracing.
            And a 6GB VRAM card cannot hold the same amount of textures as a PS5 or an Apple M1/M2; with shared memory they can hold up to 16GB of textures...
            As soon as you send that many textures to your GTX 1060, the FPS will drop massively.
            You only get higher FPS on this old card because the result on the screen is not the same.

            Originally posted by Dukenukemx View Post
            I have a Vega 56 and you have a Vega 64, so we have GPUs of the same age. You also have a Threadripper 1920X, which I'm sure is even older than my 2700X. At least be self-aware before you throw rocks from a glass house.
            I build and repair PCs, so I know some tricks for building a capable cheap machine. Also, just a reminder: your Threadripper 1920X came out in 2017 while my 2700X came out in 2018, so you're the one running the older hardware.
            You sent me a video on how to build a system for 1000 dollars that runs for at least 8 years to save a lot of money... Well, my 1920X may be older than your 2700X, but who cares about 2017 vs 2018; in the end it is 100% clear that the 1920X is faster than the 2700X...
            It is not about running older or newer hardware, it is about running faster hardware.

            Based on 169,517 user benchmarks for the AMD Ryzen 7 2700X and the Ryzen TR 1920X, we rank them both on effective speed and value for money against the best 1,442 CPUs.


            The 1920X is 46% faster in heavily multithreaded workloads; see "OC Multi Core Mixed Speed".

            You see, Gentoo Linux is happy with my rig...

            Originally posted by Dukenukemx View Post
            It's very simple: you just wait for all the idiots who bought GPUs to mine crypto to sell them after a crash, and pick up the hardware for cheap. I did it in 2014 when it crashed the first time, and I did it in 2018 when it crashed the second time. I picked up so many RX 470s for cheap and put them into rigs for $100. Guess what time it is soon? Crypto market crash time, but it hasn't fully crashed yet. As of now an RTX 3060 can be had off eBay for a little over $300. Not there yet, and not that I would buy an Nvidia, but I use it as a benchmark. An RX 6600 XT is even less money, like sub-$300.
            Don't be ashamed of paying 8000€ for that machine if you really needed that much compute power. If you only play games on it, then you brought a chain gun to a duck hunt.
            I did a different variant than you: I bought the 2 workstations to run as desktops and at the same time mined Monero and Ethereum with them. Then, after the crypto crash, I sold 4 of the 6 Vega 64 cards I had from the start, and now I only run it as a desktop without mining.

            Be sure the result is the same as yours, with the difference that I made some money and now I have better hardware to use as a desktop.



            • Originally posted by drakonas777 View Post

              Oh, and as for "hot and loud" x86 notebooks: for starters, the factory system/SoC/CPU power settings could stop being the idiotic garbage they usually are now, which contributes to the myth that x86 has garbage efficiency. The same goes for desktop parts. On the desktop at least you can control it, while some notebooks don't allow fine-grained power control. For example, I personally would always limit power to the lowest TDP-down value. The performance drops are negligible (on Ryzen at least, ha ha), but the power savings are phenomenal. I switched my 3700X to 45W and it's cool and quiet, much more so than the RPi4 sitting next to it in a DeskPi Pro case with an active cooler, LOL
              Yeah, and even where you can control these power settings on x86 notebooks, it doesn't solve the underlying problem; all you are doing is adjusting turbo/TDP/peak values, where you gain one thing at the cost of losing another. If you are implying that you can tweak current x86 laptop CPUs to be as efficient as the M1/M2, you are dead wrong.
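              For what it's worth, claims like "45W and cool" are easy to check on Linux. Below is a minimal sketch (not from this thread) that estimates average and peak CPU package power by sampling the RAPL energy counter through the powercap sysfs; it assumes /sys/class/powercap/intel-rapl:0/energy_uj is present, which recent kernels also expose on AMD Zen 2 and newer, and reading it may require root.

```python
# Minimal sketch: sample the RAPL package energy counter via the Linux
# powercap sysfs and report average and peak package power over a window.
# Assumes /sys/class/powercap/intel-rapl:0/energy_uj exists; may need root.
import time

ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"
MAX_RANGE = "/sys/class/powercap/intel-rapl:0/max_energy_range_uj"

def read_uj(path: str) -> int:
    with open(path) as f:
        return int(f.read())

def sample_power(duration_s: float = 30.0, interval_s: float = 0.5):
    """Return (average_watts, peak_watts) over the sampling window."""
    samples = []
    prev = read_uj(ENERGY)
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        time.sleep(interval_s)
        cur = read_uj(ENERGY)
        delta = cur - prev
        if delta < 0:                               # energy counter wrapped around
            delta += read_uj(MAX_RANGE)
        samples.append(delta / 1e6 / interval_s)    # microjoules -> watts
        prev = cur
    return sum(samples) / len(samples), max(samples)

if __name__ == "__main__":
    avg, peak = sample_power()
    print(f"average package power: {avg:.1f} W, peak: {peak:.1f} W")
```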



              • Originally posted by MadCatX View Post
                I will link this Techspot article to both you and Anux because it's relevant:
                https://www.techspot.com/review/2499-apple-m2/


                The test there shows an M2 MBP which has a fan, and the comparison against the 6800U is quite revealing. 6800U configured to 15W TDP consumes about as much power as M2 MBP but the performance lags behind. When boosted to 25W TDP, the performance is mostly on par but it draws much more power, peaking at 37 W. It also seems that an actively cooled M2 can keep running at full throttle all day long; that's something x86 chips have not been capable of for years unless you put a heatsink from a nuclear reactor on top of them.
                In 15W mode the 6800U uses less power than the M2; by less I mean 4W less, which isn't huge. In 25W mode it uses more; by more I mean 6W more, which again isn't huge.


                In terms of performance, the 6800U in 15W mode scores 8264 in Cinebench multithread while the M2 scores 8706. The 6800U in 25W mode scores 9284.


                In Handbrake the M2 is slower regardless of which power mode the 6800U is in.


                When it comes to gaming, the M2 consumes the same amount of power as the 6800U. In this case that means Tomb Raider, because it's apparently the only game that runs on the M series.


                The only thing the M2 has going for it is single-threaded power consumption, where it uses almost 1/3 the power of the 6800U. You aren't winning anything in battery life unless your workload doesn't involve much productivity work, or you do video rendering, as the M2's video encoder is very capable. Learn to read benchmarks.
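                For perspective, here is a quick back-of-the-envelope check of how big those multi-thread gaps actually are, using only the Cinebench scores quoted above; everything else is plain arithmetic.

```python
# Relative gaps between the Cinebench multi-thread scores quoted above
# (M2 MacBook Pro vs. Ryzen 7 6800U at two power limits).
scores = {
    "M2 (MacBook Pro, active cooling)": 8706,
    "Ryzen 7 6800U @ 15W": 8264,
    "Ryzen 7 6800U @ 25W": 9284,
}

baseline = scores["M2 (MacBook Pro, active cooling)"]
for name, score in scores.items():
    delta = (score / baseline - 1) * 100
    print(f"{name}: {score} pts ({delta:+.1f}% vs. M2)")

# Prints approximately:
#   M2 (MacBook Pro, active cooling): 8706 pts (+0.0% vs. M2)
#   Ryzen 7 6800U @ 15W: 8264 pts (-5.1% vs. M2)
#   Ryzen 7 6800U @ 25W: 9284 pts (+6.6% vs. M2)
```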



                • Originally posted by qarium View Post

                  You claim this: "I build and repair PC's so I'm gonna know"
                  Then you know the fanless results will be the same 5 years from now...
                  At the same time, the actively cooled product will suck in so much dust that it will be slower in 5 years.
                  And dust is the best-case scenario, because what if the person who buys this laptop is a smoker?
                  Then it's not only dust inside the actively cooled product...

                  I do have an actively cooled PC, but it is easy to open and clean.
                  But for mobile, meaning a smartphone or notebook, I would buy the passively cooled version because of the dust problem.

                  Why would you want to repair something that would never have broken in the first place because you chose the passively cooled version?
                  Just because it doesn't have a fan doesn't mean it won't collect dust; just getting the heatsink hot is enough to create circulation. Yes, no fan means less dust over time, but it also means you're heat-cycling the hardware more, which means it'll fail sooner than the fan-equipped MacBooks. This is Apple we're talking about, where the earlier M1 models were writing excessively to the SSD and wearing it out, and where USB-C PD hubs can kill them. You know why I know Apple products are bad? Because I repair them. Same reason a mechanic will tell you to avoid BMWs.


                  I did watch the video, and Gamers Nexus did in fact disable ray tracing.
                  They disabled it because the PS5 also disables it in 120fps mode. Gotta pay attention.
                  And a 6GB VRAM card cannot hold the same amount of textures as a PS5 or an Apple M1/M2; with shared memory they can hold up to 16GB of textures...
                  God, 16GB of RAM is not 16GB of textures. There's the OS, background programs, and of course the game itself. The 6GB on the 1060 is just for textures.
                  You only get higher FPS on this old card because the result on the screen is not the same.
                  Please pay attention: Gamers Nexus showed that they matched the PS5's features, and showed it on screen. Don't tell me it isn't the same when he clearly showed it was.
                  You sent me a video on how to build a system for 1000 dollars that runs for at least 8 years to save a lot of money... Well, my 1920X may be older than your 2700X, but who cares about 2017 vs 2018; in the end it is 100% clear that the 1920X is faster than the 2700X...
                  Firstly, you pointed out my system's age without thinking about your own system's age, because you didn't have foresight; that's a common theme with you. Secondly, yes, your 1920X is faster than my 2700X in multithreaded workloads. My 2700X, with its higher IPC and clock speeds, will be faster in games, because games mostly care about IPC. You painted yourself into this corner and you ain't coming out. Stop throwing ad hominems at people because you don't have a good argument.
                  I did a different variant than you: I bought the 2 workstations to run as desktops and at the same time mined Monero and Ethereum with them. Then, after the crypto crash, I sold 4 of the 6 Vega 64 cards I had from the start, and now I only run it as a desktop without mining.
                  You mined on 6 Vega 64 cards and you complained about the price of electricity in Germany? No wonder your rig cost 8000€.
                  Be sure the result is the same as yours, with the difference that I made some money and now I have better hardware to use as a desktop.
                  Good for you. I'll get better frame rates in games since my system was built for gaming; core count isn't everything.



                  • Originally posted by Raka555 View Post
                    I don't care about the benchmarks.
                    Says he doesn't care about benchmarks.
                    What is important to me is that I get more than one day of real-world use out of a single charge of my MacBook M1 Pro, while developing and compiling software, at speeds that exceed what a Ryzen 9 5900X can do.
                    Then proceeds to give his own benchmark, which is magically faster than a Ryzen 9 5900X. No numbers, just feels.
                    What you see in your benchmarks are probably some synthetic torture test that does not resemble the real world usage.
                    And you do?



                    • Originally posted by MadCatX View Post
                      I will link this Techspot article to both you and Anux because it's relevant:
                      https://www.techspot.com/review/2499-apple-m2/
                      That's a good test; they even have YouTube-watching battery runtime in it. But the power measurements are only taken for Cinebench, or did I overlook something?
                      Keep in mind it's a different laptop and OS than what was tested here.

                      6800U configured to 15W TDP consumes about as much power as M2 MBP but the performance lags behind. When boosted to 25W TDP, the performance is mostly on par but it draws much more power, peaking at 37 W.
                      Are you comparing peak power, or am I getting something wrong?

