Apple Announces The M2 Ultra SoC - 24 Core CPU, Up To 192GB Unified Memory


  • Originally posted by mrg666 View Post
    You are right after all. I think people are reading your comments; Apple stock took a nosedive today and Linus Torvalds is replacing his M2 MacBook with x86. Apologies!

    You are providing your astute insights for free to everyone here when you could be making money from them. I appreciate it, but I hope you are at least shorting some Apple stock.
    Wow, you got it. Absolutely brilliant deduction, as expected from the world-class performance expert.

    Everyone knows releasing a Linux kernel requires the utmost performance that not even a supercomputer can tackle.

    You must have over 9000 IQ. Impressive.

    Comment


    • Originally posted by drakonas777 View Post
      Even against CPUs one or two generations older, it did not win universally. For example, in software-based en/decoding and rendering, the M1/M2 demonstrated about the same efficiency as Zen 3. Considering Zen 3 was older and used an inferior node, this particular workload was really pretty cringe for Apple.

      Another semi-cringe thing is that the M1/M2 ISA level is around ARMv8.5/8.6, which was kind of dated even at launch.

      Of course, all of this is not that important for the end user when the SoC has many accelerators and the Apple ecosystem supports them quite well, but still.
      Exactly. And you're right, using accelerators makes no sense as a comparison of architectures, since they're not using that (CPU) arch in the first place. It would be like comparing discrete GPU tasks and claiming the CPU arch is the one doing it, etc.

      Anyway I'm glad to see we have some sane people on this board.

      Comment


      • Originally posted by Weasel View Post
        Wow, you got it. Absolutely brilliant deduction, as expected from the world-class performance expert.

        Everyone knows releasing a Linux kernel requires the utmost performance that not even a supercomputer can tackle.

        You must have over 9000 IQ. Impressive.
        Ummm, your deduction is improving. Keep working on it! You will get it.

        Comment


        • Originally posted by Weasel View Post
          Exactly. And you're right, using accelerators makes no sense as a comparison of architectures, since they're not using that (CPU) arch in the first place. It would be like comparing discrete GPU tasks and claiming the CPU arch is the one doing it, etc.
          Anyway I'm glad to see we have some sane people on this board.
          If you have one CPU with only general-purpose cores... and another CPU with ASICs and FPGAs to do everything, whose general-purpose cores do nothing most of the time...

          honestly, for a notebook/laptop or any kind of mobile device running on battery, the second CPU would be much better, because the power consumption is much lower with ASICs and FPGAs than performing the same task with general-purpose cores.
          You are free to compare only the general compute cores of one architecture with the general compute cores of another...
          but then you will find out that most people do not care at all about this, because it has zero relevance in the practical use of the device.

          Phantom circuit Sequence Reducer Dyslexia

          Comment


          • Originally posted by qarium View Post

            This page covers the power consumption of all processors in the test. Using a hardware tool, we measure the power consumption of the CPU alone, including the influence of the voltage converters on the mainboard. In addition to the raw watts, we give the average FPS per watt. The efficiency index follows at the end.


            See here: the 7800X3D sits at only 54 watts of power consumption, and at maximum performance it is the fastest CPU in the test.
            If I recall correctly, the highest end M2 configuration draws only 40 watts at peak load, and that is for the entire system, including the GPU and storage. AMD would need to cut the power consumption by a factor of 3 to have a chance of touching the M2.
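The FPS-per-watt "efficiency index" described in the quoted review can be sketched roughly as below. This is only an illustration of the methodology: the FPS and wattage figures are placeholders loosely based on numbers mentioned in this thread, not the review's measured data.

```python
# Rough sketch of an FPS-per-watt efficiency index, normalized so
# the most efficient CPU in the test scores 100. The sample figures
# are placeholder assumptions, not measured review data.
def efficiency_index(results):
    """results: {name: (avg_fps, avg_watts)} -> {name: index}."""
    fps_per_watt = {name: fps / watts for name, (fps, watts) in results.items()}
    best = max(fps_per_watt.values())
    return {name: round(100 * value / best, 1) for name, value in fps_per_watt.items()}

# Placeholder FPS values; only the wattages (54 W / 146 W) echo the thread.
sample = {"7800X3D": (180.0, 54.0), "13900K": (175.0, 146.0)}
print(efficiency_index(sample))
```

Under these placeholder numbers, similar raw FPS at roughly a third of the power triples the efficiency score, which is the shape of the argument being made here.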

            Comment


            • Originally posted by Weasel View Post
              Exactly. And you're right, using accelerators makes no sense as comparison of architectures since they're not using that (CPU) arch in the first place. It would be like comparing discrete GPU tasks and claiming the CPU arch is the one doing it, etc.

              Anyway I'm glad to see we have some sane people on this board.
              Transcoding performance is a fairly useless benchmark since less than 1% of computer users ever do it and when they do, it is better done with hardware encode/decode or a GPGPU routine. It is so much unlike what typically runs on CPUs that a comparison in that area is useless for determining the fitness of a CPU for people.

              The comparison is certainly useful for a niche, but the only reason to mention it here is to cherry pick to express a bias. However, that failed spectacularly. Anyone with even the slightest experience doing SIMD programming knows that getting equal performance despite a half width handicap is amazing. The far lower power the M2 needs to do it means the results are favorable to the M2, rather than the other way around.
              Last edited by ryao; 08 June 2023, 01:53 PM.

              Comment


              • Originally posted by ryao View Post
                If I recall correctly, the highest end M2 configuration draws only 40 watts at peak load, and that is for the entire system, including the GPU and storage. AMD would need to cut the power consumption by a factor of 3 to have a chance of touching the M2.
                You are out of touch with reality, because no one cares whether the total power consumption of the Apple M2 is 40 watts or not...
                If the Apple M2 is also slower in the game, it could end up with fewer FPS per watt...
                No one cares, because no one buys an Apple M2 for gaming.

                AMD could easily downclock it for lower power consumption and sell it as a notebook/laptop CPU, but maybe it will not.
                Gamers really do not care that the Apple M2 only consumes 40 watts if the game runs slower. That is not a win for the gamer.

                Phantom circuit Sequence Reducer Dyslexia

                Comment


                • Originally posted by qarium View Post

                  You are out of touch with reality, because no one cares whether the total power consumption of the Apple M2 is 40 watts or not...
                  If the Apple M2 is also slower in the game, it could end up with fewer FPS per watt...
                  No one cares, because no one buys an Apple M2 for gaming.

                  AMD could easily downclock it for lower power consumption and sell it as a notebook/laptop CPU, but maybe it will not.
                  Gamers really do not care that the Apple M2 only consumes 40 watts if the game runs slower. That is not a win for the gamer.
                  The one here who is out of touch is you, considering that you think a 53W power draw by a CPU alone is low. That is high. The 40W used by an entire high-end M2 machine at the wall outlet underlines just how high 53W is.

                  That said, the creator of PCGamingWiki uses Apple hardware to play games. He has a YouTube channel dedicated to it:

                  Mac gamer and founder of PCGamingWiki. This is my main channel which now mostly focuses on Mac and handheld gaming. If you'd like to see more mini tutorials, reviews and life hacks please check out my second channel! Sponsorship enquiries please contact: [email protected]


                  Anyway, benchmark comparisons between the M2 and Zen 3+ show that the M2 is a more power efficient design that has stronger performance. Also, they already optimized things for mobile with Zen 3+ and Apple still had an advantage. Both Apple and AMD are working on newer designs, and AMD does not seem likely to close the gap.
                  Last edited by ryao; 08 June 2023, 06:00 PM.

                  Comment


                  • Originally posted by qarium View Post

                    (link to an OpenBenchmarking.org result list)

                    I do not compare the 7950X3D with the 13900K, because if you look at this list, the 7950X3D is faster; that is why I compare it to the 13900KS.


                    AMD 7950X3D = 699€

                    Intel 13900KS = 779€

                    Here in Germany the AMD 7950X3D is 80€ cheaper than the Intel CPU. Your power bill will also be higher on the Intel side, which means your argument is not an argument at all. In the long run, because of the electricity bill, you pay more with Intel, which makes you a foolish liar.
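The long-run cost claim above can be sanity-checked with a quick sketch. Only the list prices (699€ vs. 779€) come from this thread; the sustained power draws, daily gaming hours, and electricity price below are illustrative assumptions, not measurements.

```python
# Hedged sketch: purchase price plus electricity over a period of use.
# Only the list prices (699 / 779 EUR) come from the thread; the
# wattages, hours per day, years, and EUR/kWh are assumptions.
def total_cost(price_eur, watts, hours_per_day, years, eur_per_kwh):
    kwh = watts / 1000 * hours_per_day * 365 * years
    return price_eur + kwh * eur_per_kwh

amd_cost = total_cost(699, 120, 4, 3, 0.40)    # assumed average gaming draw
intel_cost = total_cost(779, 200, 4, 3, 0.40)  # assumed average gaming draw
print(f"AMD: {amd_cost:.0f} EUR, Intel: {intel_cost:.0f} EUR")
```

Whether the gap widens or narrows over time depends entirely on the assumed wattage difference, usage hours, and local electricity price.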

                    Missing an iGPU? Well, on the Ryzen 7000 series all the CPUs do in fact have an iGPU inside...
                    On Ryzen 1000, 2000, 3000 or 5000 there was no iGPU, that's right, but with Ryzen 7000 all the chips have an iGPU...
                    That's not even a lie from you, because you just did not know it.



                    That's true, but AMD made clear that all future generations will have an open-source BIOS/UEFI...



                    Your iGPU argument again: all the Ryzen 7000 CPUs do in fact have an iGPU...

                    Also, your "price difference" argument is bullshit, because AMD is cheaper.
                    Intel is only cheaper if you go with the 13900K at 618€, but then you have a slower CPU.
                    You always get a cheaper CPU if you accept that it is slower.

                    "You also clearly misunderstand what I have stated about the 9GHz being unstable, and that it will take years for an i9 to be stable at that."

                    You really do not get the point, because GHz does not matter at all. The 7950X3D has a much lower clock speed but is the faster CPU.

                    "It isn't stupid to have an older AMD processor like you have; it is stupid to say it is faster when it is half or less the speed."

                    On multicore benchmarks like compiling, it beats the shit out of the Intel CPU...

                    You just talk complete bullshit. You claim your CPU is cheaper; OK, let's calculate: 699€ - 618€ = 81€. But if you then say to use liquid nitrogen, there is no longer an 81€ price advantage on the Intel side; then the Intel setup costs you millions more.

                    I will admit I was wrong about the iGPU and only that. You clearly don't understand how complex performance actually is and how insecure AMD processors are. Ryzen is basically a marketing gimmick where they just put two processors together and threw cache at it until they could even compete after the disaster of FX. AMD Epyc is actually better but still less secure if you care about using your computer online. You are basically using a processor without a disable execution bit just to defend AMD.

                    Last edited by mitchellrenouf; 08 June 2023, 08:34 PM.

                    Comment


                    • Originally posted by ryao View Post
                      The one here that is out of touch is you considering that you think 53W power draw by a CPU is low. That is high.
                      No, it is low, because it is the fastest CPU for gaming, while the 13900K consumes 146 watts for similar performance.
                      This means the FPS per watt is high. And no one cares that the Apple M2 only consumes 40 watts when its FPS per watt is lower and its total performance is also lower.

                      Before the 7800X3D emerged, people bought an Intel 13900KS just to make sure they had the highest FPS possible in their games, and the 150+ watt consumption was irrelevant to them.

                      Originally posted by ryao View Post
                      The 40W used by an entire high end M2 machine at the wall outlet underlines just how high 53W is.
                      That said, the creator of PCGamingWiki uses Apple hardware to play games. He has a YouTube channel dedicated to it:
                      Mac gamer and founder of PCGamingWiki. This is my main channel which now mostly focuses on Mac and handheld gaming. If you'd like to see more mini tutorials, reviews and life hacks please check out my second channel! Sponsorship enquiries please contact: [email protected]
                      Show me your benchmarks with total FPS and FPS per watt, or I and every other gamer will not care.
                      53 watts is not high; my Threadripper draws ~240 watts while gaming...
                      Lower power consumption is irrelevant for gamers, to be honest. There is maybe a difference for "mobile gamers", but that's another story.

                      Originally posted by ryao View Post
                      Anyway, benchmark comparisons between the M2 and Zen 3+ show that the M2 is a more power efficient design that has stronger performance. Also, they already optimized things for mobile with Zen 3+ and Apple still had an advantage. Both Apple and AMD are working on newer designs, and AMD does not seem likely to close the gap.
                      And again you come up with Zen 3+ non-X3D CPU benchmarks. I told you already: no one cares, because gamers don't buy these CPUs. They buy the X3D CPUs because they have the best gaming performance and the best power consumption in this field.

                      There is maybe a big gap outside of gaming, but for gaming AMD currently has the best solution.

                      Right now AMD does not offer this best solution for "mobile gaming".

                      But if we speak about mobile gaming, the market buys the Valve Steam Deck with its AMD APU right now.

                      Let's say Apple teamed up with Valve and the next Steam Deck were based on Apple silicon; then yes, that would be interesting.

                      But right now, who buys an Apple M2 laptop for mobile gaming? Just be honest: very few people do.
                      Phantom circuit Sequence Reducer Dyslexia

                      Comment
