
AMD Announces Ryzen 7000 Series "Zen 4" Desktop CPUs - Linux Benchmarks To Come


  • #91
    Originally posted by rabcor View Post
    It's the same test, the first one he runs for the power comparison: CB23 MT. That's convenient; I can see his results seem more or less consistent with what we see in the benchmarks I found for the M1 Pro, and the i9-12900HK seems fairly similar to the i9-11980HK.
    Didn't watch, but isn't Cinebench a Windows program? In that case there would be an emulation tax, and it wouldn't be optimized for ARM NEON.

    Also, the M2 is still an ARMv8-A architecture without SVE. Once they move to ARMv9-A, they'll have SVE2, which should benefit a program like Cinebench (particularly if there's a native version optimized for it).
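
    To illustrate why SVE2 matters for this kind of code, here's a minimal sketch (the scale_neon/scale_sve helpers are hypothetical, and this assumes a compiler with SVE support, e.g. -march=armv9-a): the NEON loop hard-codes 4 floats per step, while the SVE loop adapts to whatever vector width the silicon implements.

        #include <arm_neon.h>   /* fixed 128-bit vectors (ARMv8-A) */
        #include <arm_sve.h>    /* vector-length-agnostic SVE/SVE2 (ARMv9-A) */

        /* NEON: always 4 floats per iteration, hard-coded into the binary.
         * (Scalar cleanup of the tail is omitted for brevity.) */
        void scale_neon(float *dst, const float *src, float k, int n) {
            for (int i = 0; i + 4 <= n; i += 4)
                vst1q_f32(dst + i, vmulq_n_f32(vld1q_f32(src + i), k));
        }

        /* SVE: the same loop, but the step (svcntw) and the tail handling
         * (predicate) adapt to the hardware's vector length at run time. */
        void scale_sve(float *dst, const float *src, float k, int n) {
            for (int i = 0; i < n; i += (int)svcntw()) {
                svbool_t pg = svwhilelt_b32(i, n);
                svst1_f32(pg, dst + i,
                          svmul_n_f32_x(pg, svld1_f32(pg, src + i), k));
            }
        }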

    Originally posted by rabcor View Post
    I still feel it's somewhat of a shocker that the M2 graphics are better than the Ryzen 7 6800U's; it certainly ain't no RTX 3090 like Apple claimed, though.
    Huh? Wasn't that claim about the M1 Ultra? There were three M1 versions (base, Pro, and Max), and the Ultra joined two Maxes into an MCM super-SoC.

    Originally posted by rabcor View Post
    but I mean, AMD is a long-time manufacturer of GPUs; they should have an overwhelming advantage in this area.
    It's just a question of how much die area they want to devote to an iGPU, or how big of a dGPU a laptop maker wants to include.



    • #92
      Is there any information about whether Ryzen 7000 includes coolers or not? I heard that these four models don't, but I can't find a source confirming it.



      • #93
        Originally posted by atomsymbol
        Avoiding the µop cache in the future (10+ years) in any high-performance CPU capable of sustaining more than 4 instructions per clock in general-purpose workloads containing many jump instructions (irrespective of whether it is ARM or another architecture, and irrespective of whether it is a new or an old design) is in my opinion unlikely
        Sounds like you didn't read the A715 link. They increased their decoder width to 5-wide, which was the width of the old MOP cache and one of the reasons cited for why they no longer needed it.

        Food for thought: cache lookups are generally cheap, but not free. If decode is cheap enough, then it's quite plausible that the MOP cache might not actually add any value. That's what ARM seemed to find, in the case of the A715.

        BTW, the article also mentions moving instruction-fusion into the L1 i-cache. So, it shows that you can refactor functionality in a way that makes certain orthodoxies redundant.
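
        To put rough numbers on that, here's a toy back-of-envelope model (the widths and the hit rate are illustrative assumptions, not ARM's figures): once decode matches the MOP cache's width, even a high hit rate buys no extra front-end bandwidth.

            #include <stdio.h>

            /* Sustained front-end width with and without a MOP cache:
             * hits issue at the cache's width, misses fall back to decode. */
            int main(void) {
                const double mop_width = 5.0;  /* assumed cache issue width */
                const double hit_rate  = 0.8;  /* assumed hit rate */
                for (double decode = 3.0; decode <= 5.0; decode += 1.0) {
                    double with_cache = hit_rate * mop_width
                                      + (1.0 - hit_rate) * decode;
                    printf("%.0f-wide decode: %.2f uops/cycle with cache, "
                           "%.2f without\n", decode, with_cache, decode);
                }
                return 0;  /* at 5-wide decode: 5.00 either way */
            }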



        • #94
          Originally posted by jaxa View Post
          Is there any information about whether Ryzen 7000 includes coolers or not? I heard that these four models don't, but I can't find a source confirming it.
          I don't know about that, but the funky "octopus" heat-spreader design is allegedly there so it can accommodate AM4 heatsinks.

          Unless I were on a tight budget, I wouldn't use a bundled cooler for a 105+ W CPU. However, if you plan to run the CPU below spec, that's a different story.



          • #95
            Originally posted by atomsymbol
            Potentially, the interpretation that "A715 removed the µop cache" might turn out to be a misinterpretation. The alternative interpretation is that A715 removed the L1I cache and kept the µop cache, with the µop cache simplified thanks to the removal of AArch32 support.
            If it doesn't store MOPs, then it's not a MOP cache. And if it were a MOP cache, then what's the 5-wide decoder doing behind it?

            Do you understand that this is information from ARM, and not from people trying to reverse-engineer what ARM did? ARM doesn't operate the same way as Intel, AMD, or others regarding disclosures. As an IP company willing to license actual modifiable core IP, the information it discloses carries a greater requirement for transparency and accuracy. If they say it's not a MOP cache and it turns out it really is, they end up looking kinda foolish.

            And please actually read the article, before attempting to poke holes in it.

            Originally posted by atomsymbol
            Gracemont cores in Alder Lake (which, like A715, is a design optimized for lower power) have a potential for decoding a higher number of instructions per clock cycle (2*3=6) than A715 (1*5=5)
            No, it's not a 6-wide decoder; it's 2x 3-wide decoders. I think you haven't wrapped your head around what that means: they're probably restricted to working on different branch targets, not teaming up on a single instruction stream.
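
            A toy model of what that restriction costs (the 2x3 widths are Gracemont's; everything else is an assumption for illustration): a cluster can only start at a predicted branch target, so straight-line code caps at one cluster's width.

                #include <stdio.h>

                /* 2x 3-wide clustered decode vs. a hypothetical monolithic
                 * 6-wide decoder: within one basic block, only one cluster
                 * can make progress. */
                int main(void) {
                    const int width = 3, clusters = 2;
                    for (int blocks = 1; blocks <= 3; blocks++) {
                        int active = blocks < clusters ? blocks : clusters;
                        printf("%d block(s) in flight: clustered %d/cycle, "
                               "monolithic 6/cycle\n", blocks, active * width);
                    }
                    /* Branchless code = 1 block in flight = 3/cycle, not 6. */
                    return 0;
                }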



            • #96
              Originally posted by Dukenukemx View Post
              Probably because Ryzen 7000 series are not mobile parts. These chips are focused entirely on performance with power efficiency being secondary.
              Wrong, as you can see in the slides with the 65 W comparison.
              You won't find a Ryzen 7950X on a laptop.
              Maybe not under that exact name, but look up Dragon Range: it's the same chiplets, just for high-end notebooks (>= 55 W), surely clocked a little lower.

              And then there is Phoenix for <= 45 W (classic G-series APUs with more GPU cores), but I'm not sure whether those are monolithic.



              • #97
                Originally posted by Anux View Post
                Maybe not under that exact name, but look up Dragon Range: it's the same chiplets, just for high-end notebooks (>= 55 W), surely clocked a little lower.
                More importantly, the cores are the same between the desktop/server chiplets and the APUs.

                Also, scaling servers to high core counts requires a significant degree of power efficiency. 280 W for a server CPU sounds like a lot, but on a 64-core part it equates to just 70 W per 16 cores, or 35 W per 8 cores.
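
                The arithmetic, spelled out (the 64-core count is an assumption implied by the grouping above, matching top-end Zen server parts):

                    #include <stdio.h>

                    /* Per-core budget implied by a 280 W limit on 64 cores. */
                    int main(void) {
                        const double socket_w = 280.0;
                        const int cores = 64;  /* assumed core count */
                        printf("per core:     %.2f W\n", socket_w / cores);
                        printf("per 8 cores:  %.2f W\n", socket_w / cores * 8);
                        printf("per 16 cores: %.2f W\n", socket_w / cores * 16);
                        return 0;
                    }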

                Originally posted by Anux View Post
                then there is Phoenix for <= 45 W (classic G-series APUs with more GPU cores), but I'm not sure whether those are monolithic.
                For smaller die sizes, monolithic seems to win out. Heck, I'm pretty sure even the console APUs were monolithic.



                • #98
                  Originally posted by Dukenukemx View Post
                  Why RISC-V? It isn't relevant yet, and probably won't ever be. Not unless someone like Google picks it up and spends real money to progress it.
                  That's because if you run real-world applications, the Apple M1/M2 aren't really any better than AMD's Rembrandt.
                  Oh please, gaming is the only field where AMD has a win, but that's not Apple's or ARM's fault.
                  If you are a gamer, please buy an AMD Rembrandt; no one here will claim otherwise.
                  You have shown multiple times that you do not want a fair comparison when it comes to gaming.

                  You compare old games compiled for x86, using a non-Apple, non-WebGPU GPU API (DirectX 11), translated with Rosetta 2. That's not a fair comparison at all. (I think it's even a 32-bit game-engine binary, and you know the Apple M2 only has 64-bit hardware inside and needs to emulate 32-bit.)

                  A fair comparison would be this: compile the game for 64-bit ARM and use WebGPU or the Metal GPU API with the game engine, or at minimum use a game with Vulkan and translate Vulkan to Metal (e.g. via MoltenVK; see the sketch below).

                  And after this fair benchmark you are still free to buy AMD's Rembrandt, because we all know most games are legacy x86 binaries.

                  But if you do a fair comparison, you will see that the Apple hardware has better battery life. (Not that it matters much if all the games are legacy x86 binaries...)
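
                  For reference, a minimal sketch of the "translate Vulkan to Metal" path via MoltenVK. The flag and extension here are the standard portability-enumeration opt-in; without them, recent Vulkan loaders hide non-fully-conformant drivers like MoltenVK:

                      #include <stdio.h>
                      #include <vulkan/vulkan.h>

                      /* On macOS, MoltenVK maps Vulkan calls onto Metal. It is
                       * a "portability" driver, so the application must opt in
                       * before the loader will enumerate it. */
                      int main(void) {
                          const char *exts[] =
                              { VK_KHR_PORTABILITY_ENUMERATION_EXTENSION_NAME };
                          VkInstanceCreateInfo ci = {
                              .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
                              .flags =
                                  VK_INSTANCE_CREATE_ENUMERATE_PORTABILITY_BIT_KHR,
                              .enabledExtensionCount = 1,
                              .ppEnabledExtensionNames = exts,
                          };
                          VkInstance inst;
                          if (vkCreateInstance(&ci, NULL, &inst) != VK_SUCCESS) {
                              fprintf(stderr, "no Vulkan (MoltenVK) driver\n");
                              return 1;
                          }
                          puts("Vulkan instance created on top of Metal");
                          vkDestroyInstance(inst, NULL);
                          return 0;
                      }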



                  • #99
                    Originally posted by rabcor View Post
                    Thanks for this, you've opened my eyes a bit to the possibility that RISC architectures might actually not be taking over and that Intel and AMD are catching up.
                    Dukenukemx is the same dude posting YouTube videos about Apple M1/M2 vs. x86 hardware. In those videos they benchmark x86 games with DirectX 11 translated to Metal, emulated in Rosetta 2, and I think with a 32-bit binary while the M2 only has 64-bit hardware. Then the result is that Apple loses big time and battery life while gaming is poor.

                    He does not show you an open-source game compiled natively as a 64-bit ARM binary with WebGPU or native Metal as the GPU API... Why would he? That would ruin his show.



                    • Originally posted by qarium View Post
                      He does not show you an open-source game compiled natively as a 64-bit ARM binary with WebGPU or native Metal as the GPU API... Why would he? That would ruin his show.
                      Are there any open-source games supporting Metal? My bet is that open-source games stick to open APIs. There are only a handful of games that support Metal, and they all have either 2D graphics or 3D graphics on the level of early-2000s games. The only exception is Baldur's Gate 3, which runs slower in native mode than under Rosetta emulation. So I guess your highly selective benchmarks will do you no favors either.

