AMD Announces Ryzen 7000 Series "Zen 4" Desktop CPUs - Linux Benchmarks To Come


  • Originally posted by Anux View Post
    Wrong, as you can see in the slides with the 65 W comparison.
    I don't see a 7000 chip that's 65 W, but I'm sure there will be one. That doesn't make it a mobile chip.
    Maybe not under that exact name, but look up Dragon Range; it's the same chiplets, just for high-end notebooks (>= 55 W), surely clocked a little lower.
    This isn't new, as these are the laptops called desktop replacements: big, hulking laptops that run hot and heavy, less mobile and more desktop-like. They also tend to cost $3k+.
    And then there is Phoenix for <= 45 W (the classic G APUs with more GPU cores), but I'm not sure if those are monolithic.
    Those are the ones I'm talking about, and they'll be on 4 nm. Don't read too much into TDP, as nobody rates their CPUs consistently. Apple's M1 Ultra is said to use 60 watts, so is that better or worse than AMD's and Intel's TDP? This is why we have benchmarks.



    • Originally posted by Anux View Post
      Is there any open-source game supporting Metal? My bet is that open-source games use open-source APIs. There are only a handful of games that support Metal, and they all have either 2D graphics or 3D graphics on the level of early-2000s games. The only exception is Baldur's Gate 3, which runs slower in native mode than through Rosetta emulation. So I guess your highly selective benchmarks will do you no favors either.
      From my point of view, Metal is not necessary for such a benchmark; a Vulkan-to-Metal wrapper like MoltenVK will do the job just fine.
      Also, WebGPU is an even better fit, since it is an open API...
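
      As a rough sketch of what the MoltenVK path looks like from the application side (a minimal example, assuming a Vulkan SDK with MoltenVK installed; the portability extension and flag are only needed with loaders that hide portability drivers by default):

      #include <stdio.h>
      #include <vulkan/vulkan.h>

      int main(void) {
          /* On macOS, MoltenVK is a "portability" implementation layered on
           * Metal, so the instance opts in to enumerating portability drivers. */
          const char *exts[] = { VK_KHR_PORTABILITY_ENUMERATION_EXTENSION_NAME };

          VkApplicationInfo app = {
              .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
              .pApplicationName = "moltenvk-smoke-test",
              .apiVersion = VK_API_VERSION_1_1,
          };
          VkInstanceCreateInfo info = {
              .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
              .flags = VK_INSTANCE_CREATE_ENUMERATE_PORTABILITY_BIT_KHR,
              .pApplicationInfo = &app,
              .enabledExtensionCount = 1,
              .ppEnabledExtensionNames = exts,
          };

          VkInstance instance;
          if (vkCreateInstance(&info, NULL, &instance) != VK_SUCCESS) {
              fprintf(stderr, "no Vulkan (MoltenVK) driver found\n");
              return 1;
          }
          puts("Vulkan instance created on top of Metal via MoltenVK");
          vkDestroyInstance(instance, NULL);
          return 0;
      }

      Everything after instance creation is ordinary, portable Vulkan, which is exactly why a wrapper like MoltenVK makes cross-platform comparisons possible at all.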

      "So I guess your highly selective benchmarks will do you no favor either."

      if you call any fairness: highly selective benchmark... then something is wrong in your brain.

      if you do not have native ARM binary and no native gpu API you do benchmark anything but not the hardware you have bought.

      and Intel dominate the market on this fraud for a long time for example DEC Alpha was much ahead of intel with native 64bit and high clock speeds like 400mhz in a time intel only had 166mhz cpus... and intel only did win because they all benchmarked the system with x86 software who did run on the DEC Alpha in emulation mode. with native software the DEC Alpha was faster with x86 emulation the nativ intel cpu was faster.

      it is a complete Fraud to benchmark like this.


      • Originally posted by qarium View Post
        Oh please, gaming is the only field where AMD has a win, but that's not Apple's or ARM's fault.
        Yes it is, when nobody is accommodating game developers. Guess what Microsoft and Valve do very well?
        You showed multiple times that for gaming you do not want a fair comparison.
        I didn't say any such thing.
        You compare old games compiled for x86, with a non-Apple/non-WebGPU GPU API (DirectX 11), translated with Rosetta 2...
        That's not a fair comparison at all. (I think it is even a 32-bit game-engine binary, and you know the Apple M2 only has 64-bit hardware inside and needs to emulate 32-bit.)
        That's Apple's fault, not mine. If it doesn't run, or runs slowly, then that's entirely on Apple. You do know that AMD and Nvidia pay lots of money to have developers port their games and include their features. I think Apple is doing this for Resident Evil 8 and No Man's Sky.
        A fair comparison is like this: you compile it for 64-bit ARM and you use the WebGPU or Metal GPU API with the game engine.
        Or, at minimum, you use a game with Vulkan and translate Vulkan to Metal.
        The best things I can think of are emulators, and I know RPCS3 and Dolphin are ported to the M1. They are still faster on PC.
        And after this fair benchmark you are still free to buy AMD's Rembrandt, because we all know most games are legacy x86 binaries.
        This is why the meme PCMasterRace exists.

        But if you do a fair comparison you will see the Apple hardware will have better battery life. (Not that it matters much if all the games are legacy x86 binaries...)
        I'll tell you what I tell Honda S2000 owners: potential is great until you can't prove it.



        • Originally posted by Dukenukemx View Post
          Yes it is, when nobody is accommodating game developers. Guess what Microsoft and Valve do very well?
          It is like you say: Apple does not work with game developers on this subject...
          But right now, in this market situation, it also does not make sense for Apple to do so.

          People like you, who want to game and want to play legacy closed-source x86 games, buy AMD anyway...

          Originally posted by Dukenukemx View Post
          I didn't say any such thing.
          That's Apple's fault, not mine. If it doesn't run, or runs slowly, then that's entirely on Apple. You do know that AMD and Nvidia pay lots of money to have developers port their games and include their features. I think Apple is doing this for Resident Evil 8 and No Man's Sky.

          I am 100% sure it is not Apple's fault... it is still bad for the customers, of course.
          These evil monopolies from Microsoft and Intel and Nvidia hurt customers.

          All Apple can do about this is provide mitigations like Rosetta 2...

          Originally posted by Dukenukemx View Post
          The best things I can think of are emulators, and I know RPCS3 and Dolphin are ported to the M1. They are still faster on PC.

          Well, let's say this would be a fair win for the PC.

          But your other examples are "bullshit".

          Originally posted by Dukenukemx View Post
          I'll tell you what I tell Honda S2000 owners: potential is great until you can't prove it.
          If you want to compare hardware fairly, with results that reflect the true capabilities,
          then you have to compile the software and games for the native platform, and you have to use native APIs, not emulation and wrappers on top of wrappers...

          Intel won this evil game even 20 years ago... DEC Alpha had the better CPU: it was 32-bit x86 vs 64-bit Alpha, Intel at 166 MHz and Alpha at 400 MHz...

          Alpha lost the game because everyone only benchmarked x86 code emulated on Alpha... but with native code Alpha was always faster.

          We have to stop this bullshit, because as in the DEC Alpha case, the better solution goes down in history and the bad solution rules the world.

          Intel, with this ISA-war bullshit, is a truly evil company, and we should avoid anything from them until their monopoly breaks into pieces.


          • Originally posted by qarium View Post
            It is like you say: Apple does not work with game developers on this subject...
            But right now, in this market situation, it also does not make sense for Apple to do so.
            If Apple sat down and worked with game developers, then developers would ask Apple to port Vulkan to Mac OSX. They would also ask Apple to make exceptions for their 32-bit legacy games, because they have no intent to rewrite their code for 64-bit.
            I am 100% sure it is not Apple's fault... it is still bad for the customers, of course.
            These evil monopolies from Microsoft and Intel and Nvidia hurt customers.
            You can include Apple in those evil monopolies.
            All Apple can do about this is provide mitigations like Rosetta 2...
            And Vulkan. And 32-bit support.
            Well, let's say this would be a fair win for the PC.

            But your other examples are "bullshit".
            Don't worry, Resident Evil 8 will be out soon and that will be 100% native. It'll still be faster on x86.
            Intel won this evil game even 20 years ago... DEC Alpha had the better CPU: it was 32-bit x86 vs 64-bit Alpha, Intel at 166 MHz and Alpha at 400 MHz...

            Alpha lost the game because everyone only benchmarked x86 code emulated on Alpha... but with native code Alpha was always faster.

            We have to stop this bullshit, because as in the DEC Alpha case, the better solution goes down in history and the bad solution rules the world.

            Intel, with this ISA-war bullshit, is a truly evil company, and we should avoid anything from them until their monopoly breaks into pieces.
            Intel won because Intel didn't do a 180 and walk away from a CPU architecture like Apple has done at least once a decade. On top of that, Intel had to fight AMD for x86 supremacy while every historically failed CPU architecture didn't. I'm talking MIPS, Motorola 68000, PowerPC, and even ARM, as they don't have any direct competition to incentivize them to improve. No matter how awfully outdated x86 is, the dynamic of AMD vs Intel will always push the CPU architecture beyond even the most well-thought-out CPU designs. ARM already failed as they filed for bankruptcy and Nvidia almost bought them. That means any real ARM improvements will probably come from Apple and Qualcomm. Apple has no real competitor. Nobody can make CPUs for Mac OSX. Nobody can use Mac OSX for their systems. Apple in the early 2000s would make fake benchmarks that showed PowerPC doing better in benchmarks than Intel, which was total bullshit. Apple will do it again once AMD and Intel continue to push ahead of Apple's ARM silicon.

            As it stands right now, AMD has pushed Intel to create their own graphics and to seek out manufacturers like TSMC and IBM to make their CPUs. These are things Intel is often joked about for, but because Intel can lose market share to AMD, they are incentivized to push for better designs. This goes the other way too, as AMD screwed up with Bulldozer and was forced to make Ryzen. Apple, and ARM in general, doesn't have competition, and therefore will regress in performance and power consumption. They have people like you that will back up their claims, to the bitter end. All CPU architectures do this, because once an architecture is established, companies figure you won't go rewrite all your software just to switch. Intel did this once AMD failed with Bulldozer, and we were stuck with dual- and quad-core CPUs until Ryzen was released. This is why Apple's M1 looked so good, when both AMD and Intel were going through a transition. Apple knew that 2020 was the best time to release the M1, when AMD was playing catch-up to Intel and Intel was pretending that 10nm was good enough for anyone.



            • Originally posted by Dukenukemx View Post
              That doesn't make it a mobile chip.
              Yes, that might be the reason no one claimed it is.
              This isn't new, as these are the laptops called desktop replacements: big, hulking laptops that run hot and heavy, less mobile and more desktop-like. They also tend to cost $3k+.
              Just to remind you what you said:
              Probably because Ryzen 7000 series are not mobile parts. These chips are focused entirely on performance with power efficiency being secondary.
              Clearly power efficiency is not secondary, and they are also developed for mobile platforms. These are the exact same dies performing over 60% better at the same TDP than their predecessor. When was the last time we had such a big jump in efficiency? Bulldozer -> Zen 1, maybe?
              Those are the ones I'm talking about, and they'll be on 4 nm. Don't read too much into TDP, as nobody rates their CPUs consistently. Apple's M1 Ultra is said to use 60 watts, so is that better or worse than AMD's and Intel's TDP? This is why we have benchmarks.
              They will come in different TDPs; just compare those SKUs that best match Apple's M1 power consumption and there you go.

              Originally posted by qarium View Post
              From my point of view, Metal is not necessary for such a benchmark; a Vulkan-to-Metal wrapper like MoltenVK will do the job just fine.
              Also, WebGPU is an even better fit, since it is an open API...
              Are there any benchmarks for WebGPU showing what you claim?
              If you call plain fairness a "highly selective benchmark"... then something is wrong in your brain.
              If you only allow a specific type of API that might favor your still-unproven claim, then that's the definition of highly selective. Nothing to do with fairness, just with reality.
              It is complete fraud to benchmark like this.
              There is no fraud if the benchmark states clearly what it compares. You may have gotten the meanings of fair and fraud mixed up; that's OK, I'm not a native speaker either.



              • Originally posted by Dukenukemx View Post
                If Apple sat down and worked with game developers, then developers would ask Apple to port Vulkan to Mac OSX.

                This makes no sense, and you still did not do your homework about WebGPU. If WebGPU becomes the de facto standard inside the browser and outside the browser, on any platform and on any system, then your words are complete bullshit.

                Also, again for you: Vulkan bytecode is compatible with WebGPU... but I am sure you read what you want, you don't care at all, and you keep saying things that in fact prove you do not understand what people tell you.

                The fight between Metal and Vulkan is already obsolete, because WebGPU is the standard.

                Originally posted by Dukenukemx View Post
                They would also ask Apple to make exceptions for their 32-bit legacy games, because they have no intent to rewrite their code for 64-bit.
                If only the engine of the game is open source, then the games can be ported. Also, you can run 32-bit code on 64-bit ARM CPUs, no problem.

                Originally posted by Dukenukemx View Post
                You can include Apple in those evil monopolies.
                I told you 1000 times that Apple does not have a single monopoly. They do not have a relevant enough market share in any field to have a monopoly.
                I understand that you hate Apple for what they do and for the kind of people they attract... OK, fine, don't buy any Apple products, just like I don't...

                But stop lying and telling people Apple has a monopoly... this is insanity, they have ZERO monopolies.

                They also do not have your creative word invention, an "isolated monopoly"... LOL, they do not even have that.
                You can simply not buy Apple products, or buy an M1/M2 and run Linux on it, and then, magic, Apple cannot lock you into their walled garden...

                Originally posted by Dukenukemx View Post
                And Vulkan. And 32-bit support.
                They do not want Vulkan because they want WebGPU.

                They do not want 32-bit in their hardware because their 64-bit execution units can perform 32-bit operations.
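
                That much is true at the instruction level: AArch64 executes 32-bit integer operations natively on the 32-bit (w) views of its registers, no emulation involved. A tiny illustration of that claim (the comments show the instructions clang/GCC typically emit when targeting arm64; note that running old AArch32 binaries is a separate question from doing 32-bit arithmetic):

                #include <stdint.h>

                /* 32-bit add: one native instruction on the 32-bit register
                 * view, "add w0, w0, w1" on AArch64. */
                uint32_t add32(uint32_t a, uint32_t b) { return a + b; }

                /* 64-bit add: the same instruction on the 64-bit view,
                 * "add x0, x0, x1". */
                uint64_t add64(uint64_t a, uint64_t b) { return a + b; }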

                Originally posted by Dukenukemx View Post
                Don't worry, Resident Evil 8 will be out soon and that will be 100% native. It'll still be faster on x86.
                Well then, let's wait and see... and you should get the point that Apple does not even want to be "faster".

                Apple instead tries to be "faster per watt", which is a different metric than just "faster".

                Originally posted by Dukenukemx View Post
                Intel won because Intel didn't do a 180 and walk away from a CPU architecture like Apple has done at least once a decade. On top of that, Intel had to fight AMD for x86 supremacy while every historically failed CPU architecture didn't. I'm talking MIPS, Motorola 68000, PowerPC, and even ARM, as they don't have any direct competition to incentivize them to improve. No matter how awfully outdated x86 is, the dynamic of AMD vs Intel will always push the CPU architecture beyond even the most well-thought-out CPU designs.
                Intel maintains its monopoly only because of the legacy closed-source cruft, not because they "push the CPU architecture beyond even the most well-thought-out CPU designs."

                Just to give an example: at 14 nm, the IBM POWER9 had higher single-thread performance than the fastest Intel CPU at 14 nm...
                But the market does not buy IBM POWER9, not even when the CPUs are cheaper, because many people like you want to play legacy closed-source x86 apps.

                Another example was the DEC Alpha case: they were light-years ahead of Intel, but despite 400 MHz vs 166 MHz and 64-bit vs 32-bit, the market demanded the closed-source x86 legacy CPU instead.

                Technologically, Intel already lost the CPU race years ago: their server CPUs lose most of the benchmarks to the 128-core Altra ARM CPU... Intel lost the Android smartphone market... and in most cases the Apple M2 is faster in single-thread performance than Intel CPUs...

                Their monopoly will break in the near future, and Intel will go down.

                Originally posted by Dukenukemx View Post
                ARM already failed as they filed for bankruptcy and Nvidia almost bought them. That means any real ARM improvements will probably come from Apple and Qualcomm. Apple has no real competitor. Nobody can make CPUs for Mac OSX. Nobody can use Mac OSX for their systems. Apple in the early 2000s would make fake benchmarks that showed PowerPC doing better in benchmarks than Intel, which was total bullshit. Apple will do it again once AMD and Intel continue to push ahead of Apple's ARM silicon.
                We do not even need ARM; we will have RISC-V and OpenPOWER, no problem. ARM can file for bankruptcy, I don't care.
                "Any real ARM improvements will probably come from Apple and Qualcomm."
                I have no problem with that. Do you have a problem with that?

                "Nobody can make CPUs for Mac OSX."

                Who cares? Do you want Mac OSX with another CPU?

                "Nobody can use Mac OSX for their systems."

                Who cares? Do you want Mac OSX? I don't, and most phoronix.com users of course do not want it.

                "Apple in the early 2000s would make fake benchmarks that showed PowerPC doing better in benchmarks than Intel, which was total bullshit. Apple will do it again once AMD and Intel continue to push ahead of Apple's ARM silicon."

                Intel is the king of faking benchmarks... and has paid a lot of money in court because of it.

                Originally posted by Dukenukemx View Post
                As it stands right now, AMD has pushed Intel to create their own graphics and to seek out manufacturers like TSMC and IBM to make their CPUs.
                No, this was not AMD... this was Nvidia, with their CUDA solutions in the data-center market, because they outgunned the Intel solutions in the server market.

                Intel dropped the Nvidia GPU IP and then licensed AMD GPU IP; in other words, the two losers in the GPU market, AMD and Intel, teamed up against the winner: Nvidia.

                For many years Intel invested in paying its shareholders large amounts of money instead of reinvesting in new nodes...
                That Intel now needs to buy TSMC and IBM nodes is just the damage they did to themselves. Really, man.


                Originally posted by Dukenukemx View Post
                These are things Intel is often joked about for, but because Intel can lose market share to AMD, they are incentivized to push for better designs. This goes the other way too, as AMD screwed up with Bulldozer and was forced to make Ryzen.
                What a joke. The FMA4 unit in Bulldozer, produced at 32 nm, is still faster than anything Intel has even today.

                And Intel only has the inferior FMA3, which destroys one input value so it can no longer be used for future calculations.

                FMA3 is one of the biggest pieces of ISA-war damage in today's designs... even the most modern Intel CPUs can only do FMA3 and not FMA4... We can speculate about why Intel did this; maybe they did not want to pay patent fees to IBM, who invented it, but I think that is not the case here: Intel did this to sabotage AMD's Bulldozer and future AMD designs...

                Modern AMD CPUs officially support only FMA3, for compatibility with Intel, but if you send "illegal" FMA4 code to the CPU, the CPU will execute it. This means that, because of the ISA war, AMD is not even able to advertise FMA4 support in the ISA...
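
                To make the difference concrete, here is a sketch using the actual compiler intrinsics (assuming GCC or clang; _mm256_macc_ps needs -mfma4 and only runs on Bulldozer-family chips, while _mm256_fmadd_ps needs -mfma): FMA4 writes to an independent destination, so all three inputs stay live, while FMA3 must overwrite one of its sources.

                #include <immintrin.h>   /* FMA3 intrinsics (-mfma) */
                #include <x86intrin.h>   /* FMA4 intrinsics on GCC/clang (-mfma4) */

                /* FMA3 (Intel Haswell+, AMD Piledriver+): d = a*b + c, but the
                 * result must land in one of the three source registers, so one
                 * input value is destroyed. */
                __m256 fma3_madd(__m256 a, __m256 b, __m256 c) {
                    return _mm256_fmadd_ps(a, b, c);   /* e.g. vfmadd213ps */
                }

                /* FMA4 (AMD Bulldozer family): d = a*b + c with a fourth,
                 * independent destination operand, so a, b and c all survive. */
                __m256 fma4_madd(__m256 a, __m256 b, __m256 c) {
                    return _mm256_macc_ps(a, b, c);    /* vfmaddps d, a, b, c */
                }

                At the C level the two look identical; the difference shows up in the generated code, where the FMA3 version may need an extra register-to-register move to keep an input alive.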

                Also, you claim Bulldozer was a failed design and Intel was superior, but then why did Intel sabotage AMD so hard?

                Why did Intel sabotage SSE4.0? You do not need to answer this; we all know why: because Intel is EVIL...

                Software and games that used the Intel compiler detected non-Intel CPUs and forced SSE3 as the highest level,
                without using FMA4, without using SSE4.0, and so on and so on.
                Only if the Intel compiler detected an Intel CPU did it enable SSE4.1 and FMA3.
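
                For contrast, vendor-neutral dispatch selects on the feature bits the CPU actually reports, not on the vendor string. A minimal sketch with GCC/clang's __builtin_cpu_supports (the kernel functions are hypothetical placeholders, not the Intel compiler's real dispatcher):

                #include <stdio.h>

                static void kernel_sse3(void)  { puts("SSE3 baseline path"); }
                static void kernel_sse41(void) { puts("SSE4.1 path"); }
                static void kernel_fma(void)   { puts("AVX2 + FMA path"); }

                /* Pick the fastest kernel based on CPUID feature flags, never on
                 * whether the vendor string happens to be "GenuineIntel". */
                static void (*pick_kernel(void))(void) {
                    if (__builtin_cpu_supports("avx2") && __builtin_cpu_supports("fma"))
                        return kernel_fma;
                    if (__builtin_cpu_supports("sse4.1"))
                        return kernel_sse41;
                    return kernel_sse3;
                }

                int main(void) {
                    __builtin_cpu_init();  /* populate the feature cache (older GCC) */
                    pick_kernel()();
                    return 0;
                }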

                Originally posted by Dukenukemx View Post

                Apple, and ARM in general, doesn't have competition, and therefore will regress in performance and power consumption. They have people like you that will back up their claims, to the bitter end. All CPU architectures do this, because once an architecture is established, companies figure you won't go rewrite all your software just to switch. Intel did this once AMD failed with Bulldozer, and we were stuck with dual- and quad-core CPUs until Ryzen was released. This is why Apple's M1 looked so good, when both AMD and Intel were going through a transition. Apple knew that 2020 was the best time to release the M1, when AMD was playing catch-up to Intel and Intel was pretending that 10nm was good enough for anyone.
                I had a friend back then, a real Intel fanboy... he had 3770 and 4770 Intel CPUs... we did comparisons with my Bulldozer CPU multiple times, and also six years after its release...

                For example, we ran a Monero mining benchmark, and the Bulldozer 8320/8350 won big time.
                In games with modern technology (Vulkan/DX12), like Ashes of the Singularity, the Bulldozer had better FPS than the 3770/4770 Intel CPUs.
                Bulldozer also had a higher 7-Zip score...

                Microsoft sabotaged Bulldozer by not supporting CMT (AMD's competitor to Hyper-Threading) in its scheduler;
                because of this, AMD Bulldozer was faster on Linux and slower on Windows.

                GCC and LLVM supported FMA4, and with it Bulldozer was up to 56 times faster than any Intel CPU...

                "Apple knew that 2020 was the best time to release their M1 when AMD was playing catch up to Intel, and Intel was pretending that 10nm was good enough for anyone"

                you think Apple will not evolve? i am sure they will evolve you will have 3/4nm apple M3 in like 6-8 months.

                "Intel was pretending that 10nm was good enough for anyone."

                intel wasted so many opportunities in the ISA war like 4FMA that it is magic what they did with their 10nm node...

                really they could have had much better cpus on their 10nm node without ISA war. like 4FMA and other examples.




                • Originally posted by Anux View Post
                  Are there any benchmarks for WebGPU showing what you claim?
                  If you only allow a specific type of API that might favor your still-unproven claim, then that's the definition of highly selective. Nothing to do with fairness, just with reality.
                  There is no fraud if the benchmark states clearly what it compares. You may have gotten the meanings of fair and fraud mixed up; that's OK, I'm not a native speaker either.
                  The problem here is what "reality" means... in reality, the word "reality" means that there is no fairness at all.

                  In reality, fraud rules the world. Intel has cheated and committed fraud for 40 years, and Intel is a convicted felon.

                  "Are there any benchmarks for WebGPU showing what you claim?"

                  That's an interesting question, but there is no data about this yet.


                  • Originally posted by Anux View Post
                    Clearly power efficiency is not secondary, and they are also developed for mobile platforms. These are the exact same dies performing over 60% better at the same TDP than their predecessor. When was the last time we had such a big jump in efficiency? Bulldozer -> Zen 1, maybe?
                    The 7000 chips will clock high, which you don't want on mobile chips due to power efficiency. Also, AMD now manufactures its mobile chips on better nodes: the 5000 series is 7 nm, while the 6000 mobile series is on 6 nm. You'll see this with Zen 4, as the desktop chips will be 5 nm and the laptop chips 4 nm. I'm willing to bet Intel will do this too, with their 7 nm for desktops and TSMC's 3 nm for their laptops.



                    • Originally posted by Dukenukemx View Post
                      The 7000 chips will clock high, which you don't want on mobile chips due to power efficiency. Also, AMD now manufactures its mobile chips on better nodes: the 5000 series is 7 nm, while the 6000 mobile series is on 6 nm. You'll see this with Zen 4, as the desktop chips will be 5 nm and the laptop chips 4 nm. I'm willing to bet Intel will do this too, with their 7 nm for desktops and TSMC's 3 nm for their laptops.
                      You cannot say that Ryzen 7000 is on 5 nm... because the I/O die (with the GPU) is on 6/7 nm.

                      So if you make a comparison between Ryzen 7000 and the Apple M2, you can still claim it is not true 5 nm...