AMD Announces Ryzen 7000 Series "Zen 4" Desktop CPUs - Linux Benchmarks To Come


  • Anux
    replied
    Originally posted by Dukenukemx View Post
    That doesn't make it a mobile chip.
    Yes, that might be the reason no one claimed it is.
    This isn't new; these are laptops called desktop replacements: big, hulking machines that run hot and heavy, less mobile and more desktop-like. They also tend to cost $3k+.
    Just to remind you of what you said:
    Probably because Ryzen 7000 series are not mobile parts. These chips are focused entirely on performance with power efficiency being secondary.
    Clearly power efficiency is not secondary, and they are also developed for mobile platforms. These are the exact same dies performing over 60% better than their predecessors at the same TDP. When was the last time we had such a big jump in efficiency? Bulldozer -> Zen 1, maybe?
    Those are the ones I'm talking about, and they'll be on 4nm. Don't read too much into TDP, as nobody rates their CPUs consistently. Apple's M1 Ultra is said to use 60 watts, so is that better or worse than AMD's and Intel's TDP? This is why we have benchmarks.
    They will come at different TDPs; just compare the SKUs that best match Apple's M1 power consumption and there you go.

    Originally posted by qarium View Post
    In my point of view, Metal is not necessary for such a benchmark; a Vulkan-to-Metal wrapper like MoltenVK will do the job just fine.
    WebGPU is an even better fit; it is an open API...
    Are there any benchmarks for WebGPU showing what you claim?
    If you call any fairness a highly selective benchmark... then something is wrong in your brain.
    If you only allow a specific type of API that might favor your still-unproven claim, then that's the definition of highly selective. It has nothing to do with fairness, just with reality.
    It is complete fraud to benchmark like this.
    There is no fraud if the benchmark clearly states what it compares. You may have gotten the meanings of fair and fraud mixed up; that's OK, I'm not a native speaker either.



  • Dukenukemx
    replied
    Originally posted by qarium View Post
    It is like you say: Apple does not work with game developers on this subject...
    but right now, in this market situation, it also does not make sense for Apple to do so.
    If Apple sat down and worked with game developers, the developers would ask Apple to support Vulkan on macOS. They would also ask Apple to make exceptions for their 32-bit legacy games, because they have no intention of rewriting their code for 64-bit.
    I am 100% sure it is not Apple's fault... it is still bad for the customers, of course.
    These evil monopolies of Microsoft, Intel, and Nvidia hurt customers.
    You can include Apple in those evil monopolies.
    All Apple can do about this is provide mitigations like Rosetta 2...
    And Vulkan. And 32-bit support.
    Well, let's say this would be a fair win for the PC.

    But your other examples are "bullshit".
    Don't worry, Resident Evil 8 will be out soon and that will be 100% native. It'll still be faster on x86.
    Intel won this evil game even 20 years ago... DEC Alpha had the better CPU: it was 32-bit x86 vs. 64-bit Alpha, Intel at 166 MHz and Alpha at 400 MHz...

    Alpha lost the game because everyone only benchmarked x86 code emulated on Alpha... but with native code, Alpha was always faster.

    We have to stop this bullshit, because as in the DEC Alpha case, the better solution goes down in history and the bad solution rules the world.

    Intel, with this ISA-war bullshit, is a truly evil company, and we should avoid anything from them until their monopoly breaks into pieces.
    Intel won because Intel didn't do a 180 and walk away from a CPU architecture like Apple has done at least once a decade. On top of that, Intel had to fight AMD for x86 supremacy while the historically failed CPU architectures didn't: I'm talking MIPS, Motorola 68000, PowerPC, and even ARM, as they had no direct competition to incentivize them to improve. No matter how outdated x86 is, the dynamic of AMD vs. Intel will always push the architecture beyond even the most well-thought-out CPU designs. ARM already faltered, as they were put up for sale and Nvidia almost bought them. That means any real ARM improvements will probably come from Apple and Qualcomm. Apple has no real competitor: nobody else can make CPUs for macOS, and nobody else can use macOS for their systems. Apple in the early 2000s would publish fake benchmarks that showed PowerPC beating Intel, which was total bullshit. Apple will do it again once AMD and Intel push ahead of Apple's ARM silicon.

    As it stands right now, AMD has pushed Intel to create their own graphics and to seek manufacturers like TSMC and IBM to make their CPUs, things that are often joked about with Intel; but because Intel can lose market share to AMD, they are incentivized to push for better designs. This goes the other way too: AMD screwed up with Bulldozer and was forced to make Ryzen. Apple, and ARM in general, don't have competition, and will therefore regress in performance and power consumption. They have people like you who will back up their claims to the bitter end. All CPU architectures do this, because once an architecture is established, companies figure you won't rewrite all your software just to switch. Intel did this once AMD failed with Bulldozer, and we were stuck with dual- and quad-core CPUs until Ryzen was released. This is why Apple's M1 looked so good: both AMD and Intel were going through a transition. Apple knew that 2020 was the best time to release the M1, when AMD was playing catch-up to Intel and Intel was pretending that 10nm was good enough for everyone.



  • qarium
    replied
    Originally posted by Dukenukemx View Post
    Yes it is, when nobody is accommodating game developers. Guess what Microsoft and Valve do very well?
    It is like you say: Apple does not work with game developers on this subject...
    but right now, in this market situation, it also does not make sense for Apple to do so.

    People like you, who want to game and want to play legacy closed-source x86 games, buy AMD anyway...

    Originally posted by Dukenukemx View Post
    I didn't say any such thing.
    That's Apple's fault, not mine. If it doesn't run, or runs slowly, that's entirely on Apple. You do know that AMD and Nvidia pay lots of money to have developers port their games and include their features. I think Apple is doing this for Resident Evil 8 and No Man's Sky.

    I am 100% sure it is not Apple's fault... it is still bad for the customers, of course.
    These evil monopolies of Microsoft, Intel, and Nvidia hurt customers.

    All Apple can do about this is provide mitigations like Rosetta 2...

    Originally posted by Dukenukemx View Post
    The best things I can think of are emulators, and I know RPCS3 and Dolphin are ported to M1. They are still faster on PC.

    Well, let's say this would be a fair win for the PC.

    But your other examples are "bullshit".

    Originally posted by Dukenukemx View Post
    I'll tell you what I tell Honda S2000 owners: potential is great until you can't prove it.
    If you want to compare hardware fairly, and the results should reflect true capabilities,
    then you have to compile the software and games for the native platform and use native APIs, not emulation and wrappers on top of wrappers...

    Intel won this evil game even 20 years ago... DEC Alpha had the better CPU: it was 32-bit x86 vs. 64-bit Alpha, Intel at 166 MHz and Alpha at 400 MHz...

    Alpha lost the game because everyone only benchmarked x86 code emulated on Alpha... but with native code, Alpha was always faster.

    We have to stop this bullshit, because as in the DEC Alpha case, the better solution goes down in history and the bad solution rules the world.

    Intel, with this ISA-war bullshit, is a truly evil company, and we should avoid anything from them until their monopoly breaks into pieces.



  • Dukenukemx
    replied
    Originally posted by qarium View Post
    Oh please, gaming is the only field where AMD has a win, but that's not Apple's or ARM's fault.
    Yes it is, when nobody is accommodating game developers. Guess what Microsoft and Valve do very well?
    You showed multiple times that in gaming you do not want a fair comparison.
    I didn't say any such thing.
    You compare old games compiled for x86, with a non-Apple/non-WebGPU GPU API (DirectX 11), translated with Rosetta 2...
    That's not a fair comparison at all. (I think it is even a 32-bit game-engine binary, and you know the Apple M2 only has 64-bit hardware inside and needs to emulate 32-bit.)
    That's Apple's fault, not mine. If it doesn't run, or runs slowly, that's entirely on Apple. You do know that AMD and Nvidia pay lots of money to have developers port their games and include their features. I think Apple is doing this for Resident Evil 8 and No Man's Sky.
    A fair comparison is like this: you compile for 64-bit ARM and use WebGPU or the Metal GPU API with the game engine,
    or at minimum you use a game with Vulkan and translate Vulkan to Metal.
    The best things I can think of are emulators, and I know RPCS3 and Dolphin are ported to M1. They are still faster on PC.
    And after this fair benchmark you are still free to buy AMD's Rembrandt, because we all know most games are legacy x86 binaries.
    This is why the meme PCMasterRace exists.

    But if you do a fair comparison you will see the Apple hardware has better battery life. (Not that it matters much if all the games are legacy x86 binaries...)
    I'll tell you what I tell Honda S2000 owners: potential is great until you can't prove it.



  • qarium
    replied
    Originally posted by Anux View Post
    Is there any open-source game supporting Metal? My bet is that open-source games use open APIs. There are only a handful of games that support Metal, and they all have either 2D graphics or 3D graphics on the level of early-2000s games. The only exception is Baldur's Gate 3, which runs slower in native mode than under Rosetta emulation. So I guess your highly selective benchmarks will do you no favor either.
    In my point of view, Metal is not necessary for such a benchmark; a Vulkan-to-Metal wrapper like MoltenVK will do the job just fine.
    WebGPU is an even better fit; it is an open API...

    "So I guess your highly selective benchmarks will do you no favor either."

    If you call any fairness a highly selective benchmark... then something is wrong in your brain.

    If you do not have a native ARM binary and a native GPU API, you are benchmarking anything but the hardware you bought.

    And Intel dominated the market on this fraud for a long time. For example, DEC Alpha was far ahead of Intel, with native 64-bit and high clock speeds like 400 MHz at a time when Intel only had 166 MHz CPUs... and Intel only won because everyone benchmarked the systems with x86 software, which ran on the DEC Alpha in emulation mode. With native software the DEC Alpha was faster; with x86 emulation the native Intel CPU was faster.

    It is complete fraud to benchmark like this.



  • Dukenukemx
    replied
    Originally posted by Anux View Post
    Wrong as you can see in the slides with the 65 W comparison.
    I don't see a 7000 chip that's 65 W, but I'm sure there will be. That doesn't make it a mobile chip.
    Maybe not under the exact name, but look up Dragon Range; it's the same chiplets, just for high-end notebooks >= 55 W, surely clocked a little lower.
    This isn't new; these are laptops called desktop replacements: big, hulking machines that run hot and heavy, less mobile and more desktop-like. They also tend to cost $3k+.
    And then there is Phoenix for <= 45 W (classic G APUs with more GPU cores), but I'm not sure if they are monolithic.
    Those are the ones I'm talking about, and they'll be on 4nm. Don't read too much into TDP, as nobody rates their CPUs consistently. Apple's M1 Ultra is said to use 60 watts, so is that better or worse than AMD's and Intel's TDP? This is why we have benchmarks.
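The point about TDP labels being incomparable across vendors can be made concrete: only performance per watt of *measured* power compares like with like. A minimal sketch with purely hypothetical, made-up numbers (these are not measurements of any real chip):

```python
# Hypothetical numbers only: they illustrate why rated TDP is not comparable
# across vendors, not what any real chip draws.
chips = {
    # name: (rated_tdp_w, measured_avg_power_w, benchmark_score)
    "chip_a": (65, 88, 1900),   # rated well below what it actually draws
    "chip_b": (60, 60, 1700),   # rated close to its actual draw
}

def perf_per_watt(measured_w: float, score: float) -> float:
    """Score per watt of measured package power, not the label on the box."""
    return score / measured_w

for name, (tdp, measured, score) in chips.items():
    print(f"{name}: TDP {tdp} W, measured {measured} W, "
          f"{perf_per_watt(measured, score):.2f} points/W")
```

With these invented figures, chip_a's higher raw score hides that it drew far more than its rated TDP; per measured watt, chip_b comes out ahead, which is exactly why the post says "this is why we have benchmarks."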



  • Anux
    replied
    Originally posted by qarium View Post
    He does not show you an open-source game compiled natively as a 64-bit ARM binary with WebGPU or native Metal as the GPU API... why should he? It would ruin his show.
    Is there any open-source game supporting Metal? My bet is that open-source games use open APIs. There are only a handful of games that support Metal, and they all have either 2D graphics or 3D graphics on the level of early-2000s games. The only exception is Baldur's Gate 3, which runs slower in native mode than under Rosetta emulation. So I guess your highly selective benchmarks will do you no favor either.



  • qarium
    replied
    Originally posted by rabcor View Post
    Thanks for this, you've opened my eyes a bit to the possibility that RISC architectures might actually not be taking over and Intel and AMD are catching up;
    Dukenukemx is the same dude posting YouTube videos about Apple M1/M2 vs. x86 hardware. In these videos they benchmark x86 games with DirectX 11 translated to Metal, emulated in Rosetta 2, and I think it is a 32-bit binary while the M2 only has 64-bit hardware.
    Then the result is that Apple loses big time and battery life while gaming is low.

    He does not show you an open-source game compiled natively as a 64-bit ARM binary with WebGPU or native Metal as the GPU API... why should he? It would ruin his show.



  • qarium
    replied
    Originally posted by Dukenukemx View Post
    Why RISC-V? It isn't relevant yet, and probably won't ever be. Not unless someone like Google picks it up and spends real money to progress it.
    That's because if you run real-world applications, the Apple M1/M2 aren't really any better than AMD's Rembrandt.
    Oh please, gaming is the only field where AMD has a win, but that's not Apple's or ARM's fault.
    If you are a gamer, please buy an AMD Rembrandt... no one here will claim otherwise.
    You showed multiple times that in gaming you do not want a fair comparison.

    You compare old games compiled for x86, with a non-Apple/non-WebGPU GPU API (DirectX 11), translated with Rosetta 2...
    That's not a fair comparison at all. (I think it is even a 32-bit game-engine binary, and you know the Apple M2 only has 64-bit hardware inside and needs to emulate 32-bit.)

    A fair comparison is like this: you compile for 64-bit ARM and use WebGPU or the Metal GPU API with the game engine,
    or at minimum you use a game with Vulkan and translate Vulkan to Metal.

    And after this fair benchmark you are still free to buy AMD's Rembrandt, because we all know most games are legacy x86 binaries.

    But if you do a fair comparison you will see the Apple hardware has better battery life. (Not that it matters much if all the games are legacy x86 binaries...)



  • coder
    replied
    Originally posted by Anux View Post
    Maybe not under the exact name, but look up Dragon Range; it's the same chiplets, just for high-end notebooks >= 55 W, surely clocked a little lower.
    More importantly, the cores are the same between the desktop/server chiplets and the APUs.

    Also, scaling servers to high core counts requires a significant degree of power efficiency. 280 W for a server CPU sounds like a lot, but on a 64-core part it equates to just 70 W per 16 cores, or 35 W per 8 cores.
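The per-core arithmetic above checks out if you assume a 64-core package (an assumption; no specific SKU is named):

```python
# Power budget per core group for an assumed 64-core, 280 W server CPU,
# reproducing the 70 W / 16 cores and 35 W / 8 cores figures.
package_w = 280
cores = 64

watts_per_core = package_w / cores      # 4.375 W per core
per_16_cores = watts_per_core * 16      # 70.0 W
per_8_cores = watts_per_core * 8        # 35.0 W

print(watts_per_core, per_16_cores, per_8_cores)  # 4.375 70.0 35.0
```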

    Originally posted by Anux View Post
    Then there is Phoenix for <= 45 W (classic G APUs with more GPU cores), but I'm not sure if they are monolithic.
    For smaller die sizes, monolithic seems to win out. Heck, I'm pretty sure even the console APUs were monolithic.

