Originally posted by Jabberwocky
Use these benchmark results with current tech instead: https://www.computerbase.de/2023-11/...eils-deutlich/
The multi-core results are not really interesting, because, surprise surprise, the chip with more cores gets the better result...
No, the single-core results are the interesting ones.
"
Geekbench v6 – Single-Core
- Apple M3
• 100 % - Core i9-13900K
99 % - Snapdragon X Elite (80 W)
97 % - Apple M3 Max
96 % - Snapdragon X Elite (23 W)
90 % - Apple M2 Max
89 % - Apple M2
87 %
- Apple M3
As you can see in this comparison, the Apple M3 chip is the fastest and is set to 100 %, and the 13900K is then at 99 %.
If we say the Intel 14900K is maybe 3-4 % faster than the 13900K, that would put it in first place, maybe 2-3 % faster than the Apple M3 chip.
But keep in mind that the 14900K has a TDP of 282 watts or something like that, and the Apple M3 needs much less than that.
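Just to check the math on that estimate, here is a tiny Python sketch (my own arithmetic using the list above; the 3-4 % uplift is only an assumption):

```python
# rough check of the claim above: 13900K = 99 % of the M3 (= 100 %),
# and an assumed 3-4 % uplift for the 14900K over the 13900K
i9_13900k = 99.0
for uplift in (1.03, 1.04):
    i9_14900k = i9_13900k * uplift
    print(f"+{(uplift - 1) * 100:.0f} % over the 13900K -> {i9_14900k:.1f} % of the M3")
# prints ~102.0 % and ~103.0 %, so roughly 2-3 % ahead of the Apple M3
```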
"I've looked at some benchmarks for Apple's M2 (8+10) 20 billion transistors TSMC 5nm N5P vs AMD 6800U (zen3+ rdna2) 13 billion transistors TSMC 6 nm FinFET in an Asus Zenbook S13. The results doesn't match the hype IMO."
Your comparison is only correct if you focus on multi-core performance, because of course in single-core performance the AMD 6800U has no chance whatsoever against an Apple M3, a Core i9-13900K, a Snapdragon X Elite or a 14900K.
You talk about (Zen 3+ / RDNA 2), and even if we talk about Zen 4 / RDNA 3 you either have a low-performance APU or an iGPU+dGPU combination. And if, let's say, AI workloads are the most important point today, then the unified memory model of course beats the VRAM of a dGPU: even an AMD Radeon PRO W7900 only has 48 GB of VRAM, while with Apple's unified memory model you have up to 128 GB of memory available for your AI workloads...
And AMD is not yet in the game with big APUs offering huge amounts of unified memory...
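To show why that capacity matters for AI workloads, a rough back-of-the-envelope sketch (my own Python; the model sizes are just examples, not from any benchmark):

```python
# weight memory for a language model: parameters x bytes per parameter
def model_weight_gb(params_billion, bytes_per_param):
    return params_billion * bytes_per_param  # 1e9 params * bytes / 1e9 bytes-per-GB

for params in (13, 70):
    for fmt, size in (("fp16", 2), ("int8", 1)):
        print(f"{params}B params @ {fmt}: {model_weight_gb(params, size)} GB for weights alone")

# a 70B-parameter model at fp16 needs ~140 GB just for weights: too big for a
# 48 GB dGPU like the PRO W7900, but it fits in 128 GB of unified memory once
# quantized to int8 (~70 GB), and that is before activations and KV cache.
```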
Originally posted by Jabberwocky
And if I had a notebook and needed performance for AI workloads, I would prefer to SSH into a workstation or server instead of buying a more powerful notebook.
Can you please stop talking about the Apple M2? No one cares about these old history lessons anymore, you can buy an Apple M3...
I do not have an Apple M3 and I will not buy one LOL...
"The M2 has a better GPU hands down,"
The M1/M2 did not have ray-tracing support and also had no VP9/AV1 decode or encode.
This fact alone shows you that for Linux and open source an AMD GPU is better, because there you get AV1 decode and ray-tracing support.
By the way, the AMD GPU has open-source drivers, while the Apple M1/M2 only have a reverse-engineered, officially unsupported driver with, right now, only OpenGL support...
The M3, they now claim, has ray-tracing support and AV1 decode, but no AV1 encode?
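If you want to check this on your own Linux box, a quick sketch (my own example; it assumes the `vainfo` tool from libva-utils is installed):

```python
# look for hardware AV1 decode/encode profiles exposed through VA-API
import subprocess

out = subprocess.run(["vainfo"], capture_output=True, text=True).stdout
av1 = [line.strip() for line in out.splitlines() if "AV1" in line]
print("\n".join(av1) if av1 else "no hardware AV1 support reported by VA-API")
# on a recent AMD GPU you should see VAProfileAV1Profile0 with VAEntrypointVLD (decode)
```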
M2 vs 6800U:
I honestly do not care so much which one is slower or faster.
One has good open-source drivers for the GPU, and also AV1 decode support, and also official ray-tracing support for the GPU...
And the Apple M2 does not have any of this, so who cares which is faster?
The Apple M2 could have double the performance and I still would not buy it.
Originally posted by Jabberwocky
Who cares, the Apple M2 does not have VP9/AV1 decode.
Originally posted by Jabberwocky
Who cares that the Intel 14900K is maybe 4 % faster in single-core than a 13900K?
From my point of view multi-core is not very interesting, because you can always put in more cores to get more multi-core performance.
Can you explain to me how exactly Intel wants to go from a TDP of 283 watts down to only 30 watts with the same performance? That would be roughly a 9x improvement in performance per watt.
Originally posted by Jabberwocky
The end consumer does not care at all whether it is the 8-wide decode or the 10nm-vs-3nm part.
The end consumer only sees that Intel has a big problem.
Intel right now has a 6-wide decode design... Of course they can go to 8-wide decode; the point is they need more transistors for this step, because a flexible-width decode design uses more transistors than a fixed decode width.
Intel cannot afford to waste more transistors because their 10nm node cannot handle it.
Originally posted by Jabberwocky
And Intel's 10nm node cannot handle more transistors...
An ARM 8-wide decode design saves transistors compared to a flexible-width design because there are fewer possibilities to handle.
In the past Intel avoided going from a 4-wide to a 6-wide decode design and instead used hyper-threading to get more multi-core performance; for multi-core performance you do not need a wider decoder.
If Intel could go from 10nm to 3nm, then they could spend more transistors to handle this complexity at 8-wide decode...
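As a rough illustration of why fixed-width decode is cheaper to make wide (my own sketch, not how the real hardware is wired up):

```python
# with fixed 4-byte instructions every decode lane knows its start offset
# on its own, so an 8-wide decoder is trivially parallel
def fixed_width_starts(num_lanes, width=4):
    return [lane * width for lane in range(num_lanes)]

# with variable-length x86 instructions (1..15 bytes) each start offset
# depends on the lengths of all earlier instructions, so a wide decoder
# needs extra length-predecode / speculation logic -> more transistors
def variable_width_starts(lengths, num_lanes):
    starts, offset = [], 0
    for length in lengths[:num_lanes]:
        starts.append(offset)
        offset += length
    return starts

print(fixed_width_starts(8))                                 # [0, 4, 8, 12, 16, 20, 24, 28]
print(variable_width_starts([1, 3, 7, 2, 5, 15, 4, 6], 8))   # each start depends on the previous ones
```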
Originally posted by Jabberwocky
"I'll be happy to admit that I'm wrong."
You will never admit it, because you can always claim it is 10nm vs 3nm and not the ISA or the CPU design.
Originally posted by Jabberwocky
Then keep in mind that the Snapdragon X Elite is only at 4nm and the Apple M3 is at 3nm... so a Snapdragon X Elite 2.0 on 3nm would be even better.
Originally posted by Jabberwocky
And regarding your argument here... I do not see any chance whatsoever for Microsoft and closed-source software like Adobe's software in general. It is technically impossible to get what you want, or what we want, with closed-source software or a closed-source operating system. As you say, Apple can only do this because they control everything, and only if you control everything like Apple can you do this with closed source. This means Microsoft is no real competitor to Linux; we can easily beat Microsoft, and it is already happening with Valve's Steam Deck.
Honestly, I do not understand Apple. A long time ago Apple also had servers and they lost the server market completely; now they could easily officially support Linux on the M1/M2/M3, but they choose not to do this.
Also, Apple is an MPEG LA member and for this fact alone "evil": in the Apple M1/M2 they implemented all the closed-source and patented video codecs, but open-source video codecs they only adopt very slowly... why?
You always talk about the 6800U, but AMD's 7000-series mobile chips honestly look good with AV1 encode support, while the 6000 series only has decode support.
For Linux the Apple products are not ready... being faster or having better battery life does not matter in this case.
Originally posted by Jabberwocky
RISC-V is just a distraction from real free and fast solutions like OpenPOWER.
I have never seen any benchmark result of a fast RISC-V chip; they clearly do not exist.
"I hope ARM and RISC-V find more ways of improving over x86 and not in nonobjective hype tactics."
Do you remember the time when the Intel and AMD CPUs were produced on a 14nm node?
At that time, POWER9 CPUs, also produced at 14nm, defeated all the x86 chips in single-core performance.
Just to give an example of how pointless x86 is.
After that time you can no longer compare the CPUs, because one is at 10nm, the other is at 3nm, and IBM's Power10 is at 7nm,
and any difference in the production node would make you cry: "nonobjective hype tactics".
"see Strix Point in 2024 due to Windows 11 AI requirements"
What are these requirements? Last time I checked, AMD's inference accelerators were 8-bit integer,
but minifloats like 4-bit, 6-bit and 8-bit floating point clearly beat 8-bit integer.
This means the hardware company that manages to include 4/6/8-bit floating point as well as 8-bit integer will win this.
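A small sketch of why a minifloat can beat int8 here (my own toy example, not AMD's or anyone's actual format):

```python
import math

# int8 with one scale factor: uniform steps, so small values get rounded away
def quantize_int8(x, scale):
    q = max(-128, min(127, round(x / scale)))
    return q * scale

# toy stand-in for an fp8-style minifloat: keep the sign, a power-of-two
# exponent and a few mantissa bits, so precision is relative to the magnitude
def quantize_minifloat(x, mantissa_bits=3):
    if x == 0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    e = math.floor(math.log2(abs(x)))
    step = 2.0 ** (e - mantissa_bits)
    return sign * round(abs(x) / step) * step

values = [200.0, 1.5, 0.01]      # activations spanning a wide dynamic range
scale = 200.0 / 127              # int8 scale chosen to cover the largest value
for v in values:
    print(v, quantize_int8(v, scale), quantize_minifloat(v))
# the int8 version flushes 0.01 to 0, the minifloat keeps it with relative precision
```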