AMD Ryzen 3 3300X vs. Intel Core i3 10100 In 350+ Benchmarks
Originally posted by blackshard View Post
I strongly disagree. Intel GPUs have become much more performant lately. They're not much use for any kind of serious gaming, but nowadays the GPU is used for a lot of things: browsers (either by the browser itself, or via WebGL for JavaScript apps), casual gaming, accelerated video encoding/decoding, GPGPU...
It's not dead weight at all, considering that most computers are preassembled Dell/HP/Acer/whatever boxes that will never see a discrete GPU.
Another important point to consider is how much more feature-rich Intel's hardware encoding/decoding solution is compared to AMD's of the same era. On that same Ivy Bridge (3000 series), Intel can decode 4K/60fps H.264 video, while AMD is stuck at 1080p/60fps. That HD 5450 can only do 30fps.
So in the comparison of the i3 10100 and the R3 3300X, if you want to top the Intel integrated solution, you have to go for a dGPU costing nearly 100 dollars, because all the dGPUs available at 50 dollars are hopelessly obsolete junk.
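For anyone who wants to check the decode claims on their own box: on Linux, `vainfo` (from libva-utils) lists the VA-API profiles and entrypoints a GPU driver exposes, and `VAEntrypointVLD` marks hardware decode support. A minimal sketch that filters decode profiles out of vainfo-style output (the sample text below is illustrative, not captured from real hardware):

```python
import re

# Sample output in the format printed by `vainfo` (illustrative values,
# not captured from real hardware).
SAMPLE_VAINFO = """\
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSlice
      VAProfileHEVCMain               : VAEntrypointVLD
"""

def decode_profiles(vainfo_text: str) -> set:
    """Return VA-API profiles that expose a decode (VLD) entrypoint."""
    profiles = set()
    for line in vainfo_text.splitlines():
        m = re.match(r"\s*(VAProfile\w+)\s*:\s*(VAEntrypoint\w+)", line)
        if m and m.group(2) == "VAEntrypointVLD":
            profiles.add(m.group(1))
    return profiles

print(sorted(decode_profiles(SAMPLE_VAINFO)))
```

Running the same parse over real `vainfo` output shows at a glance whether a given iGPU or dGPU can hardware-decode H.264, HEVC, and so on.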
Comment
-
Something to keep in mind (as far as power goes) is that the Ryzen 3300 and 3300X are AMD's lowest tier of parts. The 3300 and 3300X even have different core configurations (the 3300 has 2 cores per CCX, while the 3300X has 4 cores on one CCX). This suggests these are basically bottom-of-the-barrel chips that couldn't cut it as 6+ core parts, which explains why they draw quite a bit more power for a given amount of performance compared to the 6c and 8c parts.
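For the curious, the CCX layout of a given chip can be inferred on Linux: on Zen 2 each CCX has its own L3 slice, so grouping logical CPUs by shared-L3 domain (exposed via /sys/devices/system/cpu/cpu*/cache/index3/shared_cpu_list) reveals the arrangement. A minimal sketch over illustrative values — the sample map below depicts a hypothetical 4-core 2+2 layout and is not read from real hardware:

```python
from collections import defaultdict

# Illustrative map: logical CPU -> shared-L3 domain id. On real hardware
# these values would come from sysfs; on Zen 2 each L3 domain corresponds
# to one CCX. This sample sketches a 2+2 layout across two CCXes.
SAMPLE_L3_ID = {0: 0, 1: 0, 2: 1, 3: 1}

def group_by_ccx(l3_id_by_cpu):
    """Group logical CPUs by shared-L3 domain (== CCX on Zen 2)."""
    groups = defaultdict(list)
    for cpu, l3 in sorted(l3_id_by_cpu.items()):
        groups[l3].append(cpu)
    return dict(groups)

print(group_by_ccx(SAMPLE_L3_ID))  # {0: [0, 1], 1: [2, 3]}
```

A 3300X would show all four cores in a single group; a 2+2 part would show two groups of two, as above.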
- Likes 1
Comment
-
Originally posted by arQon View Post
I do. :P Intel isn't hurting AT ALL. Intel is still printing money, despite now either having genuine competition in, or being totally outclassed in, every segment of the market. Because "we" don't matter: it's the companies buying 100K units at a time that do. AMD has had a MASSIVE recovery in server CPUs - to LESS THAN 10% of the market. And that "massive recovery" ISN'T sarcasm: that's how utterly dominant Intel was. So yeah, they've lost an unimaginable amount of ground - and yet, they STILL have over 90% of the market, and they're still posting record sales/profits every quarter.
Last edited by creative; 13 June 2020, 01:46 PM.
- Likes 1
Comment
-
Originally posted by AmericanLocomotive View Post
Something to keep in mind (as far as power goes) is that the Ryzen 3300 and 3300X are AMD's lowest tier of parts. The 3300 and 3300X even have different core configurations (the 3300 has 2 cores per CCX, while the 3300X has 4 cores on one CCX). This suggests these are basically bottom-of-the-barrel chips that couldn't cut it as 6+ core parts, which explains why they draw quite a bit more power for a given amount of performance compared to the 6c and 8c parts.
The 3300X, though, even BENEFITS from losing that second CCX in all the workloads its target market has. So that's a chip where both parties involved actually win out. It's pretty hard to spin that as a BAD outcome.
In either case, but especially for the 3300X, a manufacturing defect that prevents the chip from "cutting it" as a 6+ core part doesn't actually mean it's power-hungry (though I do agree that's LIKELY to be the case for many of the 3100s at least), or even that the part COULDN'T have ended up as e.g. a 3600. Zen 2 is nearly a year old now, and the yields should be pretty good at this point - and they were already good enough a year ago to make an awful lot of R5s and R7s...
I get the feeling that you're not familiar with the concept of product segmentation. It's admittedly rather counter-intuitive, but it drives an awful lot of the semiconductor industry and you should look into it next time you're bored.
- Likes 2
Comment
-
Originally posted by arQon View Post
"Bottom of the barrel" is a bit harsh - sort of. The 3100 (which is the part you actually mean when you say 3300) is very likely a "salvage" part, sure. But so what? FFS, this is something that would have been the top of Intel's consumer lineup - an i7 - just THREE YEARS AGO, at 3x the price, and as a 95W part. I don't think there's any non-abstract reason to bitch about a few extra mV. Instead of, what? AMD throwing the chips away, increasing waste and (massively) consumer cost, and sacrificing revenue they need?
- Likes 1
Comment
-
Originally posted by M@GOid View Post
The 4 cores on one CCX is the reason the 3300X was able to beat the 7700K at stock speeds in most games. Now, can you imagine if AMD does the same on the rumored 3600XT and 3800XT? That could finally take Intel's gaming crown away.
I don't think memory latency plays that much of a role given the amount of cache those chips have, and also Threadripper performing more or less the same in games even with higher memory latency.
Last edited by angrypie; 14 June 2020, 10:22 AM.
Comment
-
Originally posted by angrypie View Post
So the XT line isn't just a clock bump then? For that to happen they'd need some tweaks to lower inter-core latency.
I don't think memory latency plays that much of a role given the amount of cache those chips have, and also Threadripper performing more or less the same in games even with higher memory latency.
At least some people believe the reason the 3300X is performing so well in games is the unified core arrangement (all cores in a single CCX). Whether or not that is true, I'm not qualified to confirm.
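The single-CCX argument comes down to cross-core communication cost, and that is something anyone can probe. A rough, Linux-only sketch (hypothetical: core numbers are illustrative, it uses `os.sched_setaffinity` to pin processes, and a pipe round trip includes IPC overhead on top of the cache-line transfer, so treat it as showing the method rather than isolating cache latency): time a message ping-pong between two pinned processes, once for cores in the same CCX and once for cores in different CCXes.

```python
import os
import time
from multiprocessing import Pipe, Process

def pong(conn, cpu=None):
    """Echo messages back; optionally pinned to one core (Linux only)."""
    if cpu is not None and cpu in os.sched_getaffinity(0):
        os.sched_setaffinity(0, {cpu})
    while conn.recv() is not None:  # None is the shutdown sentinel
        conn.send("pong")

def round_trip_us(cpu_a=None, cpu_b=None, iters=1000):
    """Average microseconds per ping-pong round trip between two processes."""
    parent, child = Pipe()
    p = Process(target=pong, args=(child, cpu_b))
    p.start()
    if cpu_a is not None and cpu_a in os.sched_getaffinity(0):
        os.sched_setaffinity(0, {cpu_a})
    start = time.perf_counter()
    for _ in range(iters):
        parent.send("ping")
        parent.recv()
    elapsed = time.perf_counter() - start
    parent.send(None)  # tell the child to exit
    p.join()
    return elapsed / iters * 1e6

if __name__ == "__main__":
    # Illustrative core numbers - pick pairs inside / across CCXes on your chip.
    print(f"{round_trip_us(0, 1):.1f} us per round trip")
```

On a 3300X (one CCX) every core pair should look similar; on a 2+2 part, pairs that straddle the CCX boundary would show a noticeably higher round trip.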
Comment
-
Originally posted by arQon View Post
*snip*
I never said they were bad chips that didn't perform, or that they were poor value.
Comment