AMD Ryzen 3 3300X vs. Intel Core i3 10100 In 350+ Benchmarks


  • #31
    Originally posted by creative View Post
    I don't know if Intel is hurting that much
    I do. :P Intel isn't hurting AT ALL. Intel is still printing money, despite now either having genuine competition in, or being totally outclassed in, every segment of the market. Because "we" don't matter: it's the companies buying 100K units at a time that do. AMD has had a MASSIVE recovery in server CPUs - to LESS THAN 10% of the market. And that "massive recovery" ISN'T sarcasm: that's how utterly dominant Intel was. So yeah, they've lost an unimaginable amount of ground - and yet, they STILL have over 90% of the market, and they're still posting record sales/profits every quarter.

    Comment


    • #32
      Originally posted by blackshard View Post

      I strongly disagree. Intel's GPUs have become much more performant lately. They're not very useful for any kind of serious gaming, but nowadays the GPU is used for a lot of things: browsers (either by the browser itself, or via WebGL for JavaScript apps), casual gaming, accelerated video encoding/decoding, GPGPU...
      It's not dead weight at all, if you consider that most computers are preassembled Dell/HP/Acer/whatever machines that will never see a discrete GPU.
      I agree with you. Intel's graphics solution is a lot better than some people give it credit for. For example, I have an AMD HD 5450 card that I keep for diagnostic and backup purposes, and the iGPU of a slightly newer Ivy Bridge is better than it. I was surprised when I tested that card and discovered that its performance was worse than Intel's iGPU.

      Another important point to consider is how much more feature-rich Intel's hardware encoding/decoding solution is compared to AMD's of the same era. On that same Ivy Bridge (3000 series), it can decode 4K/60fps H.264 video, while AMD's parts of that era are stuck at 1080p/60fps. That HD 5450 can only do 30fps.

      So in the comparison of the i3 10100 and the R3 3300X, if you want to beat the Intel integrated solution, you have to go for a dGPU that costs close to 100 dollars, because all the dGPUs available at 50 dollars are hopelessly obsolete crap.
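
      By the way, if you want to check what a given iGPU's video engine can actually decode on Linux, the VA-API profile list tells you. A minimal sketch, assuming libva-utils (the vainfo tool) and a working VA-API driver are installed; the output format varies a bit between libva versions, so the parsing here is only illustrative:

      #!/usr/bin/env python3
      """Rough sketch: list the codec profiles the GPU's VA-API driver can decode."""
      import subprocess

      def decode_profiles():
          res = subprocess.run(["vainfo"], capture_output=True, text=True)
          out = res.stdout + res.stderr  # some libva versions log the list via stderr
          profiles = set()
          for line in out.splitlines():
              # Decode entrypoints are reported as "VAProfileXxx : VAEntrypointVLD"
              if "VAEntrypointVLD" in line and ":" in line:
                  profiles.add(line.split(":")[0].strip())
          return sorted(profiles)

      if __name__ == "__main__":
          for p in decode_profiles():
              print(p)
          # Seeing e.g. VAProfileH264High here means hardware H.264 decode is exposed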

      Comment


      • #33
        Something to keep in mind (as far as power goes) is that the Ryzen 3300 and 3300X are AMD's lowest tier of parts. The 3300 and 3300X even have different core configurations (the 3300 is 2 cores per CCX, while the 3300X is 4 cores on one CCX). This suggests that these are basically bottom-of-the-barrel chips that couldn't cut it as 6+ core parts, which explains why they draw quite a bit more power for a given amount of performance compared to the 6c and 8c parts.
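
        For anyone who wants to see that split on a running system: each Zen 2 CCX has its own L3 slice, so grouping logical CPUs by which L3 they share exposes the layout. A rough sketch against the standard Linux sysfs cache topology (run it on the box you want to inspect; it's illustrative, not hardened):

        #!/usr/bin/env python3
        """Sketch: infer the CCX layout from the sysfs cache topology."""
        import glob

        def ccx_groups():
            groups = set()
            for cache in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cache/index*"):
                with open(cache + "/level") as f:
                    if f.read().strip() != "3":
                        continue  # only interested in L3 slices
                with open(cache + "/shared_cpu_list") as f:
                    groups.add(f.read().strip())
            return sorted(groups)

        if __name__ == "__main__":
            for i, cpus in enumerate(ccx_groups()):
                print(f"L3 slice / CCX {i}: CPUs {cpus}")
            # A 4+0 part reports one group of 8 logical CPUs (with SMT enabled);
            # a 2+2 part reports two groups of 4.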

        Comment


        • #34
          Originally posted by Delgarde View Post

          Deep pockets are helpful, but they're not everything — it still takes time to turn money into a competitive product.
          Yes it does take time.

          Comment


          • #35
            Originally posted by arQon View Post

            I do. :P Intel isn't hurting AT ALL. Intel is still printing money, despite now either having genuine competition in, or being totally outclassed in, every segment of the market. Because "we" don't matter: it's the companies buying 100K units at a time that do. AMD has had a MASSIVE recovery in server CPUs - to LESS THAN 10% of the market. And that "massive recovery" ISN'T sarcasm: that's how utterly dominant Intel was. So yeah, they've lost an unimaginable amount of ground - and yet, they STILL have over 90% of the market, and they're still posting record sales/profits every quarter.
            Well, they are not getting any of my money anytime soon. Right now, at least for me, processor monitoring on Ryzen is actually better than the monitoring I had on Intel in Linux. Zenmonitor is pretty incredible compared to the monitoring on my i7, which is collecting dust on standby as an emergency backup (a CPU+cooler, motherboard and RAM drop-in). Zenmonitor shows everything for the processor: total watts per package, watts per core, and all the core frequencies with thermals, along with chip voltages. It even shows amps pulled.
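
            For anyone who wants to poke at the same numbers without Zenmonitor, they come straight from the hwmon sysfs interface that the k10temp and zenpower drivers expose. A quick sketch; which sensors and labels actually show up depends on the kernel, driver and board, so treat it as illustrative:

            #!/usr/bin/env python3
            """Sketch: dump CPU sensors (temps, voltages, currents, power) from hwmon sysfs."""
            import glob, os

            # hwmon reports temps in millidegrees C, voltages in mV, currents in mA, power in uW
            UNITS = {"temp": ("degC", 1000), "in": ("V", 1000),
                     "curr": ("A", 1000), "power": ("W", 1_000_000)}

            def dump(hwmon):
                print("==", open(os.path.join(hwmon, "name")).read().strip(), hwmon)
                for inp in sorted(glob.glob(os.path.join(hwmon, "*_input"))):
                    sensor = os.path.basename(inp)[:-len("_input")]   # e.g. "temp1"
                    kind = sensor.rstrip("0123456789")                # e.g. "temp"
                    unit, div = UNITS.get(kind, ("", 1))
                    label_path = os.path.join(hwmon, sensor + "_label")
                    label = open(label_path).read().strip() if os.path.exists(label_path) else sensor
                    value = int(open(inp).read().strip()) / div
                    print(f"  {label:15s} {value:10.3f} {unit}")

            if __name__ == "__main__":
                for hwmon in sorted(glob.glob("/sys/class/hwmon/hwmon*")):
                    if open(os.path.join(hwmon, "name")).read().strip() in ("k10temp", "zenpower"):
                        dump(hwmon)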
            Last edited by creative; 13 June 2020, 01:46 PM.

            Comment


            • #36
              Originally posted by AmericanLocomotive View Post
              Something to keep in mind (as far as power goes) is that the Ryzen 3300 and 3300X are AMD's lowest tier of parts. The 3300 and 3300X even have different core configurations (the 3300 is 2 cores per CCX, while the 3300X is 4 cores on one CCX). This suggests that these are basically bottom-of-the-barrel chips that couldn't cut it as 6+ core parts, which explains why they draw quite a bit more power for a given amount of performance compared to the 6c and 8c parts.
              "Bottom of the barrel" is a bit harsh - sort of. The 3100 (which is the part you actually mean when you say 3300) is very likely a "salvage" part, sure. But so what? FFS, this is something that would have been the top of Intel's consumer lineup - an i7 - just THREE YEARS AGO, at 3x the price, and as a 95W part. I don't think there's any non-abstract reason to bitch about a few extra mV. Instead of, what? AMD throwing the chips away, increasing waste and (massively) consumer cost, and sacrificing revenue they need?

              The 3300X though even BENEFITS from losing that second CCX in all the workloads that its target market has. So that's a chip where both the parties involved actually win out as a result. It's pretty hard to spin that as a BAD outcome.

              In either case, but especially for the 3300X, a manufacturing defect that prevents the chip from "cutting it" as a 6+ core part doesn't in any way mean it's necessarily power-hungry (though I do agree that's LIKELY to be the case for many of the 3100s at least), or even that the part COULDN'T have ended up as e.g. a 3600. Zen 2 is nearly a year old now, and the yields should be pretty good at this point - and they were already good enough a year ago to make an awful lot of R5s and R7s...

              I get the feeling that you're not familiar with the concept of product segmentation. It's admittedly rather counter-intuitive, but it drives an awful lot of the semiconductor industry and you should look into it next time you're bored.

              Comment


              • #37
                Originally posted by arQon View Post

                "Bottom of the barrel" is a bit harsh - sort of. The 3100 (which is the part you actually mean when you say 3300) is very likely a "salvage" part, sure. But so what? FFS, this is something that would have been the top of Intel's consumer lineup - an i7 - just THREE YEARS AGO, at 3x the price, and as a 95W part. I don't think there's any non-abstract reason to bitch about a few extra mV. Instead of, what? AMD throwing the chips away, increasing waste and (massively) consumer cost, and sacrificing revenue they need?

                The 3300X though even BENEFITS from losing that second CCX in all the workloads that its target market has. So that's a chip where both the parties involved actually win out as a result. It's pretty hard to spin that as a BAD outcome.

                In either case, but especially for the 3300X, a manufacturing defect that prevents the chip from "cutting it" as a 6+ core part doesn't in any way mean it's necessarily power-hungry (though I do agree that's LIKELY to be the case for many of the 3100s at least), or even that the part COULDN'T have ended up as e.g. a 3600. Zen 2 is nearly a year old now, and the yields should be pretty good at this point - and they were already good enough a year ago to make an awful lot of R5s and R7s...

                I get the feeling that you're not familiar with the concept of product segmentation. It's admittedly rather counter-intuitive, but it drives an awful lot of the semiconductor industry and you should look into it next time you're bored.
                Having all 4 cores on one CCX is the reason the 3300X was able to beat the 7700K at stock speeds in most games. Now, can you imagine if AMD did the same on the rumored 3600XT and 3800XT? That would have a real chance of finally taking Intel's gaming crown.
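
                If you want to sanity-check that claim on a 2+2 part, the easiest experiment is to run the same benchmark once pinned to cores sharing one L3 and once pinned to cores split across both CCXs. A tiny sketch using os.sched_setaffinity; the CPU ids below are placeholders, so read the real layout off the sysfs cache topology (see the snippet in #33) first:

                #!/usr/bin/env python3
                """Sketch: run a command with its threads confined to a single CCX."""
                import os, subprocess, sys

                ONE_CCX = {0, 1, 2, 3}  # placeholder CPU ids - substitute the set sharing one L3

                os.sched_setaffinity(0, ONE_CCX)           # affinity is inherited by child processes
                subprocess.run(sys.argv[1:] or ["nproc"])  # e.g. pass your benchmark as arguments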

                Comment


                • #38
                  Originally posted by M@GOid View Post

                  Having all 4 cores on one CCX is the reason the 3300X was able to beat the 7700K at stock speeds in most games. Now, can you imagine if AMD did the same on the rumored 3600XT and 3800XT? That would have a real chance of finally taking Intel's gaming crown.
                  So the XT line isn't just a clock bump then? For that to happen they'd need some tweaks to lower inter-core latency.

                  I don't think memory latency plays that much of a role, given the amount of cache those chips have, and also because Threadripper performs more or less the same in games even with higher memory latency.
                  Last edited by angrypie; 14 June 2020, 10:22 AM.

                  Comment


                  • #39
                    Originally posted by angrypie View Post

                    So the XT line isn't just a clock bump then? For that to happen they'd need some tweaks to lower inter-core latency.

                    I don't think memory latency plays that much of a role, given the amount of cache those chips have, and also because Threadripper performs more or less the same in games even with higher memory latency.
                    No, I said imagine if they did to the XT line what they did to the 3300X.

                    At least some people believe the reason the 3300X is performing so well in games is the unified core arrangement (all cores in a single CCX). Whether that is true or not, I'm not qualified to confirm.

                    Comment


                    • #40
                      Originally posted by arQon View Post
                      *snip*
                      I'm really not sure how you got that I was saying this is a "bad" product from my post. I was directly addressing some people commenting about the 3300X's power consumption relative to the "more-cored" chips. It clearly doesn't offer the same perf/watt as the 6c, 8c, 12c and 16c parts. The most obvious explanation is that they're largely salvaged dies and don't have a great V/f curve compared to some of the more highly binned parts, so they require a bit more voltage to get the desired performance. It's also likely why it took nearly a year for these chips to show up. If you have good yields, it's not a great idea financially to bin perfectly functional 6c and 8c parts down to $100 4-core parts (although such things do usually start happening late in a CPU's life cycle, when yields are extremely good). So you need to stockpile your cut-down 4-core dies over a long period of time until you have enough to launch a product.
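
                      As a back-of-the-envelope illustration of why a healthy process starves the 4-core bin (the per-core defect probability below is made up purely to show the shape of the argument, not a real yield figure):

                      #!/usr/bin/env python3
                      """Toy binning model: how 8-core chiplets distribute across bins."""
                      from math import comb

                      p = 0.03  # per-core defect probability - illustrative only
                      # Probability of exactly k good cores out of 8 (cores assumed independent)
                      dist = {k: comb(8, k) * (1 - p) ** k * p ** (8 - k) for k in range(9)}

                      bins = {
                          "8 good cores (R7-class)": dist[8],
                          "6-7 good cores (R5-class)": dist[6] + dist[7],
                          "4-5 good cores (R3-class)": dist[4] + dist[5],
                          "fewer than 4 (scrap)": sum(dist[k] for k in range(4)),
                      }
                      for name, frac in bins.items():
                          print(f"{name:28s} {frac:7.3%}")
                      # With a defect rate this low, well under 1% of dies naturally land in the
                      # 4-5 core bin, which is why that inventory takes a long time to build up.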

                      I never said they were bad chips that didn't perform, or that they were poor value.
                      I get the feeling that you're not familiar with the concept of product segmentation. It's admittedly rather counter-intuitive, but it drives an awful lot of the semiconductor industry and you should look into it next time you're bored.
                      I'm well aware of the concept of product segmentation, and fully understand why these products exist.

                      Comment
