
Ryzen 9 3900X/3950X vs. Core i9 10900K In 380+ Benchmarks


  • #51
    Well, but "the comparable" R5 3600 does achieve 3733 points in Cinebench in the linked review, while the i5 10400F scores about 3180 points. AMD scores 17% more for about 8% more power; in other words, AMD should be around 8% more efficient, which can also be seen on page 18 of said review. That said, I wouldn't mind having either of the two CPUs, since my notebook is running an ancient i3 3110M ... Intel has really squeezed so much out of the 14nm node, it's amazing.
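    A quick back-of-the-envelope check of that claim, using the Cinebench scores quoted above and treating the ~8% power delta as an assumption (the exact wattages are in the linked review, not reproduced here):

    ```python
    # Rough efficiency comparison from the numbers quoted above.
    # The 8% power delta is an assumption taken from the post, not a measurement.
    r5_3600_score = 3733    # Cinebench points (quoted above)
    i5_10400f_score = 3180  # Cinebench points (quoted above)

    perf_ratio = r5_3600_score / i5_10400f_score  # ~1.17, i.e. ~17% faster
    power_ratio = 1.08                            # assumed ~8% more power

    efficiency_gain = perf_ratio / power_ratio - 1  # points-per-watt advantage
    print(f"Performance: +{(perf_ratio - 1) * 100:.0f}%, "
          f"efficiency: +{efficiency_gain * 100:.0f}%")
    # -> Performance: +17%, efficiency: +9% (close to the ~8% claimed above)
    ```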



    • #52
      Originally posted by Michael View Post
      Embree
      Binary: Pathtracer - Model: Crown
      Core i9 10900K: 2 Minutes, 40 Seconds
      Ryzen 9 3900X: 1 Minute, 57 Seconds
      Ryzen 9 3950X: 1 Minute, 30 Seconds
      It looks like the Z490 Aorus Master by default runs a pre-OC on the 10900K, boosting to 200W for an unlimited amount of time. Here is the deep dive by Gamers Nexus (they have a video and a written article). I read your original review as well; maybe the 125W average figure is there since many workloads are single-threaded or don't push all cores to the limit, and 125W should be the average even after a Blender run (the worst-case scenario).

      I really appreciate the hard work behind all these benchmarks.

      https://www.gamersnexus.net/guides/3...for-your-build



      • #53
        Originally posted by birdie View Post

        https://en.wikichip.org/wiki/intel/m...res/comet_lake
        ICC -march=skylake -mtune=skylake
        GCC -march=skylake -mtune=skylake
        LLVM -march=skylake -mtune=skylake
        Visual Studio /arch:AVX2 /tune:skylake
        Could you stop embarrassing yourself?
        Not sure what this is supposed to be "proof" of, but GCC has -march/-mtune tuning options for skylake, skylake-avx512, cannonlake, icelake-client, icelake-server and cascadelake.
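        For what it's worth, you can ask a local GCC which of these targets it recognizes and what it resolves them to; a minimal sketch, assuming gcc is on PATH and supports the standard -Q --help=target query:

        ```python
        # Query a local GCC for the -march/-mtune values it would actually use.
        import subprocess

        for arch in ["skylake", "cascadelake", "icelake-server"]:
            result = subprocess.run(
                ["gcc", "-Q", "--help=target", f"-march={arch}"],
                capture_output=True, text=True,
            )
            if result.returncode != 0:
                print(f"{arch}: not recognized by this GCC")
                continue
            for line in result.stdout.splitlines():
                stripped = line.strip()
                if stripped.startswith(("-march=", "-mtune=")):
                    print(f"{arch}: {stripped}")
        ```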



        • #54
          I find it hilarious that Intel fanboys are arguing about TDP when they don't even know what it stands for.

          Here is a hint: it stands for Thermal Design Power. It has nothing to do with power draw; it's just a rough indication of how much heat a processor generates. Here is the quote from Wikipedia:

          The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by a computer chip or component (often a CPU, GPU or system on a chip) that the cooling system in a computer is designed to dissipate under any workload.
          Obviously there is a correlation - generally, the more heat a CPU generates, the more power it is drawing - but it is in no way proportional between different CPUs, especially between CPUs from different manufacturers.

          Also, there is no standard for TDP. If you watch videos from Gamers Nexus (e.g. https://www.youtube.com/user/GamersNexus), they go into great detail about how to accurately measure both thermals and power; e.g. the only way to properly measure power draw for a CPU is to directly measure how much wattage it pulls from the PSU. Read https://www.gamersnexus.net/guides/2...lies?showall=1 for more information.
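          Short of that kind of PSU-side instrumentation, the package-power counters the CPU itself exposes are at least useful for trends. A minimal sketch for Linux systems with the intel-rapl powercap driver loaded (reading energy_uj may require root on recent kernels, and this reports what the CPU claims, not wall power):

          ```python
          # Estimate CPU package power from the Linux powercap/RAPL energy counter.
          import time

          RAPL = "/sys/class/powercap/intel-rapl:0"  # package 0 power domain

          def read_uj(path):
              with open(path) as f:
                  return int(f.read())

          max_uj = read_uj(f"{RAPL}/max_energy_range_uj")  # counter wraps here

          e0, t0 = read_uj(f"{RAPL}/energy_uj"), time.monotonic()
          time.sleep(1.0)
          e1, t1 = read_uj(f"{RAPL}/energy_uj"), time.monotonic()

          delta_uj = (e1 - e0) % (max_uj + 1)  # handles counter wraparound
          print(f"Package power: {delta_uj / 1e6 / (t1 - t0):.1f} W")
          ```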

          Also, many motherboards deliberately unlock the CPU by default, so yes, it's possible to have the latest CPU run at its 125W TDP outside of the PL2 boost window, but then it's slower than advertised in many benchmarks.



          • #55
            Originally posted by drakonas777 View Post
            For the 3rd time - learn the effing HW basics, dude. TDP is correct; AMD does not lie. If you are unhappy with how the boost algorithm works, you are free to disable it and also change the MB default power settings - you will get the power behavior you want.
            For the 10th time: I don't have to learn anything when 100% of reviews on the Internet (including HWiNFO and k10temp on my own PC) report that AMD CPUs consume far above their rated figure - whatever you wanna call it - no matter how long a test runs. That is with 100% default settings, no overclocking, no PBO, no anything.

            When Intel CPUs' power consumption is reported to be above the rated TDP, it is usually during the PL2 boost window or due to overclocking. You're so smug in trying to belittle me by saying "You don't understand TDP" while not giving any explanation whatsoever. As for Intel CPUs which consume less than their rated TDP - do you really wanna blame Intel for that? Are you alright, sir? I mean, you can take it as a prize, for God's sake - your CPU is more frugal than you expected it to be. You didn't suddenly find out that you need a beefier cooler to avoid CPU throttling.

            Straight from the horse's mouth:
            Package Power Tracking (PPT): The power threshold that is allowed to be delivered to the socket.
            • This is 88W for 65W TDP processors, and 142W for 105W TDP processors.
            Some of your counter-arguments are simply insane. I said AMD CPUs are better for massively parallelized tasks like rendering, compiling and scientific calculations, and you counter it with ... "There are tons of professional or specialized software which scales well". What?? Are you agreeing while disagreeing?

            Still, 99% of people out there are interested in single-threaded performance and nothing else; that's why certain ARM SoCs have a single very fast core, several slower but still fast cores, and several more very power-efficient cores. It's not just big.LITTLE; it's now VERY BIG, BIG, and little.

            Again, Phoronix is primarily for average desktop users, who will benefit from fewer but faster cores a lot more than from more but slower cores any day. And if you're so interested in SMP performance, go buy a 64-core ARM CPU, because according to your logic that's a better value proposition than a Ryzen 3950X with just 16 cores. And if SMP is all that matters, why does AMD report IPC increases "above competition" on its slides each time it releases a new uArch? Why doesn't it just add more cores and call it a day? Makes no sense according to you.

            Sorry, gonna skip the rest of your comment because there's very little to reply to. Nothing substantial; it's mostly "I'm smarter than you and you don't understand HW", while I keep posting URLs, information and data. Strangely, you agree with me sometimes - I must be an insane Intel fanboy (though I have an AMD-based PC right now: a Ryzen 7 3700X + Radeon RX 5600 XT - but then some mad AMD fan was claiming that I'm lying - in a previous similar thread I posted the receipts of my purchase, but then the guy disappeared ... out of shame, I presume :-) ).



            • #56
              Originally posted by birdie View Post
              Still 99% of people out there are interested in single-threaded performance and nothing else...
              So who has Intel been targeting with the core count increases in its desktop CPUs over the last couple of years? The remaining 1% of consumers?



              • #57
                Originally posted by birdie View Post

                For the 10th time: I don't have to learn anything when 100% of reviews on the Internet (including HWiNFO and k10temp on my own PC) report that AMD CPUs consume far above their rated figure - whatever you wanna call it - no matter how long a test runs. That is with 100% default settings, no overclocking, no PBO, no anything. [...]
                Well, TDP is not TBP/PPT. That's kind of my entire point.

                You said that AMD lied. They didn't. Did AMD say PPT is 65W? No. They effectively said "you have to use at least a 65W cooling solution to get base clocks and perhaps some boost". Do you feel the difference? You would have to use exactly a 65W cooling solution (and I mean precisely 65W, like lab-equipment grade), stress the CPU, and if the CPU still consumed 80+W, which eventually led it to throttle below base clock, then, and only then, could you make the case that AMD lied: 65W would not have been enough in that case, and the declared TDP parameter would be wrong. You can make an argument about AMD's power consumption/management being weird/misleading/whatever, just don't use the term TDP there, because that makes you wrong from a technical point of view alone.
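                Incidentally, the PPT figures quoted above line up with a fixed multiplier over TDP; a quick check (the ~1.35 ratio is derived here from those two quoted data points, not from any official formula):

                ```python
                # AMD's quoted socket power limits (PPT) vs. rated TDP (from the quote above).
                ppt_by_tdp = {65: 88, 105: 142}

                for tdp, ppt in ppt_by_tdp.items():
                    print(f"TDP {tdp:>3} W -> PPT {ppt} W (ratio {ppt / tdp:.2f})")
                # TDP  65 W -> PPT 88 W (ratio 1.35)
                # TDP 105 W -> PPT 142 W (ratio 1.35)
                # i.e. at stock, the socket may draw ~35% above TDP before PPT caps it.
                ```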

                As for fewer/faster vs. more/slower, as I've said already - it depends. There is no universal rule here; how many more and how much weaker is the key. If we were talking ARM vs. x86, or Atom vs. Skylake, then maybe we could generalize like that in the context of the desktop. However, Zen 1/2 is not that much weaker (actually the Zen 2 core itself is arguably even stronger, but whatever), so you can't make that generalization in this case. You just can't, because it's not true in 100% of cases. Dude, I've never said that IPC/ST performance is not important. It is, obviously. What I am saying is that there are plenty of practical cases where a CPU with more, slower cores is a valid choice, provided those cores are not radically weaker. That's all. Your position is like "most users can't use a lot of cores effectively, ergo AMD is shit".

                As for everything else - fair enough. Maybe I've overreacted here and there.



                • #58
                  Originally posted by birdie View Post
                  The fact is Intel does not cheat/lie about its TDP and AMD does lie about its TDP
                  Get it: that's wrong. In reality, most motherboards don't follow Intel's spec. Instead they keep turboing all the time, with no downclocking after the spec'ed "maximum boost time". So the 10900K, for example, consumes 250W under load all the time, nicely heating your room.
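                  Whether a given board actually follows the spec is easy to check from Linux; a small sketch reading the programmed PL1/PL2 limits and the PL1 time window via the powercap interface (paths assume the intel-rapl driver; the 125/250W reference values are Intel's published 10900K limits):

                  ```python
                  # Inspect the power limits the firmware/board actually programmed.
                  # constraint_0 is the long-term limit (PL1), constraint_1 the
                  # short-term limit (PL2).
                  RAPL = "/sys/class/powercap/intel-rapl:0"

                  def read(path):
                      with open(path) as f:
                          return f.read().strip()

                  pl1_w = int(read(f"{RAPL}/constraint_0_power_limit_uw")) / 1e6
                  pl2_w = int(read(f"{RAPL}/constraint_1_power_limit_uw")) / 1e6
                  tau_s = int(read(f"{RAPL}/constraint_0_time_window_us")) / 1e6

                  print(f"PL1 = {pl1_w:.0f} W sustained, "
                        f"PL2 = {pl2_w:.0f} W for ~{tau_s:.0f} s")
                  # A spec-following 10900K board would show roughly PL1=125, PL2=250;
                  # boards that "unlock" the CPU often program PL1 equal to PL2 or higher.
                  ```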





                  • #59
                    Intel's true performance is 7% of those results. Otherwise it's vulnerability hell. birdie - mental illness confirmed.



                    • #60
                      Awesome article.
                      Although for completeness it would be better to test all CPUs with PPT and both PL1/PL2 fixed at 65/95/125W (so no wattage spikes above this level).
                      Thus it could be possible to compare power efficiency of all CPUs and performance scaling at different power targets.
                      Because not everyone OC their systems or have AIO cooling in their systems (there's SFF builds at least).
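                      On the Intel side, capping PL1/PL2 for such a test doesn't even require BIOS changes; a hypothetical sketch using the Linux powercap interface (needs root and assumes the intel-rapl driver; AMD's PPT would have to be capped in firmware or with vendor tools instead):

                      ```python
                      # Cap the package to one fixed power target for an efficiency run.
                      # Run as root; firmware/board limits still apply on top of this.
                      RAPL = "/sys/class/powercap/intel-rapl:0"
                      TARGET_W = 65  # one of the 65/95/125 W test points suggested above

                      for constraint in (0, 1):  # 0 = long-term (PL1), 1 = short-term (PL2)
                          with open(f"{RAPL}/constraint_{constraint}_power_limit_uw", "w") as f:
                              f.write(str(TARGET_W * 1_000_000))

                      with open(f"{RAPL}/enabled", "w") as f:  # ensure enforcement
                          f.write("1")

                      print(f"Package capped at {TARGET_W} W for both PL1 and PL2")
                      ```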

