Ryzen 9 3900X/3950X vs. Core i9 10900K In 380+ Benchmarks


  • #41
    Is an Intel fanboy actually trying to argue that Intel's chips are better at power use?

    Wow, somebody has lost their mind.

    Stick to arguing that single-threaded performance is all that matters if you want anyone to take you seriously.



    • #42
      Michael

Page 4 typo (2nd edit required): it's actually the 3900X that is 7.6% faster, not the 3950X, which is 43% faster.

      "Meanwhile for thread-happy ray-tracing workloads with C-Ray, Tachyon, rays1bench, and YafaRay, the Ryzen 9 3950X was 7.6% faster than the i9-10900K."



      • #43
birdie hasn't said anything factually incorrect or misleading.



        • #44
          Originally posted by birdie View Post
          ...
You blasted moron, how absurdly dumb are you? Claiming honesty for none other than the lousiest, most anti-user, and most corrupt company in the semiconductor business, and on the grounds of what is essentially a cheat, no less. They overclock this poor thing to the max to gain irrelevant minor wins, and only for a short duration, because most benchmarks tend to be short; thus their CPU can claim performance it cannot really sustain in real-world scenarios.

          And here is how intel CPUs perform at their advertised TDP:

          And bam, a FULL THIRD of the "performance" - gone.

Zen+ is logically virtually identical to Zen at the CPU level; there are only small tune-ups and firmware changes.

Zen 2 is a massive redesign of the SoC topology, though the cores remain largely the same; the one significant difference is the improvements to the SIMD units. The other improvement, to which AMD owes a lot of Zen 2's success, is the finally up-to-standard clock governor, yet that too is not really part of the core but a change at the SoC level.

          If anyone here is an embarrassment, that is you, by a very, very long shot. Alas, you appear to be too lacking in the mental faculties to even realize it. A true, by the book fanboi...



          • #45
            Originally posted by ddriver View Post
You blasted moron, how absurdly dumb are you? Claiming honesty for none other than the lousiest, most anti-user, and most corrupt company in the semiconductor business, and on the grounds of what is essentially a cheat, no less. They overclock this poor thing to the max to gain irrelevant minor wins, and only for a short duration, because most benchmarks tend to be short; thus their CPU can claim performance it cannot really sustain in real-world scenarios.
Don't waste your breath on that person. You know how blind these fanatics are. Think of all the flat-earthers or the 5G scaredy-cats: they could drown in proof and they would not notice.



            • #46
              Originally posted by birdie View Post

              Comet Lake features exactly the same core as Sky Lake from 2015 sans HW vulnerabilities.
Did some independent source actually verify that, or did Intel just ship a new stepping of Skylake with the latest microcode and all the "dear OS, enable all SW workarounds for vulnerabilities" bits enabled by default? My suspicion is that it's mostly the latter, and they hoped to get Ice Lake, which actually fixed these issues in silicon, out sooner, but failed. Can't verify, though; no such CPU to test.
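For what it's worth, the kernel's own assessment of which mitigations are in effect can be read from sysfs on Linux. A minimal sketch, assuming a Linux host: the `/sys/devices/system/cpu/vulnerabilities` path is standard on recent kernels, but the exact entries vary by kernel version and the directory may be absent elsewhere.

```python
import os

# Standard sysfs location on recent Linux kernels (assumption: Linux host).
VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def read_cpu_vulnerabilities(path=VULN_DIR):
    """Return {vulnerability: kernel status string}, e.g.
    {'spectre_v2': 'Mitigation: ...'}. Returns {} when the
    directory is absent (non-Linux host or very old kernel)."""
    if not os.path.isdir(path):
        return {}
    report = {}
    for name in sorted(os.listdir(path)):
        with open(os.path.join(path, name)) as f:
            report[name] = f.read().strip()
    return report

if __name__ == "__main__":
    for vuln, status in read_cpu_vulnerabilities().items():
        print(f"{vuln}: {status}")
```

This only shows what the kernel believes and enables; it cannot distinguish a silicon fix from a microcode workaround, which is exactly the ambiguity raised above.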
              Last edited by mlau; 29 May 2020, 03:47 AM.



              • #47
Remember kids, Zen 3 is only just around the corner...


                • #48
                  Originally posted by smitty3268 View Post
                  Is an Intel fanboy actually trying to argue that Intel's chips are better at power use?

                  Wow, somebody has lost their mind.

                  Stick to arguing that single-threaded performance is all that matters if you want anyone to take you seriously.
Your comments, along with all the likes, are a clear indication of people losing their minds to blind loyalty and morbid fanboyism.

                  The fact is Intel does not cheat/lie about its TDP and AMD does lie about its TDP. I wonder what peculiar justification you'll find for that. I mean in this message there was nothing substantial or factual.

Not gonna comment on any other replies because they are similarly baseless or outright wrong. Enough. People are still arguing that Intel CPUs are bad by default because the company was anticompetitive many years ago. WTF? How does that relate to any benchmarks out there? Enough with this hatred and bigotry.



                  • #49
                    10400F review:

                    Compared to AMD's Ryzen 3600 and 3600X, the 10400F is slightly slower, by 4% and 6% respectively. It depends very much on the workload though, especially tasks that are easy to parallelize, like rendering, are AMD's strongest suit, and Intel has a clear lead in single and low-threaded apps, which are relevant to the majority of consumers today. Performance gains against last generation's Core i5-9400F are impressive because of the added cores and threads; the 10400F enjoys a 15% performance advantage—at similar cost.

                    For gaming, the Core i5-10400F is a clear winner against AMD. It is faster than any AMD CPU at all resolutions—even the Ryzen 9 3900X is beat by 3%. Against Intel's own lineup, the Core i5 does very well too. It trades blows with last generation's Core i7 and Core i9 models. The Core i9-10900K is merely 5% faster.
Intel sucks, AMD rules, right? In rendering, massive compute tasks, and compilation, that is.

What a horrible CPU lineup, full of imaginary HW vulnerabilities and horrible power consumption. Oh wait, a 14nm Intel CPU consumes less than the comparable 7nm AMD 3600X:

                    Last edited by birdie; 29 May 2020, 05:29 AM.



                    • #50
                      Originally posted by birdie View Post
                      Does AMD adhere to its specs?!

                      AMD Ryzen 7 3700X is rated 65W, consumes 90W no matter how long a test runs.
                      AMD Ryzen 7 3800X is rated 90W, consumes ~120W no matter how long a test runs.
Yes, it does. TDP != power consumption. The former is a requirement on the thermal solution; the latter is an electrical parameter of the CPU. A decade or so ago, for example, most Intel CPUs had substantially lower power consumption than their TDP (especially in the low/mid end), so by your logic they were "lying" back then too. Nonsense. The TDP definition varies somewhat from vendor to vendor, but the general idea is that TDP defines the minimal thermal solution that guarantees base frequency, and often some sort of opportunistic turbo too. TDP does not define CPU power consumption in any specific case. BUT, if you use a thermal solution rated at exactly the TDP, the CPU will most likely drop or reduce all boosts eventually and fall back to ~65W consumption at base-ish frequency, because the temperature will reach the near-throttle region and the boost/thermal algorithms will act accordingly.

In your example a good thermal solution was used, so the AMD CPUs had no reason to adjust down to "TDP level" power consumption. This is normal. It's more like "GPU style" functioning, and I'd say it's a smarter way of managing the environment than what Intel does right now. Put a cooler rated at precisely 65W/95W on them and they will fall back to ~65/95W usage eventually.

In summary, TDP != power consumption, and most of your TDP-related arguments are invalid because you clearly do not understand this topic at all.
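The behavior described above can be illustrated with a toy model (all the numbers and the thermal-mass constant are made up for illustration; real boost algorithms are far more complex): the chip draws boost power while the heatsink still has headroom, then settles back to roughly TDP-level draw once it saturates.

```python
def simulate_power_draw(tdp_w, boost_w, cooler_capacity_w, seconds):
    """Toy model: the CPU draws boost_w while the heatsink still has
    thermal headroom, then falls back to ~tdp_w once it saturates."""
    stored_heat_j = 0.0
    heat_limit_j = cooler_capacity_w * 10  # arbitrary thermal mass
    trace = []
    for _ in range(seconds):
        draw = boost_w if stored_heat_j < heat_limit_j else tdp_w
        stored_heat_j += max(0.0, draw - cooler_capacity_w)  # excess heat accumulates
        trace.append(draw)
    return trace

# A "65 W TDP" part on a cooler rated for exactly 65 W: it boosts at
# 90 W at first, then settles to 65 W once the cooler saturates.
trace = simulate_power_draw(tdp_w=65, boost_w=90, cooler_capacity_w=65, seconds=60)
print(trace[0], trace[-1])  # 90 65
```

With an oversized cooler (say `cooler_capacity_w=150`) the limit is never reached and the model holds 90 W throughout, which mirrors the sustained above-TDP draw the reviews measured on a good thermal solution.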

                      Originally posted by birdie View Post
                      AMD fanboys have to just shut the f*ck up sometimes
                      No, they don't. They have every right to express an opinion, even though it may be biased.

                      Originally posted by birdie View Post
                      Intel 10900K does adhere to its TDP rating which is exactly 125W
                      Intel has different power/thermals management implementation. You may argue it's better, but it has nothing to do with "adhering". Both AMD and Intel adhere to TDP, but in different ways.

                      Originally posted by birdie View Post
                      Funny how AMD fanboys bury Intel at every turn
I don't see it in this forum. TBH, you are the only one here writing in that kind of "wccftech comments section" style; I mean the style which is childish, semi-offensive, and technically inaccurate.

                      Originally posted by birdie View Post
                      the 10900K is often faster than 3950X despite the latter having a whopping 60% MOAR cores
But not always, which means the 3950X is still a valid choice for someone who can use the cores.

                      Originally posted by birdie View Post
                      And I won't repeat what I've already said on TPU.
                      OK, let's look at it.

                      Originally posted by birdie View Post
Single threaded performance is always more preferable than MOAR cores
False generalization. It depends on the precise metrics and cases. If, in theory, we were talking about 16 Intel Pineview cores versus 8 Skylake cores, then sure. However, if we are talking 4 Skylake cores versus 6-8 Zen cores, then the preferable config is dictated by the use case. For example, most of the time the 1600X has much better performance than the 7600K, even in modern gaming. Also, typical desktop load is a lot different from, say, datacenter load. ST is not always preferable; perhaps often, but not always.
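The "it depends on the use case" point can be made concrete with Amdahl's law, which bounds the speedup from n cores when only a fraction p of the work parallelizes: speedup = 1 / ((1 - p) + p / n). A quick sketch (the parallel fractions are illustrative, not measured):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: upper bound on speedup from `cores` cores when only
    `parallel_fraction` of the work can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A 95%-parallel job (e.g. rendering) scales well on 16 cores;
# a 50%-parallel one barely benefits.
print(round(amdahl_speedup(0.95, 16), 2))  # 9.14
print(round(amdahl_speedup(0.50, 16), 2))  # 1.88
```

A 95%-parallel workload gains ~9x from 16 cores while a 50%-parallel one gains under 2x, which is why neither "MOAR cores" nor "ST above all" wins in general.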

                      Originally posted by birdie View Post
                      Comet Lake, even if it's gen 5 Sky Lake from 2015, is still faster than Zen 2 from 2019 in the resolution most competitive gamers game at, i.e. 1080p
Nobody sane is denying that. However, Intel's average advantage is <=~10% while costing more, and not all gamers are in the competitive segment seeking maximum FPS. Depending on the budget and requirements, sometimes it's better to invest in the GPU instead of the CPU/MB. For example, choosing an R5 3600 + B450 instead of an i5 10600K + Z490 saves up to ~200 USD/EUR, which, invested in the GPU, will most likely result in better overall gaming performance. I said sometimes, not always, but considering how many people are on lower incomes, and that high-end systems tend to be used for high-resolution gaming, I'd say "sometimes" is quite often. Yes, technically Intel still holds the gaming crown, but in a lot of cases there is no significant practical difference to justify the higher price.

Though personally I do not care about years and uArchs; that's what fanboys care about. I care about value for the money and my use cases.

                      Originally posted by birdie View Post
                      Yes, PL1 power consumption is quite high, however at least Intel is not deceiving its customers about it
As I said, TDP is not power. It is literally NOT an electrical parameter of the CPU; ergo, no deception here.

                      Originally posted by birdie View Post
                      Much touted AMD's advantage in power consumption/efficiency is kinda insincere to say the least. Last time I checked AMD had sold GloFo which is stuck at 12nm (which is a lot worse than Intel's 14nm) and has been using TSMC for the past several years. Intel on the other hand doesn't have this luxury while they shot too high with their initial 10nm plans whose "benefits" they have been reaping for the past two years starting with a failure called Cannon Lake.
Irrelevant. We are comparing products, not business models. AMD products have the advantage in power consumption, period. How it was done and what business model led to it is a different topic.

                      Originally posted by birdie View Post
                      The game of waiting for AMD fans continues, "Zen 3 is around the corner". "This year/next year/soon AMD will become the indisputable leader in performance". Aren't you tired of it? It's been like that for the past 10 years already if not more.
Personally, I do not give a rat's ass about it. You buy the best-value product for your specific case(s) at the time you need it. That's it. Actually, I am not sure I want AMD to be the "pure" leader, because that would most likely mean higher prices and worse value for the money. Them being a bit behind may be a good thing, as a permanent "motivator" to innovate and be aggressive.

                      Originally posted by birdie View Post
                      x265 which barely scales beyond 16 cores. Libaom (av1) barely uses more than two. For 99% of people out there 15 cores of your super duper Ryzen 9 3950X are worthless. You can have 1000 slow cores but if you have a task which fully saturates just one core, your additional 999 cores are worth literal crap. And again, absolute most applications run this way.
Maybe, but a) that's why fixed-function blocks are used (and will be used more in the future), and b) 1000 cores is an inadequate example, and the 3950X is not targeted at 99% of people.

                      Originally posted by birdie View Post
                      I'm against crapping on Intel at every turn because they don't have access to an advanced node like AMD does.
I really don't understand why you care about it that much. It's true: AMD has access to the better node, which is a major contributing factor to AMD's competitiveness. People express that in the comments. For some reason you take it on a personal/emotional level and defend Intel. Another red flag that you are the real fanboy here.

                      Originally posted by birdie View Post
                      Cheaper? CPUs, maybe. Motherboards? Are you effing kidding me? The X570 motherboards have been expensive as hell.
Yes, X570 mobos are about the same price as Z490 ones, but B450 boards are cheaper and come without artificial limitations on OC/RAM speed, unlike Intel's budget line.

                      Originally posted by birdie View Post
                      Meanwhile AMD does lie about its CPUs TDP. 3700X has 65W written in its specs and it consumes 90W!! No matter the duration of a test!! Likewise with 3800X and 3900X which are all consuming far more than AMD claims.
For the 3rd time: learn the effing HW basics, dude. The TDP is correct; AMD does not lie. If you are unhappy with how the boost algorithm works, you are free to disable it and change the MB's default power settings; you will get the power draw you want.

                      Originally posted by birdie View Post
Absolute most modern tasks the end user faces barely scale. Have you ever written a single line of code? 99% of users out there never run Blender. Users run a web browser and lo and behold, even the task of launching an application from the disk is mostly serial and cannot be really parallelized effectively for fuck's sake.
Average users do not buy many-core CPUs. In fact, they don't even buy desktops for the most part. There are tons of professional and specialized programs which scale well. As I said earlier, everything depends on the use case. Also, despite the lack of well multi-threaded applications, multi-core CPUs still offer benefits in a multi-process environment.

                      Originally posted by birdie View Post
                      Stop BS'ing me with "AMD has a super advance uArch". Intel has Ice Lake/Willow Cove which has a ~18% IPC advantage over Sky Lake and it obliterates AMD CPUs
AMD has a good core. Not super advanced, but also not "bad". Feature-wise it's somewhat similar to Skylake, but some tradeoffs were made for package scalability. There are cases where more Zen cores make more sense, and there are cases where fewer Intel cores make more sense. Stop that generalized BS.

                      Originally posted by birdie View Post
                      Each thread about Intel and AMD and AMD fanboys continue with the same lies or outdated now completely irrelevant facts.
                      You do it too.

                      Originally posted by birdie View Post
                      The fact is Intel does not cheat/lie about its TDP and AMD does lie about its TDP. I wonder what peculiar justification you'll find for that. I mean in this message there was nothing substantial or factual.
Fact is, you have no effing clue what TDP is. That's for sure.

                      Originally posted by birdie View Post
                      People are still arguing that Intel CPUs are bad by default because the company was anticompetitive many years ago
                      No, people are arguing that Intel products still have the worse value. And they are 100% correct.

                      Originally posted by birdie View Post
Intel sucks, AMD rules, right? In rendering, massive compute tasks, and compilation, that is.
Yet more nonsense of yours. Nobody said this here. Almost nobody (apart from wccftech kids, perhaps) makes this kind of argument. People who buy multi/many-core CPUs actually tend to do massive compute, compilation, rendering, crypto, whatever.


You are one or several of these: a) a troll, b) an Intel fanboy, c) a noob in PC HW. Either way, I have no further interest in your BS posts, which hold no technical/logical/factual arguments.

PS. My English is shit, but so are Intel's CPU value and your arguments.

