Blender 2.79 Performance On Various Intel/AMD CPUs From Ryzen To EPYC


  • #11
    Originally posted by torsionbar28 View Post
    ^ The correct answer for just about every benchmark out there. The Ryzen 1700 is a fantastic value.
    The 1600 is actually the best-value CPU, at least if you overclock it. It's even the best value if you exclude the heatsink it comes with. But, the 1700 is still very good.



    • #12
      Nice thing is that the new Blender git version supports combined CPU+GPU rendering (CUDA; I think OpenCL has had that feature for a while, but I don't have any AMD GPUs to test it with). So a TR system can be even more powerful in production work.
      At least today I found that setting next to the GPU in the CUDA tab.



      • #13
        Michael and TYAN: When will we see dual AMD EPYC (7601)?

        BTW My Xeon X3470 (Nehalem), 2.93 GHz, 4C/8T, chipset series: 3420, 24 GB tri-channel DDR3-800 ECC, openSUSE Tumbleweed, amd-staging-drm-next, Blender 2.79 needs 334 seconds for BMW27.
        Last edited by nuetzel; 26 November 2017, 07:04 PM.



        • #14
          Originally posted by nuetzel View Post
          Michael and TYAN: When will we see dual AMD EPYC (7601)?
          So far I have no dual EPYC motherboard and no insight yet on when that may change.
          Michael Larabel
          https://www.michaellarabel.com/



          • #15
            One of the best articles you have ever done, excellent selection of processors (you have access to some impressive hardware) and I love the fact that you tested a number of scenes.

            For the sake of comparison, I will post my own result with a Ryzen 5 1600 and 8 GB DDR4-2400: 370 seconds.

            A word about this processor: I was reluctant to build a new system because I thought that next year Intel was going to release mainstream processors with AVX-512, and I also believed that AMD would counter by bringing a 10C/20T or 12C/24T processor into the mainstream at possibly close to the $300 mark. Not to mention I had/have a Xeon 1241 v3, which has a base clock of 3.5 GHz and a turbo clock of 3.9 GHz, and the motherboard I was using had that performance feature that runs the CPU at the single-core turbo clock all the time, so it effectively behaved as a 3.9 GHz processor, just 100 MHz slower than the 4790K. Add in 16 GB DDR3-1600 and it made for a fast system.

            I then saw the leaked slides indicating that Intel is releasing refreshes next year, while the rumored Cannon Lake may not be here next year at all, and considering how expensive Intel motherboards that support Coffee Lake are, I thought that the R5 1600 at the $170 price point I found it at, plus the $30 I paid for the motherboard after bundle savings, was too good a deal to pass up.

            As far as how much of a performance boost one can expect: with a 19-second 1080p y4m source file, encoding to x264 with the veryslow preset, the Xeon took 135 seconds to finish the encode while the 1600 took 100 seconds. More importantly, with x264 using the ultrafast preset, which doesn't come close to using all the available cores, the Xeon was way faster, finishing in 2.492 seconds compared to the 1600's 6.712 seconds; that's nearly 3 times faster! If we up the workload to x265 with the veryslow preset, the Xeon loses, posting an encode time of 10m53.091s compared to the 1600's 8m50.850s. Two minutes faster, not all that impressive if you ask me.
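            Just to put exact numbers on those comparisons, here is a quick sanity check in Python; every figure is a time quoted in the post above, nothing else is assumed:

```python
# Encode times quoted in the post, in seconds.
x264_veryslow = {"xeon_1241v3": 135.0, "ryzen_1600": 100.0}
x264_ultrafast = {"xeon_1241v3": 2.492, "ryzen_1600": 6.712}
x265_veryslow = {"xeon_1241v3": 10 * 60 + 53.091, "ryzen_1600": 8 * 60 + 50.850}

# x264 veryslow (well threaded): the 1600 wins by a factor of 1.35.
print(round(x264_veryslow["xeon_1241v3"] / x264_veryslow["ryzen_1600"], 2))    # 1.35

# x264 ultrafast (lightly threaded): the Xeon wins by ~2.7x, not quite 3x.
print(round(x264_ultrafast["ryzen_1600"] / x264_ultrafast["xeon_1241v3"], 2))  # 2.69

# x265 veryslow: the 1600 finishes about 2 minutes (122 s) sooner.
print(round(x265_veryslow["xeon_1241v3"] - x265_veryslow["ryzen_1600"], 1))    # 122.2
```

            The ultrafast case is the interesting one: when a preset can't saturate all 12 threads, the higher-clocked quad core still wins comfortably.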

            To the R5 1600's credit, thanks to the DDR4 and the extra cores/threads, it does offer a smoother experience, and it allows me to play back some 4K content smoothly that the Xeon struggled with. But here's the sad truth, if anyone is thinking of building a new system: take an honest assessment of what your requirements are. Are you a content producer? Do your workloads take a while to finish? If the answer is yes, then there comes a point where it makes no sense to spend more money building a faster system.

            19 seconds of 1080p content takes a Ryzen 1600 2 minutes and 15 seconds to encode to x264 using the veryslow preset, which is generally considered the mastering-quality preset. Extrapolate that to a full 90-minute feature film and it would take about 11 hours to encode the whole movie.
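            A rough check of that extrapolation in Python; the 135-second and 19-second figures are from this post, and the assumption that encode time scales linearly with source length is mine:

```python
clip_seconds = 19        # length of the 1080p test clip
encode_seconds = 135     # time quoted for the x264 veryslow encode (2m15s)
movie_seconds = 90 * 60  # a 90-minute feature film

# Assume encode time grows linearly with source duration.
total_seconds = movie_seconds / clip_seconds * encode_seconds
print(round(total_seconds / 3600, 1))  # ~10.7 hours, in line with the estimate above
```

            Real encodes won't scale perfectly linearly (scene complexity varies), but it puts the order of magnitude in the right place.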

            Bottom line: until we have processors that can encode delivery content with the aggressive mastering-quality settings in a reasonable time, there's no point in going crazy and spending a pile of cash on the top-of-the-line processor; just buy the best bargain you can find and call it a day.

            Thanks for the review Michael, nice job on this one.



            • #16
              Originally posted by schmidtbag View Post
              The 1600 is actually the best-value CPU, at least if you overclock it. It's even the best value if you exclude the heatsink it comes with. But, the 1700 is still very good.
              I would agree. I just caved and picked one up for $170, plus a $30 motherboard after bundle savings, with 8 GB DDR4-2400, and even though it's not all that much faster than the Xeon 1241 v3 clocked at 3.9 GHz that it replaced, it is smoother.



              • #17
                Originally posted by nuetzel View Post
                BTW My Xeon X3470 (Nehalem), 2.93 GHz, 4C/8T, chipset series: 3420, 24 GB tri-channel DDR3-800 ECC, openSUSE Tumbleweed, amd-staging-drm-next, Blender 2.79 needs 334 seconds for BMW27.
                There is no way that a single Nehalem-based Xeon is capable of smoking a Xeon Silver 4108 and matching a 6C/12T 3.4 GHz 6800K. Perhaps you meant that you have a dual Xeon X3470 setup?



                • #18
                  These graphs make it look like Intel offers the best performance on a single chip...



                  • #19
                    Originally posted by nomadewolf View Post
                    These graphs make it look like Intel offers the best performance on a single chip...
                    Do you mean "a single server"? Or are you treating EPYC as multiple chips?



                    • #20
                      Originally posted by nomadewolf View Post
                      These graphs make it look like Intel offers the best performance on a single chip...
                      Take another look: the fastest setup tested is a dual Xeon Gold 6138 ($2,600 per CPU), and the fastest single chip tested is the EPYC 7601 ($4,900).

                      The reality is that if money is no object, an Intel-based setup is nearly always the better value when you take into account the cost of the processors, how much faster than the competition they are, and the superior power consumption. AMD offerings are, and always have been, the option you choose if you wish to save as much money on hardware as possible without getting maximum performance; it depends on what you value more.

                      I personally wish AMD would just abandon the x86 market and instead start making and selling either ARM- or RISC-V-based processors. From a business perspective, one of the dumbest things you can do is try to compete in a market by selling products based on a competitor's technology. Home-field advantage is a big thing; the competition between AMD and Intel is like the competition between the New England Patriots and the NY Jets: sure, the Jets may win a game or two against the Pats, and once in a while the Jets even manage to go into NE during the playoffs and beat them, but much more often the Pats simply manhandle everyone they play.

                      AMD is trying to compete in a market that was created by Intel and in which the rights to the basic underlying technology are owned by Intel. Sure, AMD and Intel have a cross-licensing agreement, but with the exception of the x86-64 extensions, AMD has always played "follow the leader". Intel came out with SSE; AMD tried to counter with their own SIMD instructions (how many people even remember what they were called?) and was forced to adopt SSE. Intel came out with SSE2/3/4; AMD was forced to follow suit. Intel came out with AVX/AVX2; AMD had to follow. Now Intel has AVX-512, and AMD will have no choice but to do the same. The net effect is that AMD validates Intel's technology as the superior solution: Intel doesn't adopt AMD's solutions (with the exception of x86-64 and the newly signed deal regarding AMD iGPUs), but AMD adopts Intel's.

                      If I were running AMD, I would make a play to partner with RedHat, SUSE, or Canonical, or maybe buy the rights to TrueOS, and go the route of Apple from 20 years ago: namely, bring to the desktop a Unix/Linux-based OS running on high-core-count CPUs based on the ARMv8 architecture or RISC-V. If you recall, Apple had managed to put $5 billion in the bank with their PPC architecture and their own custom OS (later BSD-based) before making all that cash in the iPod/iPhone/iPad markets. I think AMD could replicate that success if they just changed their thinking a bit.

