AMD Ryzen DDR4 Memory Scaling Tests On Linux


  • #61
    Originally posted by efikkan View Post
    Ryzen is a more superscalar CPU than Intel's architecture, but lacks a decent prefetcher, which makes it suffer in a number of workloads including gaming.
    But the gaming performance is mostly related to the RAM bottleneck, so the less data you transfer and the less you switch workloads between complexes, the faster Ryzen will be. I'm sure this will be considered in future software and perhaps in governor tweaks. (See the affinity sketch below.)
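    To make the complex-switching point concrete, here is a minimal sketch (my own illustration; the CPU mask is an assumption, since which logical CPUs map to one CCX depends on the kernel's enumeration and on SMT) that pins a worker thread to the first four logical CPUs:
    Code:
    /* Sketch: keep a worker thread on one complex by pinning it to
     * logical CPUs 0-3. Whether those actually form one CCX depends on
     * the kernel's enumeration, so treat the mask as an assumption. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        (void)arg;
        /* ... workload that should stay on one complex ... */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        cpu_set_t set;
        int err;

        CPU_ZERO(&set);
        for (int cpu = 0; cpu < 4; cpu++)   /* first four logical CPUs */
            CPU_SET(cpu, &set);

        pthread_create(&t, NULL, worker, NULL);
        err = pthread_setaffinity_np(t, sizeof(set), &set);
        if (err != 0)
            fprintf(stderr, "pthread_setaffinity_np: %d\n", err);
        pthread_join(t, NULL);
        return 0;
    }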

    When you look at pure IPC, Ryzen shows mostly the same single-threaded performance as Intel, while it's sometimes faster than a 6950X in multi-threaded scenarios.

    So with software and components that take the downsides of the architecture into account, the potential is greater than that of comparable Intel products, even in games, when you compare processors by price.

    Edit: For those who are curious: B350 boards can also support ECC when it's enabled by the manufacturer:
    Code:
    [  627.670696] mce: [Hardware Error]: Machine check events logged
    [  627.670712] [Hardware Error]: Corrected error, no action required.
    [  627.670717] [Hardware Error]: CPU:0 (17:1:1) MC16_STATUS[-|CE|MiscV|-|AddrV|-|-|SyndV|-|CECC]: 0x9c2040000000011b
    [  627.670722] [Hardware Error]: Error Addr: 0x00000003b3a002c0
    [  627.670724] [Hardware Error]: IPID: 0x0000009600150f00, Syndrome: 0x0000bf410a401a03
    [  627.670727] [Hardware Error]: Unified Memory Controller Extended Error Code: 0
    [  627.670728] [Hardware Error]: Unified Memory Controller Error: DRAM ECC error.
    [  627.670741] EDAC MC0: 1 CE on unknown memory (csrow:3 channel:1 page:0x787400 offset:0x5c0 grain:0 syndrome:0xbf41)
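    For anyone who wants to check those counters without digging through dmesg, here's a minimal sketch (my own, illustrative; it assumes the amd64_edac driver is loaded and memory controller mc0 is present in sysfs) that reads the totals EDAC keeps:
    Code:
    /* Minimal sketch: read the EDAC error counters that log lines like
     * the ones above feed into. Assumes the amd64_edac driver is loaded
     * and memory controller mc0 exists under sysfs. */
    #include <stdio.h>

    static long read_count(const char *path)
    {
        FILE *f = fopen(path, "r");
        long n = -1;

        if (f) {
            if (fscanf(f, "%ld", &n) != 1)
                n = -1;
            fclose(f);
        }
        return n;
    }

    int main(void)
    {
        printf("corrected (CE):   %ld\n",
               read_count("/sys/devices/system/edac/mc/mc0/ce_count"));
        printf("uncorrected (UE): %ld\n",
               read_count("/sys/devices/system/edac/mc/mc0/ue_count"));
        return 0;
    }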
    Last edited by oooverclocker; 31 March 2017, 12:49 PM.

    Comment


    • #62
      Originally posted by efikkan View Post
      That makes no sense whatsoever.
      Just look up AIDA64 and SiSoftware Sandra benchmarks, and you'll see X99 can deliver roughly double the memory bandwidth. With the recent DRAM price increases, quad memory channels will give you great memory bandwidth without buying very expensive memory. It's also great for workstations with ECC, since ECC memory doesn't support XMP.


      Ryzen 7 1800X is only faster in certain benchmarks; if those aren't your primary workloads, then the i7-6800K will offer much better overall performance. You'd have to do a lot of rendering in Blender etc. to justify choosing the Ryzen 7 1800X over the i7-6800K.


      What?
      The only thing that matters is real world performance, and that's where Intel excels.
      Ryzen is a more superscalar CPU than Intel's architecture, but lacks a decent prefetcher, which makes it suffer in a number of workloads including gaming. Having more computational power doesn't matter if the front-end of the CPU can't feed it properly. That's why you see all the Intel CPUs are fast enough for gaming with turbo at ~4 GHz, and Ryzen lagging behind. Just as with Bulldozer, it's not just a matter of waiting for the software to "optimize" for it.
      That was the AIDA64 benchmark I wrote about and what G3D presented (again, read the review). SiSoft Sandra is by far the worst benchmark suite I have ever installed on any machine; most reviewers removed it from their standard benchmark sets for that reason.

      Ryzen 7 1700X is faster in most benchmarks; if the code is proper, it will be faster in every benchmark. It is the faster CPU; there's really not much to say about it.

      Game performance can be looked at from different sides, and none of them is objective, because "game tests" aren't objective either. One can argue that FPS is all that matters, but that's as far from objective as it could be; another could argue that latency is all that matters, but that is also as far from objective as it could be. That's because games rely on tons of factors: CPU, GPU, in some cases RAM, game engine, driver optimizations, and the list goes on. You don't need to be a rocket scientist or a driver programmer to understand that.

      In general, reviews and "real world" benchmarks give you just a general picture of what to expect; they aren't gossip. As an example, way back in the day, reviewers praised the 8800GT (later 9800GT) GPU from Nvidia. It was a GPU "miles ahead" of any other GPU at the time, and when you look at benchmarks, it clearly was. Being naive, I bought into it and got that GPU, "upgrading" from a good old AGP 9600XT. And indeed, FPS roughly tripled or more (depending on the scenario): games that had worked quite fine at ~20 to 30FPS now ran at 100FPS no problem, with higher details and resolution even, and AA had even less impact. All good stuff; a good upgrade? Wrong. There was no scenario in which I could achieve, even at 100+FPS, the smoothness I had at 30FPS on the good old 9600XT: stutters all over the place where they had never existed, and lag (input latency) simply skyrocketed. I had to push 160FPS to achieve a similar (still higher) amount of lag than I had at 30FPS on the 9600XT, with worse smoothness. The experience was terrible.

      So, everyone who praised that GPU was wrong? No, their use case was different; for them, it was the "best GPU ever", while for me, to this day, it stays the worst GPU I ever had. And again, you don't need to be a rocket scientist to see why that happened: Nvidia at the time focused on shader performance, with the GPU clock at 600MHz while the shaders ran at 1500MHz (or thereabouts). That is what I'd call a cheat; the real GPU power was not bad, but due to the software in combination with this type of hardware, it was terrible. Personally, I always had lag (input latency) on Nvidia GPUs (even back on the FX5700; the only decent one was the GF4 MX440). That doesn't mean those are bad GPUs; it just means they do not fit my use case and can't satisfy my needs.

      A similar story goes for Linux: WINE + Gallium Nine gives better latency than Windows 7 (the last Windows OS I used). Windows XP had good latency; after the "upgrade" to Windows 7 (I skipped Vista), I had tons of problems with it. Linux has even better latency than XP, but most of that comes from drivers, and we can assume the same was the case with the XP-to-7 transition, except there you have no option to drop them...

      Bottom line is: use whatever you want and enjoy it, but don't for a moment claim something subjective (untrue) to be an objective fact. The Ryzen 7 1700+ is a faster CPU than the 6800K, period; that is an objective claim. The "real world" changes, and even in the "real world", according to most benchmarks online, that is an objective fact.

      For how flawed testing methodologies can be (and yes, I did wonder at the time why no one also tested Ryzen with an AMD GPU):
      When a "CPU bottleneck" is something quite different.♥ Subscribe To AdoredTV - http://bit.ly/1J7020P ► Support AdoredTV through Patreon https://www.patreon.c...

      Comment


      • #63
        Originally posted by khnazile View Post
        Could someone run tests of Ryzen with 64GB of RAM at different clock speeds? There were some slides showing that the memory clock is limited to 1866MHz with 4 dual-rank modules. It would be very disappointing if that's true.
        See here: http://www.legitreviews.com/amd-ryze...ormance_192960

        Comment


        • #64
          Originally posted by Luke_Wolf View Post
          But not quite. In real world performance, Ryzen is more than Good Enough(TM) for gaming, just as those Intel parts are Good Enough(TM) for gaming.
          There are a number of games where Ryzen loses 10-15% performance, and in some 20%. If you're building a machine with a GTX 1070 or higher, this will make the GPU a waste of money.
          If you're going to game on an RX 480 though, it matters less of course.

          Originally posted by Luke_Wolf View Post
          Does the Octocore Ryzen run behind the Quad Core Intel parts in terms of sheer FPS? Oh, absolutely, in a lot of cases
          The i5-7600K, i7-7700K, i7-6800K, i7-6900K and i7-6950X all perform close to the same in gaming, while Ryzen lags far behind. That's because all these Intel processors are fast enough not to be a bottleneck, while the Ryzens are not.

          Originally posted by Luke_Wolf View Post
          <cut>...
          Ryzen performance isn't bad in any sense in the real world; it's just that, in the current state of software, it's not as fast at shoving out frames compared to the Intel product.
          Well, that's what performance is, right? And a Toyota is just not as fast as a Ferrari, right? So when Ryzen fails to deliver, performance suddenly stops mattering for AMD fans, interestingly…

          Originally posted by oooverclocker View Post
          But the gaming performance is mostly related to the RAM bottleneck so the less data you transfer/the less you switch workloads between complexes the faster Ryzen will be and I'm sure this will be considered in future software and perhaps governor tweaks.
          Memory bandwidth has little to no effect on gaming performance.
          Rendering threads are usually bottlenecked by cache misses and branch mispredictions, and that's why Intel does so much better. (See the sketch below.)
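          To illustrate that second point, here's a rough sketch (my own, illustrative only, not from any game engine): the same loop over the same values runs far faster once the data is sorted, because the branch becomes predictable to the CPU:
          Code:
          /* Rough sketch of the branch misprediction effect: the same loop
           * over the same data runs much faster once the data is sorted,
           * because the branch becomes predictable. Not a rigorous benchmark. */
          #include <stdio.h>
          #include <stdlib.h>
          #include <time.h>

          #define N (1 << 24)

          static volatile long long sink;  /* keeps the sum from being optimized out */

          static int cmp(const void *a, const void *b)
          {
              return *(const int *)a - *(const int *)b;
          }

          static double timed_sum(const int *v, size_t n)
          {
              clock_t t0 = clock();
              long long sum = 0;

              for (size_t i = 0; i < n; i++)
                  if (v[i] >= 128)         /* ~50/50 branch on random bytes */
                      sum += v[i];
              sink = sum;
              return (double)(clock() - t0) / CLOCKS_PER_SEC;
          }

          int main(void)
          {
              int *v = malloc(N * sizeof *v);
              if (!v)
                  return 1;
              for (size_t i = 0; i < N; i++)
                  v[i] = rand() % 256;

              printf("random: %.3fs\n", timed_sum(v, N));
              qsort(v, N, sizeof *v, cmp);
              printf("sorted: %.3fs\n", timed_sum(v, N));
              free(v);
              return 0;
          }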

          Originally posted by leipero View Post
          Ryzen 7 1700X is faster in most benchmarks; if the code is proper, it will be faster in every benchmark. It is the faster CPU; there's really not much to say about it.
          The same argument was used for Bulldozer. But real performance matters, not theoretical "future" use cases, and there are only a few cases where Ryzen excels, and they are not the most common ones.

          Most programs are not cache optimized and not superscalar-friendly, and that's unfortunately not going to change anytime soon. Granted, things like compression, video encoding, and software rendering scale very well on both, and of course give Ryzen an edge because of its greater computational resources. But most software is not written this way, which really annoys those of us who are interested in low level optimization. The reason most software isn't designed this way is the culture of abstraction and bloat which has existed since the mid 90s. How OOP is used is one of the largest factors in performance, in games, web browsers, and various other common desktop applications.

          More bloat results in something far worse than more instructions: cache misses and branch mispredictions, both of which cause very costly CPU stalls. When cache misses occur, even bumping the clock wouldn't matter, as the penalty is constant in time. Having a better prefetcher (like Intel does) helps mitigate the problem and keep the computation units fed, but of course only to some extent. No prefetcher can make bad code good, but it helps enough to give Intel quite a few percent improvement in most software, and even a lot in some applications like Photoshop, where a quad-core from Intel beats an octa-core from AMD. Software like this is not going to be rewritten next year, so for the next few years Zen is not going to shine. Let's hope Zen+ prioritizes a massive improvement in the prefetcher, because if it does, the IPC could surpass Intel's by quite a few percent. (A sketch of the layout effect follows.)
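          As a rough illustration of the layout point above (my own sketch, hypothetical sizes): summing the same values through a contiguous array, which the prefetcher can stream, versus chasing randomly ordered pointers, which costs roughly a cache miss per step:
          Code:
          /* Rough sketch of the data layout point: summing the same values
           * through a contiguous array (which the prefetcher can stream) vs.
           * chasing randomly ordered pointers (about one cache miss per step).
           * Illustrative only. */
          #include <stdio.h>
          #include <stdlib.h>
          #include <time.h>

          #define N (1 << 22)

          struct node {
              struct node *next;
              long value;
          };

          int main(void)
          {
              long *arr = malloc(N * sizeof *arr);
              struct node *nodes = malloc(N * sizeof *nodes);
              long *order = malloc(N * sizeof *order);
              volatile long sum = 0;
              clock_t t0;

              if (!arr || !nodes || !order)
                  return 1;

              /* Fisher-Yates shuffle to scatter the traversal order in memory. */
              for (long i = 0; i < N; i++)
                  order[i] = i;
              for (long i = N - 1; i > 0; i--) {
                  long j = rand() % (i + 1), t = order[i];
                  order[i] = order[j];
                  order[j] = t;
              }

              for (long i = 0; i < N; i++) {
                  arr[i] = i;
                  nodes[order[i]].value = order[i];
                  nodes[order[i]].next = (i + 1 < N) ? &nodes[order[i + 1]] : NULL;
              }

              t0 = clock();
              for (long i = 0; i < N; i++)
                  sum += arr[i];
              printf("array:  %.3fs\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

              t0 = clock();
              for (struct node *p = &nodes[order[0]]; p; p = p->next)
                  sum += p->value;
              printf("linked: %.3fs\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

              free(arr); free(nodes); free(order);
              return 0;
          }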

          Originally posted by leipero View Post
          Game performance can be looked from different sides..., none of them is objective, because "game tests" arrent objective either. One can argue that FPS is all that matters, but that's as far from objective as it could be, another could argue that latency is all that matters, but that is also as far from objective as it could be. It's because games do rely on tons of factors, CPU, GPU in some cases RAM, game engine, driver optimizations and list goes on..., you don't need to be a rocket scientist or driver programmer to understand that.
          Of course latency matters, but this is still not something Ryzen does better.
          But latency is still measured 100% objectively.

          Comment


          • #65
            Originally posted by efikkan View Post
            There are a number of games where Ryzen loses 10-15% performance, and in some 20%. If you're building a machine with a GTX 1070 or higher, this will make the GPU a waste of money.
            If you're going to game on an RX 480 though, it matters less of course.
            On the contrary, when we're talking real world performance, which as you said is all that matters, there's a 0% performance difference between them because both run in excess of 60FPS. There aren't games you can't play or can't play as well if you've got a Ryzen instead of a Kaby Lake.

            Originally posted by efikkan View Post
            The i5-7600K, i7-7700K, i7-6800K, i7-6900K and i7-6950X all perform close to the same in gaming, while Ryzen lags far behind. That's because all these Intel processors are fast enough not to be a bottleneck, while the Ryzens are not.
            Okay, now that's just bullshit. In the vast majority of reviews, the hex-core and octo-core Intel parts lag behind their quad-core counterparts because of lower clock speeds, and with few exceptions the Ryzen octo-core is competitive with the Intel octo-core.

            Originally posted by efikkan View Post
            Well, that's what performance is, right? And a Toyota is just not as fast as a Ferrari, right? So when Ryzen fails to deliver, performance suddenly stops mattering for AMD fans, interestingly…
            Uh uh uh... There's a difference between performance in general, which considers both the theoretical and the Real World, and just Real World Performance, which you said in your last post was all that matters. In real world performance your own analogy defeats you, and you know why? Because while that Ferrari can theoretically be faster, there's this thing called speed limits. Speed limits mean that everyone can only go so fast, and as a result the Ferrari performs exactly the same as the Toyota in the Real World.

            Computers have their own speed limit when it comes to frames, and that is the refresh rate of the display. In the Real World, a computer spitting out 1000 FPS is performing just the same as a computer spitting out 100 FPS.
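            A toy sketch of that arithmetic (my own, hypothetical numbers; assumes a 60Hz display and ignores tearing and frame pacing details):
            Code:
            /* Toy illustration of the display "speed limit": at a 60Hz refresh,
             * a renderer producing 1000 FPS shows no more frames per second
             * than one producing 100 FPS. Numbers are hypothetical. */
            #include <stdio.h>

            int main(void)
            {
                const double refresh_hz = 60.0;
                const double render_fps[] = { 30.0, 60.0, 100.0, 1000.0 };

                for (int i = 0; i < 4; i++) {
                    /* Frames rendered faster than the refresh interval just
                     * wait for the next scanout; the extras are never shown. */
                    double shown = render_fps[i] < refresh_hz
                                 ? render_fps[i] : refresh_hz;
                    printf("rendered %7.1f FPS -> %5.1f frames shown per second\n",
                           render_fps[i], shown);
                }
                return 0;
            }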

            Now I'm sure you're going to go "But but... more frames in excess of 60 translates to better future performance", and I'm going to stop you right there. Game performance isn't translational: how a particular game performs only tells you how that game performs; it says nothing about any other games, and especially not about games of the future. Games are real world, not synthetic benchmarks.

            Comment


            • #66
              Originally posted by Luke_Wolf View Post
              Absolutely Correct,


              But not quite. In real world performance, Ryzen is more than Good Enough(TM) for gaming, just as those Intel parts are Good Enough(TM) for gaming. Does the Octocore Ryzen run behind the Quad Core Intel parts in terms of sheer FPS? Oh, absolutely, in a lot of cases, but in real world performance does that mean anything at all? Nope. Because after 60FPS (and in some special cases up to 144FPS, if you happen to have a lot of money to throw at displays) framerate for real world performance loses all meaning, and other utilization figures become much more useful, such as: how heavily loaded are the cores? If they're running near maxed out, you're going to get microstutters in terms of that Real World Performance(TM). Ryzen performance isn't bad in any sense in the real world; it's just that, in the current state of software, it's not as fast at shoving out frames compared to the Intel product.
              As a Ryzen user here, I can say a few things.

              I have a Ryzen 1800X OC'd to 4.2GHz with DDR4 @ 3.1GHz. I also have an Nvidia GTX 780 3GB edition. I upgraded because my Phenom 955 @ 3.4GHz was showing its age. It was well overdue for an upgrade anyway.

              First case: Mass Effect: Andromeda.
              In Task Manager, the game seems to be using 6 cores. Now don't get me wrong, this might just be a coincidence. But the game was going from very smooth to a very slow chug fest on my old Phenom CPU, and in Task Manager I noticed all 4 cores were maxed out... so I knew straight away that the CPU was a very likely limitation. Once I upgraded to Ryzen, even before overclocking, I noticed Andromeda ran silky smooth, even on ultra settings! But then I also noticed that although the game was using six cores, the cores were not going flat tack; maybe the odd spike to 80%, but basically there was a lot of headroom. So now the GPU is the limiting factor...

              For any quad core user, even with hyperthreading: if games continue in this direction, using more than 4 cores for example, Kaby Lake is going to get old very quickly. Fair enough, its single thread speed obliterates my CPU's, but real world tests seem to prove otherwise. My friend who has a 4th gen Intel i7 says it's time to upgrade because Andromeda ran too slow on his machine, and he has a similarly specced GPU too... He said it's too slow even on the lowest graphics settings. Now don't get me wrong, his GPU might actually be the cause here since he's running an AMD GPU, but still... very surprising considering my own notes on threading performance.

              Second case: Passmark v9
              What I would like to state is that in the Passmark DirectX 12 tests I had a huge improvement in speed. My Phenom was getting 5 to 6FPS in the DX12 test, but my Ryzen hits 75 to 80FPS... So tell me: how the hell does a Ryzen perform more than 10x faster than my old quad core Phenom CPU? Sure, it's a synthetic test, but the results speak for themselves. According to the test, it obliterates a Kaby Lake. I'm guessing if I was using a GTX 1080 Ti, that score would have been in the hundreds of FPS... Going 8-core is, to me, a wiser choice and definitely a better long term solution, especially for gaming. Oh, and the Ryzen has hyperthreading too...

              Comment


              • #67
                Originally posted by b15hop View Post

                As a Ryzen user here, I can say a few things.

                I have a Ryzen 1800X OC'd to 4.2GHz with DDR4 @ 3.1GHz. I also have an Nvidia GTX 780 3GB edition. I upgraded because my Phenom 955 @ 3.4GHz was showing its age. It was well overdue for an upgrade anyway.
                What cooling are you using to get to 4.2GHz?

                That's the highest OC I've heard of that isn't using liquid nitrogen; everyone else seems to hit a limit at 4.1GHz or less.


                Comment


                • #68
                  Originally posted by Herem View Post
                  What cooling are you using to get to 4.2GHz?

                  That's the highest OC I've heard of that isn't using liquid nitrogen; everyone else seems to hit a limit at 4.1GHz or less.

                  Custom water cooling: 4x 120mm fans on a 240mm radiator, two fans push, two fans pull. My temps go up and down like a yoyo, so it's not exactly stable, but then I believe I didn't seat my CPU block very well; I might have used a bit too much thermal paste.

                  Personally, it's more stable at stock speeds, and my CPU isn't the limit on performance right now. The real performance issues I have at the moment are my GPU and a lack of SSD space.

                  Comment


                  • #69
                    Originally posted by bridgman View Post

                    Yep, I have had good experiences with Asus in the past as well, but I'm probably an easier customer than some because I haven't had time to do anything other than plug the parts in and run them at stock speeds.
                    Personally, my Asus Crosshair IV Formula was a WAAAAY better board than the Crosshair VI Hero. To me the Hero is a rushed board; it's taken this long just for the BIOS to be stable. A few of their BIOS releases have been really terrible, including one that was bricking people's motherboards. I think AMD should have released these CPUs last year to iron out the bugs sooner.

                    Comment
