DDR5-6000 Memory Performance On Linux, Scaling From DDR5 3000 to 6000 MT/s


  • Shevchen
    replied
    Originally posted by mppix View Post

    .. memory controller not keeping up ..?

    Assuming this is XMP, my very wild guess would be that the BIOS is forcing a configuration to prevent an increase in error rates.
    On boot-up, there is a standard memory training pass. If a setting doesn't POST, the next configuration is tried. If you OC, you either POST with your OC settings or get a (very slow) fallback config so that you can get back into the BIOS and try another setting. On the verge of stability, you might POST but still get errors.

    Memory OC is its own rabbit hole, and I mostly do it on Windows because a lot of OC tools are available there to test stability. Once it's stable, I use the same settings on Linux. There are entire forums on how to properly OC memory, so I'll leave it at that.
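The write-and-verify idea behind userspace stability tools like memtester can be sketched in a few lines (a toy illustration only, with a function name of my choosing; real testers pin physical memory and cycle through targeted bit patterns):

```python
import array
import random

def pattern_test(n_words=1_000_000, seed=0):
    """Fill a buffer with a pseudorandom pattern, then re-derive the same
    pattern and count read-back mismatches. A healthy system returns 0;
    marginal memory settings can show up as nonzero error counts."""
    rng = random.Random(seed)
    buf = array.array('Q', (rng.getrandbits(64) for _ in range(n_words)))
    rng = random.Random(seed)  # regenerate the identical sequence
    return sum(1 for word in buf if word != rng.getrandbits(64))
```

This only exercises whatever pages the allocator hands out, which is why dedicated tools (memtester, MemTest86) are still the right choice for actual OC validation.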

    Long story short: on Ryzen you can boost gaming performance (especially 1% low performance) by tweaking the timings. While most benchmarks focus on average framerate, 1% low performance is imho more important: stuttering kills the experience even if your average framerate is "good". The default "gucci" setting on DDR4 is 3200 C14 with tuned subtimings, with the help of the Ryzen DRAM Calculator. I'm using 3800 C14 with tuned subs, but the previous commenters are correct: you quickly hit a point of diminishing returns where the price of such RAM doesn't justify the performance gained in "just gaming workloads". If you do professional work, this *might* be worthwhile, but there you want stability, so a bleeding-edge RAM setting is a big no-no. As such, it's just a gimmick for enthusiasts who want to see how far technology can be pushed (and have the pocket money for such a hobby).
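For anyone unfamiliar with the metric, "1% lows" can be computed from recorded frame times roughly like this (one common definition; some tools report the 99th-percentile frame time instead, and the function name is mine):

```python
def one_percent_low_fps(frame_times_ms):
    """Average FPS over the slowest 1% of frames."""
    worst = sorted(frame_times_ms, reverse=True)  # longest frame times first
    n = max(1, len(worst) // 100)                 # slowest 1% of samples
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms
```

For example, 99 frames at 10 ms plus a single 50 ms stutter still averages near 100 FPS, but the 1% low is 20 FPS, which is what you feel.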



  • mppix
    replied
    Originally posted by andreano View Post

    I'm not asking about that kind of sweet spot – a point of diminishing returns due to maxing out something that's not keeping up. That was my expectation. I'm asking about datapoints that don't fit the curve: How on earth was the next fastest memory always faster than the fastest memory in every compilation benchmark?
    .. memory controller not keeping up ..?

    Assuming this is XMP, my very wild guess would be that the BIOS is forcing a configuration to prevent an increase in error rates.



  • brucethemoose
    replied
    Originally posted by andreano View Post

    I'm not asking about that kind of sweet spot – a point of diminishing returns due to maxing out something that's not keeping up. That was my expectation. I'm asking about datapoints that don't fit the curve: How on earth was the next fastest memory always faster than the fastest memory in every compilation benchmark?
    Probably some platform quirk. Maybe the IMC speed is some multiple of the memory speed at the 2nd-highest setting, or maybe some BIOS setting is automatically switched at the highest speed?
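One way such a quirk could show up: on platforms with memory "gear" modes, the memory controller runs at a fraction of the memory clock, so the controller-side clock depends on both the transfer rate and whichever gear the BIOS picks. A back-of-envelope sketch (the function name, and the idea that gearing changes at 6000, are my assumptions, not something the article confirms):

```python
def imc_clock_mhz(mt_per_s, gear):
    """DDR transfers twice per clock, so memclk = MT/s / 2;
    in gear N the memory controller runs at memclk / N."""
    return (mt_per_s / 2) / gear

# If the BIOS silently dropped to a higher gear at the top speed,
# the controller clock would not scale with the DRAM transfer rate.
```

So DDR5-6000 in gear 2 puts the controller at 1500 MHz, while 5600 in the same gear gives 1400 MHz; a gear change at the top setting could easily invert the ranking.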


    With all due respect, OC tuning in the BIOS is not Phoronix's area of expertise... not that I know anything about it either.
    Last edited by brucethemoose; 05 March 2022, 05:13 PM.



  • andreano
    replied
    Originally posted by mppix View Post

    Bigger is better, especially with today's high-core-count designs.
    The sweet spot just results from the memory controller (or the wallet) not keeping up.
    I'm not asking about that kind of sweet spot – a point of diminishing returns due to maxing out something that's not keeping up. That was my expectation. I'm asking about datapoints that don't fit the curve: How on earth was the next fastest memory always faster than the fastest memory in every compilation benchmark?



  • brucethemoose
    replied
    Originally posted by V1tol View Post
    I wonder when AMD and Intel will realize that with the current number of cores it is stupid to make only 2 memory channels. Make at least 3, or better, 4. People could buy low-capacity, slower memory and gain performance and capacity at an acceptable price.
    I bet the gains are less than you'd think in most apps, and that lots of the scaling you see here comes from reduced access times (which more channels don't get you). You're also paying far more for the extra pins on the socket, the traces in the mobo, and the extra space on the die than you would save by using slower memory.
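For scale: theoretical peak bandwidth is just transfer rate × bus width × channels, so doubling the channel count and doubling the transfer rate buy the same peak number; what differs is latency. A quick sketch (function name mine; DDR5 splits each DIMM into two 32-bit subchannels, but the total width per DIMM is still 64 bits, so the arithmetic holds):

```python
def peak_bandwidth_gbs(mt_per_s, channels, bus_width_bits=64):
    """Theoretical peak DRAM bandwidth in GB/s:
    (transfers/s) * (bytes per transfer per channel) * channels."""
    return mt_per_s * 1e6 * (bus_width_bits / 8) * channels / 1e9

# Dual-channel DDR5-6000 and quad-channel DDR5-3000 both peak at 96 GB/s,
# but the slower memory does nothing for access latency.
```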



  • TNZfr
    replied
    It's a good summary ... but multiple memory channels are available on server processors ... it's just not the same price.



  • V1tol
    replied
    I wonder when AMD and Intel will realize that with the current number of cores it is stupid to make only 2 memory channels. Make at least 3, or better, 4. People could buy low-capacity, slower memory and gain performance and capacity at an acceptable price.



  • mppix
    replied
    Originally posted by andreano View Post
    I expected to see a point of diminishing returns, but that 6000 MT/s consistently lost out to 5600 MT/s in the compilation benchmarks … was interesting! Are there some sweet and bitter spots on the performance curve? The highest speed is clearly worse for those workloads.

    What could this be? Latency?
    Bigger is better, especially with today's high-core-count designs.
    The sweet spot just results from the memory controller (or the wallet) not keeping up.



  • brucethemoose
    replied
    Originally posted by Jabberwocky View Post
    My GF is still using DDR3-1333 for [email protected] competitive gaming. It still performs well for many titles.
    Yeah, many titles are just not that demanding, and I don't mind turning down the settings in ones that are. The only things really making me want a DDR5 platform are early-access or heavily modded sandbox games, like Dyson Sphere Program or Rimworld.


    In fact, I wish Phoronix benched those... but most don't have a good deterministic, automatable benchmark. I guess you could spin up a Minecraft server and bench worldgen?
    Last edited by brucethemoose; 04 March 2022, 08:46 PM.



  • creative
    replied
    Originally posted by Jabberwocky View Post
    My GF is still using DDR3-1333 for [email protected] competitive gaming. It still performs well for many titles.
    Depending on what one's favorite games are, especially if you mostly like older stuff, it really does not take much of a system; most classics can be run very well on that type of setup: all the Half-Life games, the Dark Souls series, lots of stuff will run well. Then there is the realm of Doom and Doom II custom .wads and total conversions, which are an entire universe of their own and can be incredibly unique and innovative with stuff like zdoom/gzdoom. Then of course you have all your isometric RPGs, most of them an absolute piece of cake.

    I have a beefier modern system that's a bit overkill, but I really like the 1440p experience. I am by far not an elitist when it comes to gaming, though; I will go all the way back to my PlayStation 2 from time to time and even play some PlayStation 1 titles.

    The reason I have kept a modern system as well is for rare new releases like Scorn, which looks like a seriously bizarre Hans Ruedi Giger/Zdzisław Beksiński-in-motion surrealist and macabre first-person madman game.
    Last edited by creative; 04 March 2022, 06:43 PM.

