DDR4 vs. DDR5 Memory Performance For Intel Core i5-12600K Alder Lake On Linux


  • sdack
    replied
    DDR4-4400 is performing poorly here. It looks like the BIOS does not do much to differentiate between DDR4-4400 and DDR4-3600.



  • Slartifartblast
    replied
    Others have probably said it already, but it would be nice to see the effect of DDR5 vs. DDR4 on iGPU performance.



  • bug77
    replied
    Originally posted by piotrj3 View Post

    I bet one reason for the ECC baked in is row hammer attacks. DDR3 was vulnerable to it, DDR4 was mostly not vulnerable thanks to periodic refreshes except in very extreme scenarios, but I think DDR5 without built-in ECC would be far more vulnerable to it than DDR4.
    It's not because of that; it's because DDR5 won't hold the data long enough without it.



  • piotrj3
    replied
    Originally posted by chuckula View Post


    The baked-in ECC is to protect against errors that actually occur within an individual chip, and it really points to the fact that individual memory cells are becoming less reliable with smaller lithography and faster clocks, to the point that adding complexity to each DRAM die became necessary.

    Traditional ECC, with the extra chip and ECC in the memory controller, is still necessary to handle errors that occur while data is transferred back and forth to the CPU, which the in-chip ECC can't address.
    I bet one reason for the ECC baked in is row hammer attacks. DDR3 was vulnerable to it, DDR4 was mostly not vulnerable thanks to periodic refreshes except in very extreme scenarios, but I think DDR5 without built-in ECC would be far more vulnerable to it than DDR4.



  • MarkG
    replied
    Originally posted by bug77 View Post
    It's really nothing to worry about; last I checked some stats, as a home user you're only encountering 1 (one!) bit flip per year on average.
    Back in the day, I* brought up a "machine" (150k nodes with 32 GB of soldered-down DDR4 per node (sockets matter), at sea level (altitude matters)). There was never a moment when fewer than a dozen nodes were running my Machine Check handler due to an ECC error. We scheduled the benchies to run overnight because day vs. night mattered too.

    So, even at home, I don't build machines without ECC. A few years ago, some class-M solar flare hit Earth and two of my three machines logged single-bit ECC errors.

    Edit: * with a little help from my friends
    Last edited by MarkG; 23 November 2021, 12:15 PM.
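
For anyone curious whether their own machine is logging events like these: on Linux, DIMM-level (side-band) ECC events are reported through the EDAC subsystem. Below is a minimal sketch, assuming an EDAC driver for your memory controller is loaded; note that DDR5's on-die corrections happen entirely inside the DRAM chip and never show up in these counters.

```python
# Minimal sketch: dump Linux EDAC error counters per memory controller.
# Assumes an EDAC driver is loaded and the standard sysfs layout is present.
from pathlib import Path

EDAC_ROOT = Path("/sys/devices/system/edac/mc")

for mc in sorted(EDAC_ROOT.glob("mc[0-9]*")):
    ce = (mc / "ce_count").read_text().strip()  # corrected (single-bit) errors
    ue = (mc / "ue_count").read_text().strip()  # uncorrected errors
    print(f"{mc.name}: corrected={ce}, uncorrected={ue}")
```

A steadily climbing ce_count is exactly the kind of signal MarkG describes; a nonzero ue_count is the one to actually worry about.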



  • schmidtbag
    replied
    Originally posted by chuckula View Post
    I will say that while DDR5 is still expensive and hard to find [just like DDR4 was when it was new... and DDR3 when it was new... etc.], the fact that DDR5 at equal clocks actually outperforms DDR4 is a good sign for the technology. Of course DDR5 will eventually scale to near or maybe above 10,000 MT/s over time, but usually when the lowest tier of a new DDR standard is compared to the highest tier of its predecessor, the results favor the older standard, while you are showing the opposite here.
    I had to scroll way too far to find this comment. Despite how young it is, clock-for-clock, DDR5 does appear to be an overall improvement.

    I just really want to see iGPU results, as that is where I think it will pack more of a punch.



  • avem
    replied
    • Ryan Smith - Tuesday, July 14, 2020 - link

      So on-die ECC is a bit of a mixed blessing. To answer the big question in the gallery, on-die ECC is not a replacement for DIMM-wide ECC.

      On-die ECC is there to improve the reliability of individual chips. Between the number of bits per chip getting quite high and newer nodes getting successively harder to develop, the odds of a single-bit error are getting uncomfortably high. So on-die ECC is meant to counter that by transparently dealing with single-bit errors.

      It's similar in concept to error correction on SSDs (NAND): the error rate is high enough that a modern TLC SSD would be unusable without it. If your chips had to be perfect, these ultra-fine processes would never yield well enough to be usable.

      Consequently, DIMM-wide ECC will still be a thing, which is why the JEDEC diagram shows an LRDIMM with 20 memory packages. That's 10 chips (2 ranks) per channel, with 5 chips per rank. The 5th chip is there to provide ECC. Since the channel is narrower, you now need an extra memory chip for every 4 chips rather than every 8 as with DDR4.
    • Ryan Smith - Tuesday, July 14, 2020 - link

      And to quote SK Hynix

      "On-die error correction code (ECC)3 and error check and scrub (ECS), which were first to be adopted in DDR5, also allow for more reliable technology node scaling by correcting single bit errors internally. Therefore, it is expected to contribute to further cost reduction in the future. ECS records the DRAM defects and provides the error counts to the host, thereby increasing transparency and enhancing the reliability, availability, and serviceability (RAS) function of the server system." https://news.skhynix.com/why-ddr5-is...xt-gen-memory/
    Some info from Anandtech.
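
To put numbers on the DIMM layout Ryan Smith describes: with x8 chips, a DDR4 ECC rank needs a 72-bit bus (64 data + 8 ECC), while each narrower DDR5 subchannel needs a 40-bit bus (32 data + 8 ECC). A quick back-of-the-envelope sketch (the helper function is just for illustration):

```python
# Back-of-the-envelope: how many x8 DRAM chips an ECC rank needs.
def chips_per_rank(data_bits: int, ecc_bits: int, chip_width: int = 8) -> int:
    """Number of chips needed to cover data bits plus ECC bits."""
    return (data_bits + ecc_bits) // chip_width

print(chips_per_rank(64, 8))  # DDR4 ECC rank: 72-bit bus -> 9 chips (1 ECC chip per 8 data chips)
print(chips_per_rank(32, 8))  # DDR5 ECC subchannel: 40-bit bus -> 5 chips (1 ECC chip per 4 data chips)
```

That is exactly the "extra memory chip for every 4 chips rather than every 8" point from the quote: the relative ECC overhead doubles, from 1/9 of the chips to 1/5.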



  • avem
    replied
    Originally posted by kylew77 View Post
    Correct me if I'm wrong (I may not have understood what I was reading), but doesn't all DDR5 come with ECC built in? Or some form of ECC? I'm really a fan of the technology, especially as we put more and more RAM into systems like laptops. Almost all my RAM at work is ECC DDR4 and DDR3.
    It's "fake" ECC, i.e. it lives within the modules themselves, whereas proper ECC must be supported not only by the RAM but by the motherboard and CPU as well. It's still better than nothing, but it's not a panacea.
    Last edited by avem; 23 November 2021, 02:11 PM.



  • chuckula
    replied
    Originally posted by bug77 View Post

    Except, as the test results show, they're already mitigated. None of the tests lags behind because of that latency.

    Btw, at the same data rate you can just compare latencies in clock cycles directly; no need to convert to ns first: 36/19 ≈ 1.89, just the same.
    The tests here are basically non-interactive and not too sensitive to latency (I'd say the compiler benchmarks are the most latency-sensitive, and you see the smallest performance delta there).
    I'm not against DDR5, but in games and things like browser benchmarks you might not see an advantage until DDR5 clocks are substantially higher than DDR4's.



  • bug77
    replied
    Originally posted by xnor View Post
    And you will get free extra latency with that!! The throughput tests are nice for some applications, but they don't show the regressions from DDR5's increased latency in applications that have a large working set and do more random accesses, especially when the amount of data per access is small.

    DDR4 4400 CL19 = 8.6ns
    DDR5 4400 CL36 = 16.4ns (+90%)

    This can only be partially mitigated with larger CPU caches.
    Except, as the test results show, they're already mitigated. None of the tests lags behind because of that latency.

    Btw, at the same data rate you can just compare latencies in clock cycles directly; no need to convert to ns first: 36/19 ≈ 1.89, just the same.
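
For reference, the figures being traded here come straight from the first-word latency formula: latency in ns equals CL cycles divided by the memory clock, and the memory clock is half the data rate (DDR transfers twice per clock). A minimal sketch of the arithmetic:

```python
# First-word (CAS) latency in nanoseconds: CL cycles at the memory clock,
# which is half the data rate (two transfers per clock on DDR).
def cas_latency_ns(cl: int, data_rate_mts: float) -> float:
    return cl * 2000.0 / data_rate_mts

print(cas_latency_ns(19, 4400))  # DDR4-4400 CL19 -> ~8.6 ns
print(cas_latency_ns(36, 4400))  # DDR5-4400 CL36 -> ~16.4 ns
# Same data rate, so the data-rate terms cancel and the ratio
# is simply 36/19 ~= 1.89, i.e. roughly +90%.
```

This is also why comparing CL values across different data rates is misleading: a higher CL at a proportionally higher clock can work out to the same absolute latency in nanoseconds.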

