JEDEC Publishes DDR5 Standard - Launching At 4.8 Gbps, Better Power Efficiency


  • #31
    Originally posted by artivision View Post
    People, RAM prices drop by half when the next model comes out. For example:
    GDDR6 = $11 per GB
    GDDR5 = $5.50 per GB
    HBM2 = $11 per GB + $25 interposer
    HBM1 = $5.50 per GB + $25 interposer
    Also, Intel APUs have 64-256 MB of eDRAM cache, so they don't depend on DDR4 that much.
    You are severely understating the complexity of HBM2.
    You would need an interposer structure that fits a server-sized mainboard, and WAY more than 16-32 GB of RAM.
    Your cost model comes from a GPU with a very locally designed memory hierarchy.
    Servers also need other memory features, such as ECC. While ECC is part of the HBM2 spec, I don't think the GPU cost models include it.
    Last edited by milkylainen; 15 July 2020, 02:50 AM.
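
    To make the amortization argument concrete, here is a minimal sketch (Python; the per-GB prices are the rough figures from the quote above, and the flat $25 interposer cost is an assumption that is almost certainly optimistic at server scale, where the interposer itself would have to grow):

        # Hedged sketch: effective cost of GDDR6 vs HBM2 at different capacities.
        # Per-GB prices are the rough figures from the quoted post; the flat
        # interposer cost is an assumption and almost certainly optimistic for
        # a server-sized interposer, whose cost grows with area.
        GDDR6_PER_GB = 11.0   # $/GB, quoted figure
        HBM2_PER_GB = 11.0    # $/GB, quoted figure
        INTERPOSER = 25.0     # $ flat, assumed

        for capacity_gb in (8, 16, 64, 256):
            gddr6 = GDDR6_PER_GB * capacity_gb
            hbm2 = HBM2_PER_GB * capacity_gb + INTERPOSER
            share = INTERPOSER / hbm2
            print(f"{capacity_gb:4d} GB: GDDR6 ${gddr6:7.2f}, "
                  f"HBM2 ${hbm2:7.2f} (interposer is {share:.1%} of it)")

    At GPU capacities the interposer is a visible slice of the bill; at server capacities it disappears into the noise on paper, but only because the flat-cost assumption ignores that no single interposer can host that many stacks.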



    • #32
      Originally posted by wizard69 View Post
      a Small Form Factor machine today will beat the pants off a 5 year old desktop tower machine.
      Complete bullshit, a Sandy Bridge i7 like the 2600K will still laugh at anything with a 15 W TDP, by a big margin.

      I mean, OK, a modern SFF office machine will be better than a 5-year-old desktop office machine, but office applications haven't changed significantly, so you don't necessarily need that.
      Last edited by starshipeleven; 15 July 2020, 03:44 AM.



      • #33
        Originally posted by starshipeleven View Post
        Complete bullshit, a Sandy Bridge i7 like the 2600K will still laugh at anything with a 15 W TDP, by a big margin.
        Are you sure?



        https://www.cpubenchmark.net/compare...800U/868vs3721

        Oh, and let's not forget that Sandy Bridge lacks INVPCID, so the Meltdown mitigations will limit its I/O performance. Other mitigations also apply.
        Last edited by numacross; 15 July 2020, 04:06 AM.
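
        If you want to check whether your own CPU has the features that keep the Meltdown mitigation cheap, here is a minimal sketch (Python, Linux-only; "pcid" and "invpcid" are the flag names the kernel reports in /proc/cpuinfo):

            # Hedged sketch: check for PCID/INVPCID in /proc/cpuinfo (Linux only).
            # Without INVPCID the KPTI (Meltdown) mitigation cannot use the
            # cheaper targeted TLB invalidation, so syscall-heavy I/O pays more.
            def cpu_flags():
                with open("/proc/cpuinfo") as f:
                    for line in f:
                        if line.startswith("flags"):
                            return set(line.split(":", 1)[1].split())
                return set()

            flags = cpu_flags()
            for flag in ("pcid", "invpcid"):
                print(f"{flag:8s}: {'present' if flag in flags else 'MISSING'}")
            # A Sandy Bridge 2600K reports pcid but not invpcid.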



        • #34
          Originally posted by Brane215 View Post
          [...]and ECC will be off-the-shelf normal...
          A man can dream. For sure, a man can dream...



          • #35
            Originally posted by numacross View Post

            Are you sure?



            https://www.cpubenchmark.net/compare...800U/868vs3721

            Oh, and let's not forget that Sandy Bridge lacks INVPCID, so the Meltdown mitigations will limit its I/O performance. Other mitigations also apply.
            Not sure the real margin will be that great, if there is one at all. Have you taken into account that laptop tests measure peak performance, while sustained performance will be quite a lot lower due to throttling?
            Though in truth I haven't had a chance to work with the 4800U yet. But comparing the i7-3720QM with the i7-8650U is completely unfair to the latter under sustained load.



            • #36
              Originally posted by blacknova View Post

              Not sure the real margin will be that great, if there is one at all. Have you taken into account that laptop tests measure peak performance, while sustained performance will be quite a lot lower due to throttling?
              Though in truth I haven't had a chance to work with the 4800U yet. But comparing the i7-3720QM with the i7-8650U is completely unfair to the latter under sustained load.
              This is a fair point. Intel laptop CPUs tend to burst high and then throttle, but AMD Renoir is different: it's able to sustain top performance for far longer. This benchmark is of the higher-TDP version since I couldn't find the 15 W one, but I suspect the lower power usage would let it sustain top performance for even longer. The lowest score of this 35 W TDP 4900HS is ~1600, while a 2600K scores 612. I suspect the 15 W one would score around 650-680 in the same test, still making it faster than Sandy Bridge.
              It all depends on the use case, however.

              BTW, in the quoted notebookcheck benchmark the 14" Ryzen destroys a 17" 10th-gen Intel.
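
              The burst-versus-sustained effect is easy to see for yourself: run a fixed chunk of work in a loop and watch the throughput drift over time. A minimal sketch (Python, with a toy arithmetic kernel standing in for a real benchmark):

                  # Hedged sketch: count how many fixed workloads complete per minute.
                  # On a machine that thermally throttles, later minutes finish
                  # noticeably fewer iterations than the first one.
                  import time

                  def workload(n=2_000_000):
                      acc = 0
                      for i in range(n):          # toy CPU-bound kernel
                          acc += i * i
                      return acc

                  for minute in range(10):
                      start = time.perf_counter()
                      iters = 0
                      while time.perf_counter() - start < 60.0:
                          workload()
                          iters += 1
                      print(f"minute {minute + 1}: {iters} iterations")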



              • #37
                Originally posted by milkylainen View Post
                Yes, I know. It was merely a typo, but thanks for correcting me. ...
                But I don't think it will ever catch up with plain PCB-mounted DDR in that respect.
                I wasn't sure if I should correct it, since I usually don't want to come across as a grammar/spelling bitch, but it's good to know you didn't mind. I only want more people to become aware of HBM and what it means.

                Regarding PCB mounting, we will have to wait and see, but I'm sure DDR will come to an end like all the other memory technologies before it.

                I started with electronics when one could still easily solder IC components to wires and anyone could make a PCB at home with UV light and an acid bath. Today everything has become so tiny that my eyes cannot even make out the markings and codes on the components. It just keeps getting smaller and denser.

                I wouldn't actually mind seeing memory banks disappear. Memory, when it works, is great, but with all the choices one still ends up buying new modules with every new CPU, even if it's only to make use of higher frequencies. And when the memory timing is off, or the airflow isn't right, or a bit of dust gets caught in a socket, all this freedom turns into a hell of finding the cause. I then usually just buy as much memory as I can fit, run memory tests for days, and hope never to have to do it again. So I wouldn't mind losing this flexibility if it means I get faster memory that works more reliably, without going through the same ritual over and over.
                Last edited by sdack; 15 July 2020, 10:27 AM.
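
                For the curious, the days-long ritual boils down to something like the following minimal sketch (Python; real testers such as memtest86+ or memtester run many more patterns, lock pages in RAM, and run for hours, so this is purely an illustration of the principle):

                    # Hedged sketch: write known patterns into a buffer, read back.
                    # Real memory testers use many more patterns, pin pages, and
                    # bypass OS caching; this toy only shows write-then-verify.
                    import array

                    WORDS = 4 * 1024 * 1024  # 4M 32-bit words = 16 MiB; scale up
                    PATTERNS = (0x00000000, 0xFFFFFFFF, 0xAAAAAAAA, 0x55555555)

                    buf = array.array("I", [0]) * WORDS
                    for pattern in PATTERNS:
                        for i in range(WORDS):
                            buf[i] = pattern
                        errors = sum(1 for i in range(WORDS) if buf[i] != pattern)
                        print(f"pattern {pattern:#010x}: {errors} mismatches")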



                • #38
                  Originally posted by ms178 View Post
                  More interesting than DDR5 would be to see HBM memory used on the CPU package as a new tier of memory between on-chip SRAM and the DDR memory interface.
                  That would be interesting to see, but considering the cost I'm not convinced it's actually worth it. The only CPU use I've seen so far was that Intel MCM with an Intel CPU die, a Polaris-based GPU die from AMD, and a 4 GB HBM2 die for that GPU, there to make up for the low bandwidth of DDR4 memory.

                  General-purpose CPUs, which work out of considerably faster SRAM-based cache memory the vast majority of the time, are much more sensitive to memory latency than to bandwidth, and HBM (High Bandwidth Memory), as the name implies, is all about bandwidth.

                  Don't get me wrong though, we could start seeing a lot of CPUs with HBM pretty soon. Intel has been investing heavily in its own GPUs and is bringing a big set of new machine-learning-focused vector instructions and functionality to its upcoming CPUs. Both of these applications are highly bandwidth-dependent and would benefit from HBM to a very high degree. However, it could also be that DDR5 ends up making HBM, with all the extra silicon, cost, and additional constraints required to implement it, simply redundant.
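
                  To put rough numbers on that: peak bandwidth is just transfer rate times bus width. A minimal sketch (Python; the figures are the published peak rates for a single channel or stack, not sustained measurements):

                      # Hedged sketch: peak bandwidth = rate (GT/s) * bus width / 8.
                      # Published peak figures per channel/stack, not sustained.
                      configs = {
                          "DDR4-3200 (64-bit channel)": (3.2, 64),
                          "DDR5-4800 (64-bit channel)": (4.8, 64),
                          "HBM2 stack (1024-bit @ 2.4 Gbps/pin)": (2.4, 1024),
                      }

                      for name, (rate_gt_s, bus_bits) in configs.items():
                          print(f"{name:40s} {rate_gt_s * bus_bits / 8:6.1f} GB/s")

                  One HBM2 stack moves roughly eight times what a DDR5 channel does, yet its random-access latency sits in the same DRAM ballpark, which is why it pays off for GPUs and wide vector code far more than for branchy general-purpose workloads.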



                  • #39
                    Originally posted by sdack View Post
                    I wasn't sure if I should correct it, since I usually don't want to come across as a grammar/spelling bitch, but it's good to know you didn't mind.
                    I don't mind being corrected at all. If something is indeed fact, or can be reasonably proven by fact, only an idiot would argue.
                    I'm sure there are people for that too, but I try to avoid that trap at least.

                    Originally posted by sdack View Post
                    I wouldn't actually mind seeing memory banks disappear. Memory, when it works, is great, but with all the choices one still ends up buying new modules with every new CPU, even if it's only to make use of higher frequencies. And when the memory timing is off, or the airflow isn't right, or a bit of dust gets caught in a socket, all this freedom turns into a hell of finding the cause. I then usually just buy as much memory as I can fit, run memory tests for days, and hope never to have to do it again. So I wouldn't mind losing this flexibility if it means I get faster memory that works more reliably, without going through the same ritual over and over.
                    Agreed. And you're probably bang on for select configurations: more static ones, or high-end static-ecosystem types. Especially in volume.
                    I was thinking of things like phones, consoles, media boxes, high-end fixed-configuration desktops, high-end laptops, high-end Macs...
                    HBM2 definitely serves a purpose and has several roles to fill.
                    But the "flexible, scaling from budget to high-end without costing an arm and a leg" type of build is going to be difficult...

                    ... Unless someone starts doing custom interposers for peanuts.
                    ... Or maybe we could get a few standardized physical memory layouts with regard to socket placement, etc.? That would be interesting.
                    Then you could have higher-volume series interposers instead of a custom interposer per motherboard (a major pain).

                    I like HBM2. But I like seeing the mainstay challenged even more.
                    New ideas. New solutions. New thinking. Not "everybody should stick to the same thing because it's easy."



                    • #40
                      Originally posted by L_A_G View Post

                      That would be interesting to see, but considering the cost I'm not convinced it's actually worth it. The only CPU use I've seen so far was that Intel MCM with an Intel CPU die, a Polaris-based GPU die from AMD, and a 4 GB HBM2 die for that GPU, there to make up for the low bandwidth of DDR4 memory.

                      General-purpose CPUs, which work out of considerably faster SRAM-based cache memory the vast majority of the time, are much more sensitive to memory latency than to bandwidth, and HBM (High Bandwidth Memory), as the name implies, is all about bandwidth.

                      Don't get me wrong though, we could start seeing a lot of CPUs with HBM pretty soon. Intel has been investing heavily in its own GPUs and is bringing a big set of new machine-learning-focused vector instructions and functionality to its upcoming CPUs. Both of these applications are highly bandwidth-dependent and would benefit from HBM to a very high degree. However, it could also be that DDR5 ends up making HBM, with all the extra silicon, cost, and additional constraints required to implement it, simply redundant.
                      As I've read elsewhere, there is a school of thought that considers more cost-effective alternatives to going the HBM route (https://www.nextplatform.com/2019/08...r10-architect/) - at least the lead architect of IBM's Power CPUs thinks so ("[Could we] build something that's like a standard DIMM form factor with either a GDDR or an LPDDR memory technology and it gives you capabilities that are approaching a more exotic HBM. Yes, we can do that.").

                      But there are also proponents of the aforementioned idea of an HBM-based L4 cache (https://www.techdesignforums.com/pra...-applications/). I guess we will see which approach prevails in the end.

