Radeon "GFX90A" Added To LLVM As Next-Gen CDNA With Full-Rate FP64

  • Qaridarium
    replied
    Originally posted by coder View Post
    That sounds like a workstation use case, though. I have just 16 GBs in my PCs and it's plenty. At work, we use 32 GB and I never see it fully-utilized.
    For someone who needs lots and lots of memory, like server users or maybe even Threadripper Pro, I already agree that a more scalable memory system makes sense. Indeed, that's who they designed it for.
    Not only Threadripper Pro... I have a Threadripper 1920X and I have 128 GB of ECC RAM; it cost 700€.
    If I switch to a DDR5 system I have to buy that RAM again, but with OpenCAPI I could use my old RAM in my new system.
    And I think saving 700€ on a system upgrade is a huge impact.



  • coder
    replied
    Originally posted by Qaridarium View Post
    I thought about this and I came up with a use case that saves you a lot of money in the long run.
    For example, if your old PC has DDR4 and your new PC has DDR5, you need new RAM, which means 500-800€ for something like 128 GB...
    That sounds like a workstation use case, though. I have just 16 GBs in my PCs and it's plenty. At work, we use 32 GB and I never see it fully-utilized.

    For someone who needs lots and lots of memory, like server users or maybe even Threadripper Pro, I already agree that a more scalable memory system makes sense. Indeed, that's who they designed it for.



  • Qaridarium
    replied
    Originally posted by coder View Post
    Yes, that's a nice benefit. FWIW, over the life of my PCs (usually 5-10 years), I tend to upgrade storage, GPU, and RAM once. And for RAM, I typically double the original capacity.
    I am not arguing against OpenCAPI, in general. I just do not foresee it coming to the desktop. I guess we'll just wait and see if I'm right.
    I thought about this and I came up with a use case that saves you a lot of money in the long run.
    For example, if your old PC has DDR4 and your new PC has DDR5, you need new RAM, which means 500-800€ for something like 128 GB...

    Now imagine your old system is OpenCAPI and the new one is also OpenCAPI.

    Then you can just use the old RAM in the new system, and later, when RAM prices go down, you can upgrade it.

    This saves you a lot of money.



  • coder
    replied
    Originally posted by Qaridarium View Post
    Then it is true that you get more and more GB of RAM for less and less money.
    Yes, that's a nice benefit. FWIW, over the life of my PCs (usually 5-10 years), I tend to upgrade storage, GPU, and RAM once. And for RAM, I typically double the original capacity.

    I am not arguing against OpenCAPI, in general. I just do not foresee it coming to the desktop. I guess we'll just wait and see if I'm right.



  • Qaridarium
    replied
    Originally posted by coder View Post
    Not for servers, which already use registered memory and stand to benefit most from a more scalable memory architecture. But you really can't say it's not more expensive to put a memory controller on every DIMM, rather than just have one inside the CPU. There's a reason why they moved it from the motherboard (where it was previously part of what was once called a "Northbridge" chip) and into the CPU!
    If your existing RAM is already close to maxing out the link speed, then it's also no benefit! Indeed, from what I see, the point is really about scaling capacity, rather than scaling speed. The way they get speed benefits is simply by decoupling the RAM from the CPU, so they can scale up the number of channels.
    Just look at computer history: SDRAM, DDR1, DDR2, DDR3, DDR4, DDR5...
    RAM keeps being built on smaller and smaller process nodes,
    and the price per GB keeps dropping. DDR1 was more expensive per GB than DDR2, DDR3 is more expensive per GB than DDR4, and as soon as RAM is built on a smaller node, the price per GB goes down.

    So even if your claim is right and you do not get higher performance because your RAM is already maxing out the link speed, it is still true that you get more and more GB of RAM for less and less money.

    The first RAM and the first IBM POWER9 systems may be priced high, so you say it is expensive, but over the long run scaling capacity on an old system becomes cheaper and cheaper. Say you have a stone-age system that is 10 years old... you will get a lot of RAM very cheaply if you can jump from DDR3 to DDR4 to DDR5 and DDR6...

    "The way they get speed benefits is simply by decoupling the RAM from the CPU, so they can scale up the number of channels"

    And this also sounds very good: you get the speed of a 288-pin DDR4 channel over only about 40 pins...
    This means you can build systems with a lot more RAM channels.

    So you have to admit that this is very good anti-obsolescence technology, and it really could revolutionise computer history.

    Yes, as you said, a desktop system with it could be more expensive at the start, but after 10 years you could save a lot of money.

    For my TR 1920X I just bought 128 GB of ECC RAM at 3200 MHz... but my mainboard can handle 4000 MHz RAM.

    But there is no DDR4 ECC RAM at 4000 MHz, while they could build DDR5 ECC RAM at 4000 MHz.

    This means that with OpenCAPI I would get my 4000 MHz ECC RAM years later.



  • coder
    replied
    Originally posted by Qaridarium View Post
    Do you really think it is designed "to add a lot of cost"??? No, it's not.
    Not for servers, which already use registered memory and stand to benefit most from a more scalable memory architecture. But you really can't say it's not more expensive to put a memory controller on every DIMM, rather than just have one inside the CPU. There's a reason why they moved it from the motherboard (where it was previously part of what was once called a "Northbridge" chip) and into the CPU!

    Originally posted by Qaridarium View Post
    If you have a server with OpenCAPI DDR4 RAM and later you want to upgrade it to DDR5 or GDDR6X, that's no problem.
    If your existing RAM is already close to maxing out the link speed, then it's also no benefit! Indeed, from what I see, the point is really about scaling capacity, rather than scaling speed. The way they get speed benefits is simply by decoupling the RAM from the CPU, so they can scale up the number of channels.
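    As a rough back-of-the-envelope sketch of that capacity-scaling point (all figures below are assumptions chosen for illustration, not vendor numbers): per-channel speed stays roughly where DDR4 already is, so the win comes from hanging more channels off the CPU.

    Code:
    # Rough sketch with assumed, illustrative figures: a narrow serial memory
    # link is in the same bandwidth ballpark as one DDR4-3200 channel, so the
    # benefit of decoupling is stacking more channels, not faster modules.

    DDR4_3200_CHANNEL_GBPS = 25.6   # 64-bit bus * 3200 MT/s
    SERIAL_LINK_GBPS = 25.0         # assumed per-channel serial-link bandwidth
    DIMM_CAPACITY_GB = 64           # assumed capacity per module

    def aggregate(channels: int) -> tuple[float, int]:
        """Total bandwidth (GB/s) and capacity (GB) with one module per channel."""
        return channels * SERIAL_LINK_GBPS, channels * DIMM_CAPACITY_GB

    for n in (2, 8, 16):
        bw, cap = aggregate(n)
        print(f"{n:2d} channels: ~{bw:6.1f} GB/s, {cap:5d} GB "
              f"(one DDR4-3200 channel: {DDR4_3200_CHANNEL_GBPS} GB/s)")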



  • Qaridarium
    replied
    Originally posted by coder View Post
    Okay, here's what you're talking about:
    https://www.anandtech.com/show/14706...mory-interface
    It appears to add a lot of cost, and yet normal DDR4 or DDR5 can already saturate the link. So, it's not a real way to cheat the constraints of GDDR memories, nor does it really add anything over DDR, in this case. However, it does add latency, which is already higher in GDDR memories than their regular DDR cousins.
    Basically, OpenCAPI doesn't make sense for a desktop PC or most workstations. It's a server-oriented technology.
    Do you really think it is designed "to add a lot of cost"??? No, it's not.
    It is designed for flexibility. For example, if you have a task that does not need fast RAM but needs a lot of RAM, you can build an OpenCAPI SSD,
    which is much cheaper per GB of RAM...
    It is also designed to reduce obsolescence: if you have a server with OpenCAPI DDR4 RAM and later you want to upgrade it to DDR5 or GDDR6X, that's no problem.
    And in this sense it can reduce costs, because for an old system you do not need to buy a whole new system for DDR5, you just upgrade your old DDR4 system with DDR5... and imagine this: in the future there is super-cheap DDR6 RAM, no problem, you upgrade to DDR6...

    So it's not designed to add a lot of cost; it is designed to reduce costs in the long run.



  • coder
    replied
    Originally posted by Qaridarium View Post
    The point of OpenCAPI is that you can put anything you want on a DIMM, even GDDR RAM.
    Okay, here's what you're talking about:

    https://www.anandtech.com/show/14706...mory-interface

    It appears to add a lot of cost, and yet normal DDR4 or DDR5 can already saturate the link. So, it's not a real way to cheat the constraints of GDDR memories, nor does it really add anything over DDR, in this case. However, it does add latency, which is already higher in GDDR memories than their regular DDR cousins.

    Basically, OpenCAPI doesn't make sense for a desktop PC or most workstations. It's a server-oriented technology.
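    As a rough illustration of the "already saturates the link" point (the link bandwidth below is an assumed figure; the memory numbers are nominal per-channel or per-device rates): anything faster than the link is simply capped by it.

    Code:
    # Illustrative sketch with an assumed link speed: if the serial memory link
    # moves roughly 25 GB/s per channel, a faster memory type behind it gains
    # nothing in bandwidth, while the extra SerDes hop still adds latency.

    LINK_GBPS = 25.0                     # assumed per-channel link bandwidth
    DEVICES = {
        "DDR4-3200 channel": 25.6,       # 64-bit * 3200 MT/s
        "DDR5-4800 channel": 38.4,       # 64-bit * 4800 MT/s
        "GDDR6 14Gbps x32":  56.0,       # 32-bit * 14 Gbps
    }

    for name, device_gbps in DEVICES.items():
        effective = min(device_gbps, LINK_GBPS)   # the link is the bottleneck
        print(f"{name:18s}: device {device_gbps:5.1f} GB/s -> "
              f"~{effective:5.1f} GB/s through the link")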
    Last edited by coder; 24 February 2021, 12:42 AM.



  • Qaridarium
    replied
    Originally posted by coder View Post
    It's just one possibility.
    The Navi 21 die (RX 6800 & 6900) is reportedly about 520 mm2, and those cards have list prices ranging from $580 to $1000. Part of it is that cost increases disproportionately with area, but they also have to account for non-recurring engineering costs, market conditions, etc.
    I am sure that if they built a 400 mm² CDNA card it would easily outperform a 520 mm² 6900,
    because the 6900 has all the 3D stuff that you just don't need for compute.

    Originally posted by coder View Post
    It still needs to out-perform other options on the market, but you could easily cut it down by more than half and still do that. Take another look at the stats on their matrix cores.
    The main point is that if the general public has no GPUs or other accelerators with CDNA features, then AMD shouldn't be surprised if virtually the only opensource software that uses them is what AMD writes, itself. If there's a lesson AMD could take away from Nvidia's runaway success in AI, it's to seed the Universities and general public with hardware that can be used to power the next wave of software innovations.
    Yes, that's along the lines of what I'm saying.
    Right now it makes no sense for AMD to build a cheaper CDNA compute card, and the reason for this is simple:
    they are running at 100% capacity of the 5/7/12 nm fabs.
    Believe it or not, it is simply not possible.

    If a 7 nm fab runs at 50% capacity you can say, okay, let's make a cheap card.
    But if you are running at 100%, you would have to be a very crazy person to make a cheap card.

    The only way to build a cheap CDNA card right now would be to backport it to 12 nm or 16 nm.

    A card like that would still be fast because of the very good architecture, but in the end you would not be happy because it is not 5 nm.
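
    As a rough sketch of the point above that cost rises faster than die area (the wafer price and defect density below are assumed, purely illustrative values, not TSMC figures): a bigger die both yields fewer candidates per wafer and is more likely to contain a defect.

    Code:
    import math

    # Rough, illustrative sketch: cost per good die grows faster than die area,
    # because fewer dies fit on a wafer AND yield falls as area grows.
    # Wafer cost and defect density are assumed numbers, chosen for illustration.

    WAFER_COST = 9000.0          # assumed price of a 300 mm 7 nm wafer (USD)
    WAFER_DIAMETER_MM = 300.0
    DEFECT_DENSITY = 0.001       # assumed defects per mm^2 (0.1 per cm^2)

    def cost_per_good_die(die_area_mm2: float) -> float:
        wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
        dies_per_wafer = int(0.85 * wafer_area / die_area_mm2)     # ~15% edge loss
        yield_fraction = math.exp(-DEFECT_DENSITY * die_area_mm2)  # Poisson yield
        return WAFER_COST / (dies_per_wafer * yield_fraction)

    for area_mm2 in (400, 520):
        print(f"{area_mm2} mm^2 die: ~${cost_per_good_die(area_mm2):.0f} per good die")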



  • Qaridarium
    replied
    Originally posted by coder View Post
    My point was specifically about putting GDDR on DIMMs. Where does it say they do that?
    The point of OpenCAPI is that you can put anything you want on a DIMM, even GDDR RAM.

