Radeon Vega 20 Will Have XGMI - Linux Patches Posted For This High-Speed Interface
Originally posted by marty1885: "Does anyone know why they don't use the Infinity Fabric to interconnect the GPU dies?"
Remember that HBM2 was introduced to solve exactly this problem. If you are lucky enough to have a Vega with a stable HBM2 overclock of 1100+ MHz, you can start to see the end of the tunnel, where memory is no longer the biggest bottleneck. Take the fastest HBM2 available and that is roughly the interconnect requirement for a successful GPU chiplet design.
I do not know of any current tech or prototype interconnect capable of moving ~600 GB/s (which overclocked HBM2 roughly delivers). I guess the only option is to design a GPU that is pre-fabbed as a pseudo-chiplet, meaning the IMC already assumes it is only one part of, for example, four GPUs, and all four chiplets together act as one coherent chip. That is similar to splitting a big monolithic chip into parts and re-fusing them together. But this would impose a very hefty binning requirement on all of the chips, as the whole thing is only as fast as its slowest part. You would need to bin all HBM2 parts, all of the GPU chiplets, and probably even test the interconnect itself (if it were an active one, which is still a research prototype right now).
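To sanity-check the ~600 GB/s figure: a quick back-of-the-envelope calculation, assuming a Vega-style configuration of two HBM2 stacks (1024-bit interface each, double data rate) at the overclocked 1100 MHz mentioned above. These figures are assumptions for illustration, not official specs.

```python
# Rough HBM2 bandwidth estimate for a Vega-like setup (assumed figures).
bus_width_bits = 2 * 1024     # two HBM2 stacks, 1024-bit interface each
mem_clock_hz = 1100e6         # the overclocked HBM2 clock from the post
transfers_per_clock = 2       # double data rate

bandwidth_gb_s = bus_width_bits * mem_clock_hz * transfers_per_clock / 8 / 1e9
print(f"{bandwidth_gb_s:.1f} GB/s")  # prints 563.2 GB/s
```

That lands in the same ballpark as the ~600 GB/s the post cites, which is the sustained bandwidth a chiplet interconnect would have to match to avoid becoming the new bottleneck.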
While the binning costs might increase the price of the whole package slightly, you would now have the ability to manufacture different products depending on the binning results: high-end performance for the top 5% of *all* components, above-average products for the 75th-95th percentile, average price/performance parts for the 33rd-74th percentile, and cheap lower-end parts for the rest. Tight binning also enables better parametric control (like the frequency/voltage balance), so it wouldn't end up like the Vega launch.
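The tiering described above can be sketched as a simple mapping. The tier names and the rule that a chiplet assembly is binned by its slowest component are my own illustration of the post's argument, not an actual binning process.

```python
def bin_tier(percentile: float) -> str:
    """Map a component's binning percentile (0 = worst, 100 = best)
    to a hypothetical product tier, following the split in the post."""
    if percentile >= 95:
        return "high-end"        # top 5% of all components
    elif percentile >= 75:
        return "above-average"   # 75th-95th percentile
    elif percentile >= 33:
        return "mainstream"      # 33rd-74th percentile
    else:
        return "entry-level"     # the rest

# The whole assembly is only as fast as its slowest part,
# so it is binned by the minimum percentile of its components.
components = [97, 88, 96, 91]    # e.g. four GPU chiplets
print(bin_tier(min(components)))  # prints "above-average"
```

This also shows why the binning requirement is "hefty": one 88th-percentile chiplet drags three near-top-bin chiplets down a full product tier.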
Google points to this: https://en.wikichip.org/wiki/amd/infinity_fabric
However, that article calls the die-to-die link within a package "IFOP" (Infinity Fabric On-Package) and the socket-to-socket link "IFIS" (Infinity Fabric InterSocket).
SDF - Scalable Data Fabric
GMI - Global Memory Interconnect
I don't know what the X in XGMI stands for.