Radeon Vega 20 Will Have XGMI - Linux Patches Posted For This High-Speed Interface

  • bridgman
    replied
    Originally posted by chithanh View Post
    What the X in XGMI stands for I don't know.
    eXternal (off-package) GMI

  • chithanh
    replied
    Google points to this: https://en.wikichip.org/wiki/amd/infinity_fabric
    However, the inter-die connection is called "IFOP" and the inter-package connection is called "IFIS" in that article.

    SDF - Scalable Data Fabric
    GMI - Global Memory Interconnect
    What the X in XGMI stands for I don't know.

  • andrei_me
    replied
    Originally posted by bridgman View Post

    Infinity Fabric is the umbrella term covering on-die (SDF), inter-die (GMI) and inter-package (XGMI) connections.
    Where can I find the meanings of those acronyms? Wikipedia is returning some random results...

  • nils_
    replied
    Originally posted by TemplarGR View Post
    I am more interested in PCIe 4.0 support. It was about time... I wonder when CPUs/mobos will support it.
    It's already supported on POWER9.

  • chithanh
    replied
    Originally posted by marty1885 View Post
    Anyone knows why they don't use the Infinity Fabric to interconnect the GPU dies?
    AMD does use Infinity Fabric for CPU/GPU interconnection, just not the XGMI variant for GPUs until now.

  • Shevchen
    replied
    Originally posted by marty1885 View Post
    Anyone knows why they don't use the Infinity Fabric to interconnect the GPU dies?
    As far as I know, latency is still too high for GPU chiplet interconnects. In theory you could already do it, but Infinity Fabric is too slow for the massive memory bandwidth requirements GPUs have - especially in games.

    Remember that HBM2 was introduced to solve this problem, and if you are lucky enough to have a Vega with a stable HBM2 overclock of 1100+ MHz, you can start to see the end of the tunnel, where memory is no longer the biggest bottleneck. Now take the fastest HBM2 available and you have the interconnect requirement for a successful GPU chiplet design.

    I do not know of any current or prototype interconnect that is capable of moving ~600 GB/s (which overclocked HBM2 does). I guess the only option is to design a GPU that is pre-fabbed as a pseudo chiplet - meaning the IMC already assumes it is only one part of, for example, 4 GPUs, and all 4 chiplets together act as one coherent chip. That would be similar to splitting a big monolithic chip into parts and fusing them back together. But it would impose a very hefty binning requirement on all of the chips, as the whole thing is only as fast as the slowest part. That means you need to bin all HBM2 parts and all of the GPU chiplets, and probably even test the interconnect itself (if it were an active one, which is still more or less a research prototype right now).
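
    For a rough sanity check of that ~600 GB/s figure, here is a minimal sketch of the peak-bandwidth arithmetic (assuming Vega 10's two HBM2 stacks with a 1024-bit interface each and double data rate; the 1172 MHz value below is just an illustrative overclock, not something from the thread):

```python
# Peak HBM2 bandwidth: clock * 2 transfers/clock (DDR) * bus width in bytes.
# Assumption (not from the thread): 2 stacks, 1024-bit interface per stack.
def hbm2_bandwidth_gbs(clock_mhz: float, stacks: int = 2, bus_bits: int = 1024) -> float:
    """Return peak memory bandwidth in GB/s for a given HBM2 clock."""
    bytes_per_transfer = stacks * bus_bits / 8
    return clock_mhz * 1e6 * 2 * bytes_per_transfer / 1e9

print(hbm2_bandwidth_gbs(945))   # stock Vega 64:  ~484 GB/s
print(hbm2_bandwidth_gbs(1100))  # 1100 MHz HBM2:  ~563 GB/s
print(hbm2_bandwidth_gbs(1172))  # ~1172 MHz is roughly where the ~600 GB/s above lands
```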

    While the binning costs might increase the price of the whole thing slightly, you would now have the ability to manufacture different products depending on the binning results: high-end performance for the top 5% of *all* components, above-average products for the 75%-95% range, average price/performance parts for the 33%-74% bins, and cheap lower-end parts for the rest. Tight binning also enables better parametric control (like the frequency/voltage balance), so it wouldn't end up like the Vega launch.
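
    And a minimal sketch of that tiering idea (the percentile cut-offs are the ones from this comment; the tier names are made up for illustration):

```python
# Map a component's binning percentile (0 = worst, 100 = best) to a product tier.
# Cut-offs follow the comment above; tier names are purely hypothetical.
def product_tier(percentile: float) -> str:
    if percentile >= 95:
        return "high-end"           # top 5% of *all* components
    if percentile >= 75:
        return "above-average"      # 75%-95% range
    if percentile >= 33:
        return "price/performance"  # 33%-74% bins
    return "low-end"                # everything else

# The whole package is only as fast as its slowest part, so the worst bin decides the tier.
chiplet_bins = [97, 88, 91, 96]          # e.g. four GPU chiplets on one package
print(product_tier(min(chiplet_bins)))   # -> above-average
```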

  • TemplarGR
    replied
    I am more interested in PCIe 4.0 support. It was about time... I wonder when CPUs/mobos will support it.

  • bridgman
    replied
    Originally posted by marty1885 View Post
    Anyone knows why they don't use the Infinity Fabric to interconnect the GPU dies?
    Infinity Fabric is the umbrella term covering on-die (SDF), inter-die (GMI) and inter-package (XGMI) connections.

  • Peter Fodrek
    replied
    Originally posted by bridgman View Post

    Closer to home, its role is also similar to XGMI in dual-Epyc systems.
    But they do say so in the article:
    "XGMI is a peer-to-peer high-speed interconnect and is based on Infinity Fabric"

  • theriddick
    replied
    If they can do it on the hardware side of things, great. But if it's anything like SLI or Crossfire, which require driver and software workarounds that almost never get fully supported and, even when they do, result in performance or graphics rendering issues, then no thanks.
