AMD Graphics Driver Gets "More New Stuff" For Linux 6.9: Continued RDNA4 Enablement
-
Chiplet designs have lower chiplet-to-chiplet latency than multi-chip approaches, but they're not magic: the inter-chiplet latency is still higher than in a monolithic design. Either your workload is okay with that, or it's not. Games are not okay with it, so you need to get creative with your design and do extra latency hiding where you can. This doesn't make chiplets "dead".
This isn't anything new either; every EPYC CPU ever made has performed better when the workload is NUMA-aware and you treat each CPU core chiplet as its own NUMA node.
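To make the EPYC/NUMA point concrete, here is a minimal C sketch using libnuma (an illustration only; it assumes a Linux box with libnuma installed and a BIOS setting that exposes each CCD as its own NUMA node). The idea is simply to keep a thread and its working memory on the same node so it never pays the cross-chiplet hop:

/* Minimal sketch: pin work and memory to one NUMA node (one CCD).
 * Assumes libnuma is installed; build with: cc numa_pin.c -lnuma
 * (file name is just an example). */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this system\n");
        return 1;
    }

    const size_t len = 64UL << 20;          /* 64 MiB working set         */

    numa_run_on_node(0);                    /* keep this thread on node 0 */
    char *buf = numa_alloc_onnode(len, 0);  /* allocate memory on node 0  */
    if (!buf)
        return 1;

    memset(buf, 0, len);                    /* every access stays local   */

    numa_free(buf, len);
    return 0;
}

You can get the same effect without touching the code via numactl --cpunodebind=0 --membind=0 ./your_app (./your_app standing in for whatever the workload is).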
Comment
-
Originally posted by DiamondAngle View PostThis doesn't make chiplets "dead".
It's dead as a doornail until they fix the low bandwidth and latency issues.
Comment
-
Originally posted by theriddick View PostGame developers will not be going back to the days of SLI and CF software tweaking to get dual GPU working just to satisfy chiplet designs.
It's dead as a doornail until they fix the low bandwidth and latency issues.
This could last for multiple years, and nothing can be done about older games. Releasing GPUs where old games run worse than on the previous generation is not likely to happen.
Comment
-
Never mind that this is a ridiculously narrow view of chip utility, since not everything is a game and compute workloads are very well suited to NUMA-like architectures: GPU manufacturers have released extremely successful GPUs that spent lots of transistors on features the games of the time didn't use, and game developers adapted. Adding programmable pixel shaders comes to mind, as does the addition of hardware ray tracing.
If both major GPU makers find no other way to scale their designs, chiplets it will be, and game engine developers will adapt to use the added horsepower and features the chiplets enable, as they always have.
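To illustrate the "compute maps well onto NUMA-like hardware" point on the CPU side, here is a small sketch (assuming OpenMP and the kernel's default first-touch page placement; sizes and names are made up): the array is initialized and processed with the same static partitioning, so each thread's slice lands on, and stays on, its local node instead of crossing the interconnect, which is exactly the kind of partitioning a chiplet GPU would want to exploit:

/* Minimal sketch: NUMA-friendly partitioning via first touch.
 * Assumes OpenMP; build with: cc -fopenmp numa_compute.c
 * (file name is just an example). */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const long n = 1L << 26;                 /* ~512 MiB of doubles */
    double *a = malloc(n * sizeof *a);
    if (!a)
        return 1;

    /* First touch: each thread faults in its own slice, so those pages
     * end up on that thread's local NUMA node. */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < n; i++)
        a[i] = (double)i;

    /* The compute phase uses the same static split, so nearly every
     * access is node-local rather than a cross-chiplet round trip. */
    double sum = 0.0;
    #pragma omp parallel for schedule(static) reduction(+:sum)
    for (long i = 0; i < n; i++)
        sum += a[i] * a[i];

    printf("sum = %g\n", sum);
    free(a);
    return 0;
}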
Comment
-
Originally posted by DiamondAngle View PostNever mind that this is a ridiculously narrow view of chip utility, since not everything is a game and compute workloads are very well suited to NUMA-like architectures
If both major GPU makers find no other way to scale their designs
If anyone can split the die without sacrificing current/old workloads, they have a clear advantage. Or if a single chiplet is faster than last gen's monolithic die, there might also be a path forward.
Comment
-
Originally posted by Anux View PostWhy should they not find a way to scale? Chiplets are mostly used to combine different lithography processes (for cost reasons) and to split big monolithic dies into cheaper small ones with better yield. Look at Nvidia: they can certainly scale monolithic dies to insane levels.
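On the yield argument quoted above, a rough back-of-the-envelope with made-up but plausible numbers (0.1 defects per cm², a 600 mm² monolithic die versus 150 mm² chiplets, and a simple Poisson defect model; none of this is vendor data from the thread):

/* Rough illustration of why small dies yield better.
 * Die yield ~= exp(-area_cm2 * defect_density); build with: cc yield.c -lm
 * (file name is just an example, numbers are assumptions). */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double d0 = 0.1;           /* assumed defects per cm^2                  */
    const double mono_cm2 = 6.0;     /* one 600 mm^2 monolithic die               */
    const double chip_cm2 = 1.5;     /* one 150 mm^2 chiplet, several per package */

    double y_mono = exp(-mono_cm2 * d0);   /* ~0.55: ~45% of big dies are scrap */
    double y_chip = exp(-chip_cm2 * d0);   /* ~0.86: only ~14% are scrap        */

    printf("monolithic die yield: %.0f%%\n", 100 * y_mono);
    printf("chiplet die yield:    %.0f%%\n", 100 * y_chip);
    return 0;
}

The point being that a defect on the big die scraps all 600 mm², while with chiplets only the defective 150 mm² die is discarded and the remaining known-good dies still get packaged.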
Comment