AMD Graphics Driver Gets "More New Stuff" For Linux 6.9: Continued RDNA4 Enablement

  • #11
    Originally posted by theriddick View Post
    I don't believe that is an issue or how it works. The problem is internal bandwidth and latency between the dies, not software compatibility. This is a hardware-level multi-chip system, not SLI or CF.
    At least that's what AMD said a while back when asked about chiplets on desktop GPUs. And when you look at CDNA, there is already a full chiplet approach at work, so the hardware side does not seem to be the problem.

  • #12
    Chiplet design is meant to be seamless. The whole point is that it's not meant to be software dependent. So if it suddenly needs software to make it work, then it's very much dead except for server uses.

  • #13
    Chiplet design has less chiplet-to-chiplet latency compared to multi-chip approaches, but it's not magic: the inter-chiplet latency is still higher than in a monolithic design. Either your workload is okay with that, or it's not. Games are not okay with this, so you need to get creative with your design / do extra latency hiding where you can. This doesn't make chiplets "dead".
    This isn't anything new either; every EPYC CPU ever has performed better when the workload is NUMA-aware and you treat each CPU core chiplet as a NUMA node.
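    As a minimal illustration of that NUMA-aware pattern (a sketch, not anything claimed in the thread): on Linux you can read a node's CPU list from sysfs and pin a process to it, so first-touch allocations tend to land in that node's local memory. Node 0 and the sysfs path are assumptions about a typical system that exposes NUMA topology.

    import os

    def cpus_of_node(node: int) -> set[int]:
        """Parse /sys/devices/system/node/nodeN/cpulist (e.g. '0-7,64-71')."""
        cpus = set()
        with open(f"/sys/devices/system/node/node{node}/cpulist") as f:
            for part in f.read().strip().split(","):
                if "-" in part:
                    lo, hi = part.split("-")
                    cpus.update(range(int(lo), int(hi) + 1))
                else:
                    cpus.add(int(part))
        return cpus

    if __name__ == "__main__":
        node0 = cpus_of_node(0)            # assumed: node 0 exists on this machine
        os.sched_setaffinity(0, node0)     # restrict this process to node 0's cores
        print(f"pinned to {len(node0)} CPUs of NUMA node 0")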

  • #14
    Originally posted by DiamondAngle View Post
    This doesn't make chiplets "dead".
    Game developers will not be going back to the days of SLI and CF software tweaking to get dual GPU working just to satisfy chiplet designs.
    It's dead as a doornail until they fix the low-bandwidth and latency issues.

  • #15
    Originally posted by theriddick View Post
    Game developers will not be going back to the days of SLI and CF software tweaking to get dual GPU working just to satisfy chiplet designs.
    It's dead as a doornail until they fix the low-bandwidth and latency issues.
    If Nvidia and AMD released a chiplet high-end GPU, game devs would most likely deal with it. But most of the work is on the engine side, and game devs would "only" need to wait until the engine supports it.

    That could potentially take multiple years, and nothing can be done about older games. Releasing GPUs where old games run worse than on the previous gen is not likely to happen.

  • #16
    It will be much harder for this to become a standard than RT/PT, which is still not the norm, especially path tracing.

    People will never buy a GPU they can only use 50% of, because most games don't support multi-GPU at the engine level.

  • #17
    Never mind that this is a ridiculously narrow view of chip utility, since not everything is a game and compute workloads are very well suited to NUMA-like architectures. GPU manufacturers have released extremely successful GPUs that had lots of transistors dedicated to features that past and then-current games didn't use, and game developers adapted. Adding programmable pixel shaders comes to mind, as does the addition of hardware ray tracing.

    If both major GPU chip makers find no other way to scale their designs, chiplets it will be, and game engine developers will adapt to use the added horsepower and features enabled by the chiplets, as they have always done.

  • #18
    Originally posted by theriddick View Post
    Maybe RDNA4 can be what RDNA3 should have been.
    They don't even support RDNA 3.

  • #19
    Originally posted by DiamondAngle View Post
    Never mind that this is a ridiculously narrow view of chip utility, since not everything is a game and compute workloads are very well suited to NUMA-like architectures
    As said, the CDNA arch does use chiplets for exactly that reason. And apart from some short-term crypto booms, Radeon and GeForce are mostly bought by gamers, and whichever company decides to introduce slower GPUs that might get faster in 5 years with some new games had better have a very big financial buffer.

    If both major GPU chip makers find no other way to scale their designs
    Why should they not find a way to scale? Chiplets are mostly used to combine different lithography processes (for cost reasons) and to split big monolithic dies into cheaper small ones with better yield (a rough yield sketch is below). Look at Nvidia: they can certainly scale monolithic dies to insane levels.

    If anyone can split the die without sacrificing current/old workloads, they have a clear advantage. Or if a single chiplet is faster than last gen's monolithic dies, there might also be a path forward.
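    To put a number on the yield argument, here is a sketch using the simple exponential (Poisson) yield model; the defect density and die sizes are assumed, illustrative values, not figures from any vendor.

    import math

    def die_yield(area_mm2: float, d0_per_mm2: float) -> float:
        """Simple Poisson yield model: fraction of defect-free dies."""
        return math.exp(-d0_per_mm2 * area_mm2)

    D0 = 0.001  # assumed defect density (defects per mm2), illustrative only

    big = die_yield(600, D0)     # one big monolithic die
    small = die_yield(150, D0)   # one of four smaller chiplets
    print(f"600 mm2 monolithic yield: {big:.0%}")    # ~55%
    print(f"150 mm2 chiplet yield:    {small:.0%}")  # ~86%

    A real comparison would also have to account for packaging/interconnect cost and the need for several known-good dies per product, but the per-die yield gap is the basic cost motivation behind splitting big dies.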

  • #20
    Originally posted by Anux View Post
    Why should they not find a way to scale? Chiplets are mostly used to combine different lithography processes (for cost reasons) and to split big monolithic dies into cheaper small ones with better yield. Look at Nvidia: they can certainly scale monolithic dies to insane levels.
    Because recent lithography improvements have only yielded relatively modest gains in density; SRAM cells in particular have barely shrunk, which is bad news for games, which benefit greatly from larger caches. For this reason the 4090 die is already mostly SRAM by area. They can't just increase the die size because of reticle and yield issues, so GPU makers will find it difficult to scale monolithic designs. This will likely force them to go chiplets to scale further, first presumably by offloading some of the SRAM onto a chiplet like AMD has done on CPUs already, later also by doing compute chiplets like CDNA.
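    To make that pressure concrete, a back-of-the-envelope sketch: all areas and density-scaling factors below are made-up round numbers for illustration only. The point is just that when logic shrinks on a new node but SRAM barely does, cache takes a growing share of a monolithic die, which is the incentive to move it onto a separate cache chiplet.

    # Illustrative arithmetic only: every number below is assumed, not measured.
    logic_area_mm2 = 300.0   # assumed logic area on the old node
    sram_area_mm2 = 300.0    # assumed SRAM (cache) area on the old node

    logic_shrink = 1.6       # assumed logic density gain on the new node
    sram_shrink = 1.05       # assumed near-flat SRAM density gain

    new_logic = logic_area_mm2 / logic_shrink
    new_sram = sram_area_mm2 / sram_shrink
    total = new_logic + new_sram

    print(f"logic: {new_logic:.0f} mm2, SRAM: {new_sram:.0f} mm2 "
          f"-> SRAM is now {100 * new_sram / total:.0f}% of the (smaller) die")

    Under these assumed numbers the die gets smaller overall, but the SRAM share grows from 50% to roughly 60% of it, which is the kind of trend that makes a separate cache die on an older, cheaper node attractive.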
