AMD Announces Radeon RX 7900 XTX / RX 7900 XT Graphics Cards - Linux Driver Support Expectations


  • Originally posted by coder View Post
    Four years ago was the launch of an APU-based Chinese console that doubled as a desktop PC. Specs-wise, it fit right in between PS4 and PS5. It had GDDR5, which I was interested to see benchmarked and compared with a normal Zen APU. More details here:
    We might see a big APU again in some special or niche product (like a Valve desktop console based on SteamOS, a "SteamBox" or whatever, hypothetically speaking), but that's it. A big APU really does not have a good use case in the Windows/PC ecosystem. Desktops would not benefit from such an APU at all; on the contrary, they would be limited in terms of upgradeability and modularity. It could be an interesting "gaming mini PC" platform, somewhat similar to Intel's Extreme NUC line, but that's a niche product. Most notebooks sold are actually ultrabooks/ultrathins, so a big APU would make sense only in gaming notebooks. But yet again, a) it would mean vendor lock-in for the GPU from the OEM point of view, and b) in the case of chiplets we are basically talking about a VRAM-less dGPU system, and I don't see much sense in such a concept. You can save some power by putting the GPU near the CPU, but that's all.

    Comment


    • Originally posted by coder View Post
      No... the workstation boards are probably the visualization products you have in mind, but they also sell rebadged gaming GPUs for servers. The non-100-numbered products here are all powered by gaming GPUs, but they have only passive cooling and no display outputs:

      The NVIDIA data center platform is the world’s most adopted accelerated computing solution, deployed by the largest supercomputing centers and enterprises. Whether you're looking to solve business problems in deep learning and AI, HPC, graphics, or virtualization in the data center or at the edge, NVIDIA GPUs provide the ideal solution. Now, you can realize breakthrough performance with fewer, more powerful servers, while driving faster time to insights and reducing costs.
      Not at all. When I said visualization, I meant it - regardless of server or workstation use. It even says so on NVIDIA's own product page. I consider consumer gaming cards to also fall under the visualization banner.


      Originally posted by coder View Post
      The way I'd characterize the split is that the 100-series are for training and HPC, while the lower-numbered products are for inferencing, video transcoding, desktop hosting, cloud gaming - basically, everything else you'd do with a GPU in a cloud server.
      According to NVIDIA's A100 and H100 product pages, those accelerators (NVIDIA still calls them "GPUs") are dedicated to both training and inference and I'd guess are far more capable at such tasks than the visualization products.


      Originally posted by coder View Post
      Rubbish. If you include DLSS, then probably the only silicon underutilized by games is the NVDEC/NVENC block.


      I didn't include DLSS because it is not needed to play games. I'd argue that ray tracing is more of a "need" since it improves visual fidelity and makes life easier for the game developers.


      Originally posted by coder View Post
      I'm using "gaming" as a short-hand. It does seem like gaming prowess is still their first and foremost concern, however.


      If we substitute "visualization" for "gaming", I think NVIDIA's direction will ultimately lead to real-time ray-traced, fully immersive virtual environments. The gaming industry will lead the charge for obvious reasons, but I think playing games will ultimately become a very small part of a much larger landscape (literally).

      Comment


      • Originally posted by drakonas777 View Post

        We might see a big APU again in some special or niche product (like a Valve desktop console based on SteamOS, a "SteamBox" or whatever, hypothetically speaking), but that's it. A big APU really does not have a good use case in the Windows/PC ecosystem. Desktops would not benefit from such an APU at all; on the contrary, they would be limited in terms of upgradeability and modularity. It could be an interesting "gaming mini PC" platform, somewhat similar to Intel's Extreme NUC line, but that's a niche product. Most notebooks sold are actually ultrabooks/ultrathins, so a big APU would make sense only in gaming notebooks. But yet again, a) it would mean vendor lock-in for the GPU from the OEM point of view, and b) in the case of chiplets we are basically talking about a VRAM-less dGPU system, and I don't see much sense in such a concept. You can save some power by putting the GPU near the CPU, but that's all.
        Plenty of users benefit from GPU performance these days... even if they don't know it.

        The power savings and PCB space savings would be huge in laptops (see the M1 Pro/Ultra), but I think OEMs just weren't interested in it until the M1 came around.


        After all, they rejected Intel's eDRAM Broadwell chips, they largely rejected the AMD/Intel hybrid package, and they reportedly rejected Van Gogh (the Steam Deck chip).
        Last edited by brucethemoose; 07 November 2022, 12:54 PM.

        Comment


        • Originally posted by drakonas777 View Post
          We might see a big APU again in some special or niche product (like a Valve desktop console based on SteamOS, a "SteamBox" or whatever, hypothetically speaking), but that's it. A big APU really does not have a good use case in the Windows/PC ecosystem. Desktops would not benefit from such an APU at all; on the contrary, they would be limited in terms of upgradeability and modularity.
          Lack of upgradability didn't stop Apple. The M1 Max and Ultra are exactly that: big APUs.

          I wonder if one of the big PC OEMs (Dell, Lenovo, HP, etc.) would ever contract with AMD (or Intel) to make a big APU with in-package LPDDR5.

          Comment


          • Originally posted by brucethemoose View Post
            The power savings and PCB space savings would be huge in laptops (see the M1 Pro/Ultra), but I think OEMs just weren't interested in it until the M1 came around.
            Intel was trying, with its Iris models that featured 2x or 3x the normal number of EUs and up to 128 MB of eDRAM.

            Originally posted by brucethemoose View Post
            After all, they rejected Intel's eDRAM Broadwell chips,
            Because, even then, it wasn't terribly good. There were bottlenecks in the architecture that kept the GPU from scaling well. So, performance was good, but probably not enough to justify the added price or steer power users away from a dGPU option.

            Originally posted by brucethemoose View Post
            they largely rejected the AMD/Intel hybrid package,
            But that was just weird. And the value-add compared with having a truly separate dGPU was tenuous, at best.

            Originally posted by brucethemoose View Post
            and they reportedly rejected Van Gogh (the Steam Deck chip).
            According to whom? Didn't Valve contract with AMD specifically to make it for them? In those sorts of arrangements, Valve would retain ownership of the IP. At least, that's supposedly how it is with MS and Sony.

            Comment


            • Originally posted by WannaBeOCer View Post

              What's the benefit of DP 2.1 on a gaming monitor when Display Stream Compression is visually lossless? It supports 4K 240 Hz, 8K 120 Hz and 10K 100 Hz.

              An RTX 4090 averaged ~150 FPS at 4K and used about the same power when gaming as an RX 6950 XT, which has a lower TBP than the RX 7900 XTX: https://tpucdn.com/review/nvidia-gef...wer-gaming.png

              Seems like it's priced accordingly, given its lower performance than the RTX 4090.
              I don't think the NVIDIA drivers support DSC over DP on Linux.
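
              For context on the quoted DSC figures, here is a rough back-of-the-envelope sketch of the bandwidth arithmetic in Python. The ~10% blanking overhead and the 3:1 "visually lossless" DSC ratio are assumptions for illustration, not spec-exact figures:

              ```python
              # Does a given mode fit in a DisplayPort link, with and without DSC?
              def mode_gbps(h, v, hz, bpp, blanking=1.10):
                  """Raw video bandwidth in Gbit/s for an active resolution,
                  refresh rate and bits per pixel, padded by an assumed ~10%
                  blanking overhead (CVT-RBv2 is in this ballpark)."""
                  return h * v * hz * bpp * blanking / 1e9

              # Usable payload after channel coding:
              HBR3_GBPS = 32.4 * 8 / 10       # DP 1.4a, 8b/10b:   ~25.9 Gbit/s
              UHBR20_GBPS = 80.0 * 128 / 132  # DP 2.1, 128b/132b: ~77.6 Gbit/s
              DSC_RATIO = 3.0                 # typical "visually lossless" target

              for name, (h, v, hz) in {"4K 240 Hz": (3840, 2160, 240),
                                       "8K 120 Hz": (7680, 4320, 120)}.items():
                  raw = mode_gbps(h, v, hz, bpp=30)  # 10-bit RGB
                  print(f"{name}: raw {raw:.1f} Gbit/s, with DSC {raw / DSC_RATIO:.1f} "
                        f"Gbit/s (HBR3 {HBR3_GBPS:.1f}, UHBR20 {UHBR20_GBPS:.1f})")
              ```

              By these rough numbers, 4K 240 Hz at 10-bit fits within DP 1.4a HBR3 once DSC is applied, while 8K 120 Hz would still need the extra headroom of DP 2.1 link rates.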

              Comment


              • Originally posted by brucethemoose View Post

                Plenty of users benefit from GPU performance these days... even if they don't know it.

                The power savings and PCB space savings would be huge in laptops (see the M1 Pro/Ultra), but I think OEMs just weren't interested in it until the M1 came around.


                After all, they rejected Intel's eDRAM Broadwell chips, they largely rejected the AMD/Intel hybrid package, and they reportedly rejected Van Gogh (the Steam Deck chip).
                I'm not saying users do not benefit from the GPU. I'm saying the concept of a big APU specifically for the Windows PC market is questionable, to say the least. For professional PC GPGPU workloads CUDA is the industry standard, so even if Intel or AMD could provide an M1 Pro/Ultra-like x86 SoC, it would not automatically translate into a success story for this use case. NVIDIA, on the other hand, could realistically only make an ARM-based APU, which would have its own problems because of the non-x86 ISA.

                I agree that a big APU makes sense for gaming laptops, but yet again we face the reality of the PC market: NVIDIA has the major market share (and mind share) in PC gaming, so a big APU from Intel/AMD for gaming might not be that successful overall. Some specific products, like the Asus ROG Zephyrus G14 or other "AMD Advantage" series products, could be successful as a separate line, but volume is what drives chip R&D.

                Since the PC market is not tightly controlled by any single HW company, I think we're going to see these relatively small APUs for some time. They are sort of optimal for this market: you get a good encoding/decoding engine and some decent light-gaming performance from them, and for anything more serious you pair them with big dGPUs. They're also going to get a lot better: the 12-CU RDNA3 iGPU in the 2023 15-28 W Phoenix APU, for example, is not that small.

                Comment


                • Originally posted by drakonas777 View Post
                  I think we're going to see these relatively small APUs for some time. They are sort of optimal for this market: you get a good encoding/decoding engine and some decent light-gaming performance from them, and for anything more serious you pair them with big dGPUs.
                  Intel seems determined to push bigger GPU options into the laptop segment, apparently trying to chip away at the laptop dGPU market. Meteor Lake will move the GPU onto its own tile, with one apparent objective being to enable pairing larger GPU tiles with different CPU core configurations.

                  Comment


                  • Originally posted by verude View Post

                    I don't think the NVIDIA drivers support DSC over DP on Linux.
                    DSC definitely works on Linux with an NVIDIA GPU. If it didn't, I wouldn't be able to run my 27GN950 at 4K/160Hz on Linux:

                    https://github.com/NVIDIA/open-gpu-k...omment-2781922

                    Ah, it seems higher refresh rates aren't supported yet. Hopefully they'll add support soon.
                    Last edited by WannaBeOCer; 07 November 2022, 06:24 PM.

                    Comment


                    • Originally posted by WannaBeOCer View Post
                      Again it sounds like the AI accelerators in RDNA3 are aimed at inference not training neural networks.
                      Unless these accelerators operate at extremely low precision (think 1, 2, 4 or 8 bits), there is no substantive difference between "inference" and "training". For example, to make use of tensor cores in many CUDA operations, your inputs need to be fp16 or tf32 (slightly wider than fp16, but still lower precision than fp32) anyhow.
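
                      To make that concrete, here is a minimal PyTorch sketch (assuming a CUDA GPU with tensor cores): the same reduced-precision matmul paths serve a no-grad inference pass and a training step alike.

                      ```python
                      import torch

                      # Let fp32 matmuls use tf32 tensor cores.
                      torch.backends.cuda.matmul.allow_tf32 = True

                      model = torch.nn.Linear(1024, 1024).cuda()
                      x = torch.randn(256, 1024, device="cuda")

                      # Inference: fp16 autocast runs the matmul on tensor cores.
                      with torch.no_grad(), torch.autocast("cuda", dtype=torch.float16):
                          y = model(x)

                      # Training: the identical autocast region drives the forward
                      # pass, and the backward pass reuses the same reduced-precision
                      # hardware paths.
                      opt = torch.optim.SGD(model.parameters(), lr=1e-3)
                      scaler = torch.cuda.amp.GradScaler()  # loss scaling for fp16 grads
                      with torch.autocast("cuda", dtype=torch.float16):
                          loss = model(x).square().mean()
                      scaler.scale(loss).backward()
                      scaler.step(opt)
                      scaler.update()
                      ```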

                      Comment
