
AMD Announces Radeon RX 7900 XTX / RX 7900 XT Graphics Cards - Linux Driver Support Expectations


  • Originally posted by AdrianBc View Post

    For Radeon 7900XTX, the performance is 77% of that of RTX 4090, but the price is only 62%, so the performance per dollar is higher by 25% for AMD.
    I hope it's closer to 90%, at least in rasterization work. We won't know until release.

    Keep in mind NVIDIA can't even get decent framerates (60-120) at 4K with RT enabled without the help of DLSS 2/3, so RT, while cool, isn't really a dominant factor in buying a card just yet. Most people keep it disabled, and very few games have implementations that offer both good FPS and a decent visual upgrade (I can't even remember which ones off the top of my head).
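The performance-per-dollar arithmetic in the quote above is easy to check (using the quoted estimates, not measured numbers):

```python
# Thread's estimates: 7900 XTX at ~77% of RTX 4090 performance for ~62% of the price.
perf_ratio = 0.77   # relative performance (7900 XTX / RTX 4090)
price_ratio = 0.62  # relative price

value_ratio = perf_ratio / price_ratio
print(f"perf-per-dollar advantage: {value_ratio:.2f}x")  # prints 1.24x, i.e. ~24-25% higher
```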



    • Originally posted by Anux View Post
      This has already happened. If the USA hadn't cut them off from TSMC, they would never have bothered to make their own fabs, because competing with TSMC is nearly impossible both technically and financially.
      Now they have their own CPUs and GPUs and a quarter of the world's population to sell that shit to. As soon as the West makes a mistake, whoosh, China is there to dominate the market with no dependency on the West.
      Yes, that's the cost of protectionism. In the short run you support your own industry; in the long run you lose out, as your competitor now has a motivation to build their own. Once the competitors challenge your local companies, some might start to implement high import tariffs, which can lead to stalled development inside your own country, as there is no longer any competition from outside.

      I guess on the GPU/accelerator front China is way behind, but in chip manufacturing and SoC development they will be competitive in ~5 years. They are currently developing their own EUV process and pushing a lot of money into it. There are also some RISC-V SoCs like the TH1520 (quad-core 2.5 GHz C910 RISC-V design) and recent enablement work in Android AOSP (initial patches developed by Alibaba were merged upstream last month).
      I guess we will see the first entry-level mass-market phones/notebooks/desktop PCs based on the TH1520 in mid-to-late 2023.
      They have most of the tech they need in-country now: Alibaba makes RISC-V-based SoCs, Huawei/HiSilicon makes RF chip designs for 802.11ax and 5G networks, and SMIC can produce the chips. They have a 7nm process comparable to TSMC's first 7nm now in mass production.

      They are only lacking a good GPU design for embedded devices, and a lot of software.



      • Originally posted by Anux View Post
        The USA's sanction wars to protect their own companies have a long history. I remember back in the '90s, when I was much more into politics, they already played that game.
        Do tell.

        Originally posted by Anux View Post
        This has already happened. If the USA hadn't cut them off from TSMC, they would never have bothered to make their own fabs, because competing with TSMC is nearly impossible both technically and financially.
        Complete rubbish. China has been investing in its own fabs for decades; that's how long it takes to develop such expertise. I'm sure that once Huawei got banned from TSMC, it created a new urgency and they upped their investment, but that probably accelerated their development by only a rather small amount.

        Originally posted by Anux View Post
        Now they have their own CPUs and GPUs and a quarter of the world's population to sell that shit to.
        Again, that was already happening. Anything the US did merely accelerated it by a couple years.

        Originally posted by Anux View Post
        Do you still believe what politicians tell you? The reasons for almost all their wars were lies. If you want to get a feel for the real reasons, you need to follow the money.
        People like to have simplistic explanations for things. Usually, there's not a single, simple explanation for such complex things. Just because you can see what appears to be a financial interest doesn't mean it factored heavily (or even at all) into the decision.



        • Originally posted by albatorsk View Post
          I'm a long-time GeForce user and I'm seriously considering getting a new RX 7900, as I especially like AMD's stance on open source, but I am utterly confused about the driver situation. I'm using Ubuntu and I'm used to only having one driver to install (nvidia-driver-###) and then I'm all set. What's messing with my mind is the bit below:



          To an outsider like me, it seems like there are several different drivers, or combinations of drivers. Will I (most likely) need to upgrade to a newer kernel than what's included in Ubuntu 22.10 by default? What is "the RADV Vulkan driver"? How does it relate to "RadeonSI Gallium3D", if at all? How do I figure out which I should use? Can both be installed at the same time? Do they provide the same functionality? Is RADV required for Vulkan? Does that driver also support OpenGL for all non-Vulkan titles? There's also something called AMDGPU and AMDGPU-PRO. How do they fit in with all this?

          Or am I just overthinking all this, and all I have to do is plop in an AMD graphics card and it'll just work?
          Maybe this could help: https://ibb.co/Lpjzzwp

          Sorry I forgot who created this on reddit.
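For what it's worth, once the card is in you can sanity-check which pieces of the stack are active. This is just a sketch, assuming Ubuntu's usual package names (mesa-utils for glxinfo, vulkan-tools for vulkaninfo):

```shell
# OpenGL user-space driver: RadeonSI should show up in the renderer string.
if command -v glxinfo >/dev/null 2>&1; then
    glxinfo | grep "OpenGL renderer"
else
    echo "glxinfo not found (install mesa-utils)"
fi

# Vulkan user-space driver: RADV identifies itself in the device name.
if command -v vulkaninfo >/dev/null 2>&1; then
    vulkaninfo 2>/dev/null | grep -m1 "deviceName"
else
    echo "vulkaninfo not found (install vulkan-tools)"
fi

# Kernel side: the single amdgpu module drives the hardware for both of the above.
if command -v lsmod >/dev/null 2>&1; then
    lsmod | grep -q '^amdgpu' && echo "amdgpu kernel module loaded" \
                              || echo "amdgpu kernel module not loaded"
fi
```

RadeonSI (OpenGL) and RADV (Vulkan) are both part of Mesa and coexist on top of the one amdgpu kernel driver, so a normal Mesa install gives you both.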



          • Originally posted by WannaBeOCer View Post
            AMD made some comment about their RDNA3 cards being more future-proof. I disagree; as we continue to see ray-tracing adoption, these cards will have greater longevity, similar to how the GTX 400 series outlived the HD 5000/6000 series due to the adoption of tessellation.
            About the latter statement: ATI/AMD were the first to implement tessellation, and Nvidia abused their own implementation by intentionally crippling the competition (remember Crysis 3 with its excessive use of tessellation on static objects?). As for real-time ray tracing in gaming without upscaling techniques, we are still not there yet. Let's wait for the tests of RDNA3, the very first MCM video card, to see the improvement compared to the 6950 XT which I have. As an aside, someone pointed out that the RTX 4090 is basically a Titan card.

            You have to remember Nvidia’s consumer cards are GPGPUs while with RDNA, AMD switched to making gaming GPUs.
            Very interesting how the roles have switched for both companies in recent years.

            It's very rare to see an AMD RDNA card mentioned in publications, while you'll see plenty of RTX cards used. Same with content creation, thanks to Nvidia's Studio Drivers. At the end of the day, Nvidia's GPUs target multiple markets, while AMD, with their consumer cards' poor performance in those other markets, is restricted to just the gaming market.
            You just highlighted Nvidia's main advantages: better software, documentation, and a higher budget. AMD took a hit with the mess that was ROCm (which badly neglected APUs for years), although improvement came thanks to open-source contributors and AMD employees. Unsurprisingly, content-creation software makers focused on Nvidia for those reasons during that time. AMD's effort to release drivers as open source is slowly coming to fruition as ROCm HIP has improved; applications like DaVinci Resolve now run smoothly on the 6950 XT, for example. You will see the RDNA series gain adoption in content creation once the kinks in the software drivers are ironed out.
            Last edited by finalzone; 06 November 2022, 12:49 AM.



            • Originally posted by finalzone View Post
              About the latter statement: ATI/AMD were the first to implement tessellation, and Nvidia abused their own implementation by intentionally crippling the competition (remember Crysis 3 with its excessive use of tessellation on static objects?). As for real-time ray tracing in gaming without upscaling techniques, we are still not there yet. Let's wait for the tests of RDNA3, the very first MCM video card, to see the improvement compared to the 6950 XT which I have. As an aside, someone pointed out that the RTX 4090 is basically a Titan card.


              Very interesting how the roles have switched for both companies in recent years.


              You just highlighted Nvidia's main advantages: better software, documentation, and a higher budget. AMD took a hit with the mess that was ROCm (which badly neglected APUs for years), although improvement came thanks to open-source contributors and AMD employees. Unsurprisingly, content-creation software makers focused on Nvidia for those reasons during that time. AMD's effort to release drivers as open source is slowly coming to fruition as ROCm HIP has improved; applications like DaVinci Resolve now run smoothly on the 6950 XT, for example. You will see the RDNA series gain adoption in content creation once the kinks in the software drivers are ironed out.
              I'm aware AMD released the first DX11-capable cards with the HD 5000 series, but the cards lacked enough tessellation units. You can't blame developers for targeting specific hardware. AMD tried the same crap recently with Godfall, which required 12GB of VRAM to try to sabotage the 10GB RTX 3080.

              AMD may run into this issue soon, as we're seeing fully path-traced demos by Nvidia like Racer RTX. Without DLSS, current titles are capable of 100+ FPS at 1440p, or 4K/60 FPS, with an RTX 4090. Developers are just going to add more ray-traced objects as the hardware improves year after year.

              https://www.techpowerup.com/review/n...dition/34.html

              Aside from Nvidia's software, their hardware is also incredible. I consider Turing as revolutionary as Fermi, which was their first compute-oriented architecture, due to the introduction of RT/Tensor cores. You keep bringing up software, but even with the introduction of HIP, it doesn't change the fact that RT/Tensor cores accelerate OptiX along with many other tasks. The RTX 3090/4090 aren't Titan cards, since they lack Titan drivers and optimizations; it's the reason we still see the Titan RTX outperforming both in some tasks. It's no surprise AMD followed with their introduction of Ray Accelerators in RDNA2 and now AI accelerators in RDNA3.

              Gamers make fun of Fermi but it changed how research was done. It was also one of the largest leaps in regards to deep learning. It only took 6 days to train AlexNet using Fermi GPUs.

              AMD always wants to be first at everything, Nvidia has been testing MCM designs for a few years now: https://research.nvidia.com/publicat...ce-scalability



              • Originally posted by WannaBeOCer View Post
                In rendering/deep learning, the RX 6000 series was getting wrecked by Nvidia's mid-range GPUs which cost less than the RX 6900/6950. We still have no clue how well the 7000 series does in deep learning.
                AMD said 2.7x, which I'm pretty sure is relative to the RX 6950XT. Given that fp32 is like 2.6x as much, that's not very impressive.
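For a rough cross-check, spec-sheet FP32 numbers (as I recall them; approximate, and the RDNA3 figure relies on dual-issue) give about the same ratio:

```python
# Approximate public FP32 throughput figures (TFLOPS, boost clock):
rx_6950_xt = 23.65   # RX 6950 XT
rx_7900_xtx = 61.4   # RX 7900 XTX (dual-issue FP32)

uplift = rx_7900_xtx / rx_6950_xt
print(f"FP32 uplift: {uplift:.2f}x")  # prints 2.60x, close to the quoted 2.7x AI figure
```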

                Originally posted by WannaBeOCer View Post
                At first I thought AMD's AI accelerators were the matrix cores from CDNA, but reading other articles it seems like they're just inference-oriented cores designed to accelerate FSR3.
                I'm sure they can do more than FSR3. Nvidia has a nice secondary business for their gaming GPUs, selling them as Tesla cards (although the Tesla name has been dropped) intended largely for general-purpose inference workloads, and I'll bet AMD wants to do the same.

                Originally posted by WannaBeOCer View Post
                I bought a Titan RTX back in 2018 due to the Titan drivers and unlocked tensor core performance while their Geforce cards are limited to 50% performance. $2500 seems like a lot for a GPU but it paid off since I've had that performance since 2018. This can also be the case for ray traced games as well.
                Are the ray-tracing units of their gaming GPUs also cut back, or are you saying this is why someone would pay more for an Nvidia card?

                I agree that if someone really wanted a good ray tracing experience, they should get a Nvidia card. It certainly does have some allure for me, but luckily I don't currently have time to dabble with such things.



                • Originally posted by finalzone View Post
                  AMD's effort to release drivers as open source is slowly coming to fruition as ROCm HIP has improved; applications like DaVinci Resolve now run smoothly on the 6950 XT, for example. You will see the RDNA series gain adoption in content creation once the kinks in the software drivers are ironed out.
                  I'm sure it helps that AMD is selling its gaming GPUs as workstation GPUs as well. Because CDNA has no graphics or display capability, AMD has no real choice but to use RDNA for this, which means their compute support should eventually solidify.

                  https://www.amd.com/en/graphics/workstations



                  • Originally posted by WannaBeOCer View Post
                    AMD always wants to be first at everything, Nvidia has been testing MCM designs for a few years now: https://research.nvidia.com/publicat...ce-scalability
                    That doesn't count, since it's a compute architecture. Many compute problems have somewhat more data locality than graphics. That's why AMD had a compute-oriented GPU with multiple compute dies in the MI200-series, but their RDNA 3 still uses a monolithic compute/rendering die.

                    If we're honest, Apple won the race to mass-produce the first true multi-die graphics processor, with its M1 Ultra.



                    • ALRBP parityboy jamdox piotrj3 Gps4life darkbasic tajjada geearf

                      Huge thank you to all of you!
