AMD Announces Radeon RX 7900 XTX / RX 7900 XT Graphics Cards - Linux Driver Support Expectations


  • WannaBeOCer
    replied
    Originally posted by coder View Post
    That's not accurate. Nvidia still sells plenty of their gaming GPUs for use in servers. What they did was create the 100-tier of dies that's HPC-oriented. And the Titan cards (up until the Titan RTX) were where they would cross over and sell that silicon into the gaming market. However, that ended with the A100, which lacks the silicon to make a decent gaming GPU (though you could still allegedly put it on a 3D graphics card, unlike AMD's CDNA processors).

    The NVIDIA data center platform is the world’s most adopted accelerated computing solution, deployed by the largest supercomputing centers and enterprises. Whether you're looking to solve business problems in deep learning and AI, HPC, graphics, or virtualization in the data center or at the edge, NVIDIA GPUs provide the ideal solution. Now, you can realize breakthrough performance with fewer, more powerful servers, while driving faster time to insights and reducing costs.


    Basically, all of their products not ending in 100 are gaming GPUs just repurposed for server use.


    I wouldn't equate training with content creation. AMD sells workstation cards that are usable for content creation.

    https://www.amd.com/en/graphics/workstations
    The Titans used the workstation (Quadro) dies, which lack FP64 units and HBM. They haven’t used a 100-die in their workstation (Quadro)/gaming GPUs since Maxwell, except for the Titan V, which was special.

    AMD’s cards can be used for training and content creation, but they perform slower due to the lack of fixed-function hardware. RT/Tensor cores accelerate OptiX, which is the reason there has lately been such a massive gap.

    https://github.com/ROCmSoftwarePlatf...ment-991679054

    https://www.phoronix.com/review/blender-33-nvidia-amd
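
    For anyone who wants to reproduce that OptiX-vs-HIP comparison themselves, here is a minimal sketch using Blender's Python API (assuming a Blender 3.x install with the Cycles add-on; run it as blender -b yourscene.blend --python script.py, where the scene and script names are placeholders):

    Code:
    import bpy

    prefs = bpy.context.preferences.addons["cycles"].preferences
    prefs.compute_device_type = "OPTIX"   # switch to "HIP" (AMD) or "CUDA" (Nvidia without OptiX) to compare backends
    prefs.get_devices()                   # refresh the detected device list
    for dev in prefs.devices:
        dev.use = (dev.type != "CPU")     # enable every GPU, leave the CPU out

    bpy.context.scene.cycles.device = "GPU"
    bpy.ops.render.render(write_still=True)   # render the current scene to its configured output path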
    Last edited by WannaBeOCer; 06 November 2022, 01:37 PM.



  • coder
    replied
    Originally posted by WannaBeOCer View Post
    Nvidia separated their gaming architecture and server cards years ago, and AMD already does the same with RDNA/CDNA.
    That's not accurate. Nvidia still sells plenty of their gaming GPUs for use in servers. What they did was create the 100-tier of dies that's HPC-oriented. And the Titan cards (up until the Titan RTX) were where they would cross over and sell that silicon into the gaming market. However, that ended with the A100, which lacks the silicon to make a decent gaming GPU (though you could still allegedly put it on a 3D graphics card, unlike AMD's CDNA processors).

    The NVIDIA data center platform is the world’s most adopted accelerated computing solution, deployed by the largest supercomputing centers and enterprises. Whether you're looking to solve business problems in deep learning and AI, HPC, graphics, or virtualization in the data center or at the edge, NVIDIA GPUs provide the ideal solution. Now, you can realize breakthrough performance with fewer, more powerful servers, while driving faster time to insights and reducing costs.


    Basically, all of their products not ending in 100 are gaming GPUs just repurposed for server use.

    Originally posted by WannaBeOCer View Post
    The difference is Nvidia still targets deep learning/content creation on their consumer cards. Again, it sounds like the AI accelerators in RDNA3 are aimed at inference rather than training neural networks.
    I wouldn't equate training with content creation. AMD sells workstation cards that are usable for content creation.



  • WannaBeOCer
    replied
    Originally posted by coder View Post
    AMD said 2.7x, which I'm pretty sure is relative to the RX 6950XT. Given that fp32 is like 2.6x as much, that's not very impressive.


    I'm sure they can do more than FSR3. Nvidia has a nice secondary business for their gaming GPUs, selling them as Tesla cards (although the Tesla name has been dropped) intended largely for general-purpose inference workloads, and I'll bet AMD wants to do the same.


    Are the ray tracing units of their gaming GPUs also cut back, or are you saying this is why someone would pay more for an Nvidia card?

    I agree that if someone really wanted a good ray tracing experience, they should get an Nvidia card. It certainly does have some allure for me, but luckily I don't currently have time to dabble with such things.
    Not sure if Nvidia limits their RT cores' performance in the GeForce lineup the way they do Tensor cores. Nvidia set an artificial cap on their GeForce cards' Tensor core throughput, similar to how AMD/Nvidia would cap FP64 on consumer architectures that had the same number of FP64 units as their server cards.
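
    Purely as an illustration of how one might probe that, here is a rough micro-benchmark sketch, assuming PyTorch with a CUDA build; the matrix size and iteration count are arbitrary and the numbers are only indicative, not proof of any driver-level cap:

    Code:
    import torch

    def matmul_tflops(dtype, n=8192, iters=20):
        a = torch.randn(n, n, device="cuda", dtype=dtype)
        b = torch.randn(n, n, device="cuda", dtype=dtype)
        torch.cuda.synchronize()
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        for _ in range(iters):
            torch.matmul(a, b)
        end.record()
        torch.cuda.synchronize()
        seconds = start.elapsed_time(end) / 1000.0      # elapsed_time() reports milliseconds
        return (2 * n ** 3 * iters) / seconds / 1e12    # ~2*n^3 FLOPs per square matmul

    print("FP32 matmul: %.1f TFLOPS" % matmul_tflops(torch.float32))
    print("FP16 matmul (tensor cores): %.1f TFLOPS" % matmul_tflops(torch.float16))

    The FP16/FP32 ratio you get depends heavily on the card, the driver and whether the library accumulates in FP16 or FP32, so treat it as a rough probe rather than a definitive measurement.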

    Nvidia separated their gaming architecture and server cards years ago, and AMD already does the same with RDNA/CDNA. The difference is Nvidia still targets deep learning/content creation on their consumer cards. Again, it sounds like the AI accelerators in RDNA3 are aimed at inference rather than training neural networks.

    The 100-class HPC die vs. the flagship gaming/workstation die, generation by generation:
    GP100, GP102
    GV100, TU102
    GA100, GA102
    GH100, AD102



  • albatorsk
    replied
    ALRBP parityboy jamdox piotrj3 Gps4life darkbasic tajjada geearf

    Huge thank you to all of you!



  • coder
    replied
    Originally posted by WannaBeOCer View Post
    AMD always wants to be first at everything; Nvidia has been testing MCM designs for a few years now: https://research.nvidia.com/publicat...ce-scalability
    That doesn't count, since it's a compute architecture. Many compute problems have somewhat more data locality than graphics. That's why AMD had a compute-oriented GPU with multiple compute dies in the MI200-series, but their RDNA 3 still uses a monolithic compute/rendering die.

    If we're honest, Apple won the race to mass-produce the first true multi-die graphics processor, with its M1 Ultra.



  • coder
    replied
    Originally posted by finalzone View Post
    AMD's effort to release drivers that are as open source as possible is slowly coming to fruition as ROCm HIP has improved; applications like DaVinci Resolve now run smoothly on the 6950XT, for example. You will see the RDNA series gain adoption in content creation once the kinks in the software drivers are ironed out.
    I'm sure it helps that AMD is also selling its gaming GPUs as workstation GPUs. Because CDNA has no graphics or display capability, AMD has no real choice but to use RDNA for this, which means their compute support should eventually solidify.



  • coder
    replied
    Originally posted by WannaBeOCer View Post
    In rendering/deep learning, the RX 6000 series was getting wrecked by Nvidia's mid-range GPUs which cost less than the RX 6900/6950. We still have no clue how well the 7000 series does in deep learning.
    AMD said 2.7x, which I'm pretty sure is relative to the RX 6950XT. Given that fp32 is like 2.6x as much, that's not very impressive.
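
    To spell the arithmetic out (treating both figures as the rough marketing numbers they are, not measured results):

    Code:
    # Both inputs are assumptions taken from the discussion above, not benchmarks.
    ai_gain = 2.7    # claimed AI throughput gain over the RX 6950XT
    fp32_gain = 2.6  # approximate raw FP32 throughput gain
    print("AI speedup beyond generic FP32 scaling: %.2fx" % (ai_gain / fp32_gain))  # ~1.04x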

    Originally posted by WannaBeOCer View Post
    At first I thought AMD's AI accelerators were their matrix cores from CDNA, but reading other articles it seems like they're just inference-oriented cores designed to accelerate FSR3.
    I'm sure they can do more than FSR3. Nvidia has a nice secondary business for their gaming GPUs, selling them as Tesla cards (although the Tesla name has been dropped) intended largely for general-purpose inference workloads, and I'll bet AMD wants to do the same.

    Originally posted by WannaBeOCer View Post
    I bought a Titan RTX back in 2018 due to the Titan drivers and unlocked Tensor core performance, while their GeForce cards are limited to 50% of that throughput. $2,500 seems like a lot for a GPU, but it paid off since I've had that performance since 2018. This can be the case for ray-traced games as well.
    Are the ray tracing units of their gaming GPUs also cut back, or are you saying this is why someone would pay more for an Nvidia card?

    I agree that if someone really wanted a good ray tracing experience, they should get an Nvidia card. It certainly does have some allure for me, but luckily I don't currently have time to dabble with such things.



  • WannaBeOCer
    replied
    Originally posted by finalzone View Post
    About the latter statement: ATI/AMD were the first to implement tessellation, and Nvidia abused their own implementation by intentionally crippling the competition (remember Crysis 3 with the excessive use of tessellation on static objects?). As for real-time ray tracing in gaming without upscaling techniques, we are still not there yet. Let's wait for the tests of RDNA3, the very first MCM video card, to see the improvement compared to the 6950XT, which I have. On a side note, someone pointed out the RTX 4090 is basically a Titan card.


    Very interesting how the roles have switched between the two companies in recent years.


    You just highlighted Nvidia's main advantage: better software, documentation and a higher budget. AMD took a hit with the mess that was ROCm (which badly neglected APUs for years), although improvement came thanks to open-source contributors and AMD employees. Unsurprisingly, content-creation software makers focused on Nvidia for those reasons during that time. AMD's effort to release drivers that are as open source as possible is slowly coming to fruition as ROCm HIP has improved; applications like DaVinci Resolve now run smoothly on the 6950XT, for example. You will see the RDNA series gain adoption in content creation once the kinks in the software drivers are ironed out.
    I’m aware AMD released the first DX11-capable cards with the HD 5000 series, but those cards lacked enough tessellation units. Can't blame developers for developing for specific hardware. AMD tried the same crap recently with Godfall, which required 12GB of VRAM to try to sabotage the 10GB RTX 3080.

    AMD may run into this issue soon, as we're seeing fully path-traced demos from Nvidia like Racer RTX. Without DLSS, current titles are capable of 100+ FPS at 1440p, or 4K/60 FPS, with an RTX 4090. Developers are just going to add more ray-traced objects as the hardware improves year after year.

    The NVIDIA GeForce RTX 4090 Founders Edition offers huge gains over its predecessors. It's the first graphics card to get you 4K 60 FPS with ray tracing enabled, and upscaling disabled. Do you prefer 120 FPS instead of 60? Just turn on DLSS 3.


    Aside from Nvidia's software, their hardware is also incredible. I consider Turing as revolutionary as Fermi, their first compute-oriented architecture, due to the introduction of RT/Tensor cores. You keep bringing up software, but even with the introduction of HIP it doesn't change the fact that RT/Tensor cores accelerate OptiX along with many other tasks. The RTX 3090/4090 aren't Titan cards since they lack the Titan drivers and optimizations; it's the reason we still see the Titan RTX outperforming both in some tasks. It's no surprise AMD followed with their introduction of Ray Accelerators in RDNA2 and now AI accelerators in RDNA3.

    Gamers make fun of Fermi, but it changed how research was done. It was also one of the largest leaps for deep learning: it took only 6 days to train AlexNet using Fermi GPUs.

    AMD always wants to be first at everything; Nvidia has been testing MCM designs for a few years now: https://research.nvidia.com/publicat...ce-scalability



  • finalzone
    replied
    Originally posted by WannaBeOCer View Post
    AMD made some comment about their RDNA3 cards being more future-proof. I disagree; as we continue to see ray tracing adoption, these cards will have better longevity, similar to how the GTX 400 series' performance outlived the HD 5000/6000 series due to the adoption of tessellation.
    About the latter statement: ATI/AMD were the first to implement tessellation, and Nvidia abused their own implementation by intentionally crippling the competition (remember Crysis 3 with the excessive use of tessellation on static objects?). As for real-time ray tracing in gaming without upscaling techniques, we are still not there yet. Let's wait for the tests of RDNA3, the very first MCM video card, to see the improvement compared to the 6950XT, which I have. On a side note, someone pointed out the RTX 4090 is basically a Titan card.

    You have to remember Nvidia’s consumer cards are GPGPUs, while with RDNA AMD switched to making gaming GPUs.
    Very interesting how the roles have switched between the two companies in recent years.

    It’s very rare to see an AMD RDNA card mentioned in publications, while you’ll see plenty of RTX cards used. Same with content creation, thanks to Nvidia’s Studio Drivers. At the end of the day, Nvidia’s GPUs target multiple markets, while AMD’s consumer cards, with their poor performance in those other markets, are restricted to just gaming.
    You just highlighted Nvidia's main advantage: better software, documentation and a higher budget. AMD took a hit with the mess that was ROCm (which badly neglected APUs for years), although improvement came thanks to open-source contributors and AMD employees. Unsurprisingly, content-creation software makers focused on Nvidia for those reasons during that time. AMD's effort to release drivers that are as open source as possible is slowly coming to fruition as ROCm HIP has improved; applications like DaVinci Resolve now run smoothly on the 6950XT, for example. You will see the RDNA series gain adoption in content creation once the kinks in the software drivers are ironed out.
    Last edited by finalzone; 06 November 2022, 12:49 AM.



  • geearf
    replied
    Originally posted by albatorsk View Post
    I'm a long-time GeForce user and I'm seriously considering getting a new RX 7900, as I especially like AMD's stance on open source, but I am utterly confused about the driver situation. I'm using Ubuntu and I'm used to only having one driver to install (nvidia-driver-###) and then I'm all set. What's messing with my mind is the bit below:



    To an outsider like me it seems like there are several different drivers, or combinations of drivers. Will I (most likely) need to upgrade to a newer kernel than what's included in Ubuntu 22.10 by default? What is "the RADV Vulkan driver"? How does it relate to "RadeonSI Gallium3D", if at all? How do I figure out which I should use? Can both be installed at the same time? Do they provide the same functionality? Is RADV required for Vulkan? Does that driver also support OpenGL for all non-Vulkan titles? There's also something called AMDGPU and AMDGPU-PRO. How do they fit in with all this?

    Or am I just overthinking all this, and all I have to do is plop in an AMD graphics card and it'll just work?
    Maybe this could help: https://ibb.co/Lpjzzwp

    Sorry, I forgot who created this on Reddit.
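
    For what it's worth, on a stock Ubuntu install those names are all layers of a single open-source stack: amdgpu is the kernel driver, RadeonSI is Mesa's OpenGL driver and RADV is Mesa's Vulkan driver, with AMDGPU-PRO being the optional proprietary add-on. Here is a rough Python sketch for checking which pieces are active (assuming glxinfo from mesa-utils and vulkaninfo from vulkan-tools are installed; the exact output wording varies by version):

    Code:
    import shutil
    import subprocess

    def run(cmd):
        # Return the command's stdout, or an empty string if it isn't available.
        try:
            return subprocess.run(cmd, capture_output=True, text=True).stdout
        except FileNotFoundError:
            return ""

    # Kernel side: the amdgpu kernel module drives all modern Radeon cards.
    print("amdgpu kernel module:", "loaded" if "amdgpu" in run(["lsmod"]) else "not loaded")

    # OpenGL side: RadeonSI (Mesa/Gallium3D) shows up in the renderer string.
    if shutil.which("glxinfo"):
        for line in run(["glxinfo", "-B"]).splitlines():
            if "OpenGL renderer" in line or "OpenGL core profile version" in line:
                print(line.strip())

    # Vulkan side: RADV (Mesa) reports itself as the driver name.
    if shutil.which("vulkaninfo"):
        for line in run(["vulkaninfo", "--summary"]).splitlines():
            if "driverName" in line or "deviceName" in line:
                print(line.strip())

    If the renderer string mentions RadeonSI and the Vulkan driverName reports radv, the default stack is already handling both OpenGL and Vulkan, and in most cases you really can just plop the card in.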

