Mesa 22.1-rc1 AMD Radeon Linux Gaming Performance vs. NVIDIA


  • #21
    Originally posted by Volta View Post

    Only someone with no brain would buy Nvidia. It's not only more expensive, it also doesn't have proper Linux drivers (a closed-source mess instead) and it's lagging behind in Wayland support.
    Doesn't have proper Linux drivers? 7 of the top 10 supercomputers use Nvidia Linux drivers. Nvidia GPUs are more capable than AMD cards, which is why they're more expensive. If you're just buying a gaming GPU, go AMD. Wayland introduces input latency, so even if it is supported, I'm sure FPS gamers will still prefer X11.

    Now, regarding drivers: I'd rather spend a second installing an Nvidia driver manually and have an easy-to-use settings panel than dig through release notes to see which functionality my current Mesa driver supports, and then, after all that digging, set an xorg config option or a system environment variable to enable the new features manually. It was a night-and-day difference going from an AMD GPU to an Nvidia GPU, especially since Valve's ACO compiler wasn't out yet. Waiting 10-15 minutes for a game to compile shaders was frustrating. I'm glad Valve created ACO and Mesa made it the default compiler in 2020.

    The Linux community expects gamers to switch to Linux for gaming, but simple functionality like enabling variable refresh rate has a learning curve for new users on AMD GPUs. Instead of AMD releasing a new control panel, they expect every desktop environment to integrate with the open-source drivers. That's much more work, which is why we only see RandR monitor controls.

    Hopefully Intel follows Nvidia and releases a GUI.

    Comment


    • #22
      > still lacking any RTX 3090 Ti

      TBH, who cares? That card is so ridiculously overpriced and power-hungry that there's no good reason to buy it; the only reason it exists is so Nvidia can win in dick-measuring contests that ignore efficiency (and for most use cases, the same applies to the non-Ti 3090).

      What I find interesting in these benchmarks is that the Radeon 6700 XT is (usually) faster than the GeForce 3070, sometimes even faster than the 3070 Ti.
      In the Windows benchmarks I've seen, the Radeon 6700 XT is usually around 8-10% slower than the GeForce 3070, closer to (but a little faster than) the 3060 Ti.
      Last edited by DanielG; 23 April 2022, 12:24 AM.

      Comment


      • #23
        Originally posted by Michael View Post

        Unfortunately AMD never sent me the RX 6900 series.
        Pity. That would be good advertising, and the cost would pay for itself.

        Comment


        • #24
          This generation nVidia was already mostly coasting on brand recognition and its proprietary CUDA/RT/DLSS.

          And if rumors are to be believed, AMD are far from done. Let's see if it all pans out, but if RDNA3 crushes nVidia on performance while using much less power on an older manufacturing node, while nVidia pushes a 600W monstrosity just to compete, that will really be something. But the rumor mill is always juicier than the reality, of course.

          Excited to see "the Linux friendly choice" become "the no compromise overall best choice".

          Comment


          • #25
            Originally posted by WannaBeOCer View Post

            Doesn't have proper Linux drivers? 7 of the top 10 supercomputers use Nvidia Linux drivers. Nvidia GPUs are more capable than AMD cards, which is why they're more expensive. If you're just buying a gaming GPU, go AMD.
            Main reason here: CUDA, as CUDA is pretty unrivaled in some supercomputing GPU tasks.
            But AMD is catching up with ROCm/HIP. They offer more efficient cards (which is a big topic in HPC/data centers, with rising energy costs). Aldebaran (MI200) is ~2-4 times faster than NVIDIA's A100.

            Depending on what you are doing, the MI200 is way cheaper than the A100 in terms of "bang for buck" or teraops, especially for larger data types like FP64 and FP32.
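            To make the "catching up" point concrete, here is a rough, illustrative CUDA FP64 vector-add sketch (the kernel name, sizes and values are made up for the example, not taken from any real project or benchmark here). The HIP version is essentially the same source with the cuda* runtime calls renamed to hip*, and AMD's hipify tools largely automate that rename, so code like this ports with little effort.

            ```cuda
            // Illustrative CUDA FP64 vector add; the HIP port is nearly line-for-line
            // identical (cudaMallocManaged -> hipMallocManaged, etc.).
            #include <cstdio>
            #include <cuda_runtime.h>

            __global__ void vec_add(const double *x, const double *y, double *out, int n) {
                int i = blockIdx.x * blockDim.x + threadIdx.x;
                if (i < n) out[i] = x[i] + y[i];  // one FP64 add per thread
            }

            int main() {
                const int n = 1 << 20;
                double *x, *y, *out;
                cudaMallocManaged(&x, n * sizeof(double));
                cudaMallocManaged(&y, n * sizeof(double));
                cudaMallocManaged(&out, n * sizeof(double));
                for (int i = 0; i < n; ++i) { x[i] = 1.0; y[i] = 2.0; }

                vec_add<<<(n + 255) / 256, 256>>>(x, y, out, n);
                cudaDeviceSynchronize();

                printf("out[0] = %f (expect 3.0)\n", out[0]);
                cudaFree(x); cudaFree(y); cudaFree(out);
                return 0;
            }
            ```

            FP64 is used in the sketch simply because that's the data type where the MI200's on-paper advantage is largest.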

            Comment


            • #26
              Originally posted by Spacefish View Post

              Main reason here: CUDA, as CUDA is pretty unrivaled in some supercomputing GPU tasks.
              But AMD is catching up with ROCm/HIP. They offer more efficient cards (which is a big topic in HPC/data centers, with rising energy costs). Aldebaran (MI200) is ~2-4 times faster than NVIDIA's A100.

              Depending on what you are doing, the MI200 is way cheaper than the A100 in terms of "bang for buck" or teraops, especially for larger data types like FP64 and FP32.
              I'm aware of the reason; it's why I own a Titan RTX and prefer Nvidia. I can install Nvidia's proprietary drivers on any distribution and use a Docker container for CUDA, unlike ROCm, which is limited to a few distributions. Performance is similar on practically all the distros I use.

              The MI210 would have been a bang-for-buck option if it were readily available, which it isn't, just like the MI100, which cost $12,500 at launch, $2,500 more than an A100 that was almost instantly available. The H100 has now been launched, and on paper it crushes the MI210 and comes with 50% more VRAM. Regarding efficiency, I assume you were talking about performance, since the A100 PCIe card used 50 W less than the MI100/MI210.

              9 of the top 10 Green500 supercomputers are using A100s.

              Edit: I want to add that when I say more capable, I am referring to the consumer cards. CDNA is based on GCN, which is why its compute performance is good. RDNA was built from the ground up for gaming, while Nvidia's consumer cards still have deep learning capabilities that slaughter RDNA thanks to their added Tensor cores. The 6900 XT performs like a Radeon VII when using ROCm. Maybe RDNA3 will have some matrix cores?
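              For anyone wondering what "thanks to their added Tensor cores" looks like in code: below is a minimal, illustrative sketch of CUDA's WMMA API, which is the documented way to drive the Tensor cores from a kernel. It assumes an sm_70 or newer GPU, and the kernel name and all-ones inputs are made up for the example.

              ```cuda
              // Minimal, illustrative Tensor-core sketch using CUDA's WMMA API.
              // Assumes an sm_70+ GPU; build with e.g.: nvcc -arch=sm_70 wmma_demo.cu
              #include <cstdio>
              #include <cuda_fp16.h>
              #include <mma.h>

              using namespace nvcuda;

              // One warp multiplies one 16x16 FP16 tile pair, accumulating in FP32
              // on the Tensor cores.
              __global__ void wmma_16x16(const half *a, const half *b, float *c) {
                  wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> fa;
                  wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> fb;
                  wmma::fragment<wmma::accumulator, 16, 16, 16, float> fc;

                  wmma::fill_fragment(fc, 0.0f);
                  wmma::load_matrix_sync(fa, a, 16);   // leading dimension = 16
                  wmma::load_matrix_sync(fb, b, 16);
                  wmma::mma_sync(fc, fa, fb, fc);      // the Tensor-core matrix multiply
                  wmma::store_matrix_sync(c, fc, 16, wmma::mem_row_major);
              }

              int main() {
                  half *a, *b;
                  float *c;
                  cudaMallocManaged(&a, 16 * 16 * sizeof(half));
                  cudaMallocManaged(&b, 16 * 16 * sizeof(half));
                  cudaMallocManaged(&c, 16 * 16 * sizeof(float));
                  for (int i = 0; i < 16 * 16; ++i) {  // A = B = all-ones
                      a[i] = __float2half(1.0f);
                      b[i] = __float2half(1.0f);
                  }
                  wmma_16x16<<<1, 32>>>(a, b, c);      // exactly one warp
                  cudaDeviceSynchronize();
                  printf("c[0] = %f (expect 16)\n", c[0]);  // 16-term dot product of ones
                  cudaFree(a); cudaFree(b); cudaFree(c);
                  return 0;
              }
              ```

              As far as I know RDNA2 has no dedicated matrix hardware behind an equivalent API, which is exactly the gap being described above.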
              Last edited by WannaBeOCer; 23 April 2022, 09:56 PM.

              Comment


              • #27
                Looks like the NVIDIA driver is more CPU-bound at lower settings/resolutions?

                Now if we compared ray tracing performance, well, it's no match vs NV atm.

                Comment


                • #28
                  Still enjoying the shit out of my wee 4650 APU
                  Hi

                  Comment


                  • #29
                    In every single benchmark where minimum FPS is measured, the 6800 XT beats the 3080 on that metric too, i.e. a smoother experience. Great stuff.

                    It's just such a pain that basic stuff like FreeSync does not work out of the box on Linux in 2022.
                    It's ridiculous considering that on Windows it has worked for five years without worrying about editing config files or which display server you are using, etc.
                    Last edited by humbug; 24 April 2022, 07:33 AM.

                    Comment


                    • #30
                      Originally posted by theriddick View Post
                      Looks like the NVIDIA driver is more CPU-bound at lower settings/resolutions?
                      Maybe, but it could also just be that the Infinity Cache is more effective at lower resolutions, and higher resolutions end up stressing memory bandwidth more.

                      Comment
