AMD Radeon RX 6800 vs. NVIDIA RTX 30 Linux Performance Heating Up


  • #51
    Originally posted by pete910 View Post
It's in the algorithm that an Nvidia card goes first if the averages are the same, despite the fact that the letter A is at the beginning of the alphabet.
The letters you are interested in are 'x' and 't' ("RTX" vs. "RX" in the row names), not the vendor initial.
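The tie-break being described can be sketched in a few lines. Everything below is hypothetical illustration (card names, FPS values, the sort key), not Phoronix's actual graphing code:

```python
# Hypothetical sketch of how a results table could order its rows:
# primary key is average FPS (descending), secondary key is the row
# name (ascending), so on a tie "RTX ..." lands before "RX ..."
# because 'T' sorts before 'X'.
rows = [
    ("RX 6800", 120.0),
    ("RTX 3080", 120.0),   # same average as the RX 6800
    ("RTX 3070", 101.5),
]

ordered = sorted(rows, key=lambda r: (-r[1], r[0]))
for name, avg in ordered:
    print(f"{name}: {avg}")
# → RTX 3080 first, RX 6800 second, RTX 3070 last
```

With this key the vendor letter 'A'/'N' never enters into it; only the characters of the product name do.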



    • #52
      Originally posted by Qaridarium View Post
I bought an HD 3850 in 2007 for the AMD open-source driver.
I bought an HD 4850 in 2008 for the AMD open-source driver.



      • #53
        Originally posted by zexelon View Post
It's rapidly looking like RT is going to be the "new hw T&L".
"Rapidly" as in "in a few generations": once you no longer have to pay extra for something that simultaneously tanks your FPS and reduces picture quality.



        • #54
          Michael
          while the RADV driver continues to default to the AMDGPU back-end
          I think you meant to say ACO back-end here.



          • #55
            Originally posted by hiryu View Post
            I haven't bought an AMD GPU in ages... Any recommendations on the brand?
The AMD-exclusive brands, usually: Sapphire and XFX. But the bar has been raised (and the price 😢) across the board recently; most are pretty good.



            • #56
              Originally posted by CochainComplex View Post
Nice! But what happened to the Radeon VII? AFAIK the last time around it was on par with the 5700 XT.
These are 4K tests. The Radeon VII has more VRAM and higher memory bandwidth, while the 5700 XT has been known since launch day as a great card for 1440p or lower.



              • #57
                Originally posted by zexelon View Post
                That is some awesome information in your post, thanks!
                I must admit though that I am unsure if your conclusion (Nvidia is doomed) is humor or an honest appraisal.
It is an honest appraisal... if you stop watching Nvidia PR distractions like CUDA, OptiX, DLSS 2 and the Nvidia-only raytracing implementations (meant to make people believe there is no AMD raytracing implementation),

then you get to the cold physical numbers: the 8 nm Samsung node is already a failure if you look at FPS per watt.
You will find that yes, Nvidia has a very good cooling system, but only because it has to dissipate 30-40 W more for the same FPS.
This means that as soon as you buy a water-cooled 6900 XT and overclock it to 2.7 GHz, the 3090 is doomed.
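The FPS-per-watt argument above reduces to simple division. A sketch with made-up FPS and wattage figures (only the 30-40 W gap is the poster's claim; none of these numbers are measurements):

```python
# Illustrative perf-per-watt comparison. The ~40 W gap at equal FPS is
# the poster's claim; the absolute numbers are invented for the example.
def fps_per_watt(fps: float, watts: float) -> float:
    return fps / watts

rx_6900xt = fps_per_watt(100, 300)   # hypothetical: 100 FPS at 300 W
rtx_3090  = fps_per_watt(100, 340)   # same FPS, ~40 W more board power

print(f"6900 XT: {rx_6900xt:.3f} FPS/W")
print(f"3090:    {rtx_3090:.3f} FPS/W")
```

Same frame rate at higher power draw always loses this metric, which is the whole of the argument being made.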

And remember how AMD nuked Intel with a chiplet design: the AMD 5950X is a chiplet design of three chips.
The 3090 is still a monolithic chip without a chiplet design, but the 6900 XT is already a two-chip chiplet design, because the Infinity Cache is in fact another die. This means AMD does not only have the better node; they also already have a GPU chiplet design.

This means AMD has more options to improve the design than Nvidia. Yes, Nvidia can buy some 5/7 nm capacity from TSMC and put faster RAM on it, like HBM3, and they already do this for high-end server/workstation cards.
But this is a very expensive route. AMD's chiplet design keeps costs low: the Infinity Cache is already faster and has lower power consumption than faster VRAM, and because of the chiplet design AMD can use an outdated 12 nm node for the cache. Nvidia, having no chiplet design right now, is doomed to make its cache on 6/7/8 nm instead of cheap 12 nm.

This means AMD has already outsmarted Nvidia... and AMD has more options to go faster, such as a bigger Infinity Cache, or switching from GDDR6 to GDDR6X and/or HBM. Believe it or not, AMD already produces GPU chips at 5 nm at TSMC.

Yes, Intel and Nvidia now have deals with TSMC for 3/5/7 nm too, but it is too late: AMD has the chiplet design, which means they are doomed anyway.

Now some people believe Nvidia is superior because they have DLSS 2, but this is a short-term illusion, because the 6900 XT already has AMD Super Resolution in hardware; people believe it does not only because the driver does not expose it yet.
Yes, AMD has a tradition of designing the hardware first and writing the software for it later. As soon as the driver utilizes the Super Resolution hardware, Nvidia's DLSS 2 is doomed, because to my knowledge the AMD version is faster and better.
The AMD version is based on a static algorithm instead of deep learning like DLSS 2, and because of this people believe the Nvidia version is superior. But that is a mistaken belief: you can waste a lot of energy on deep learning and still be beaten by a static algorithm. It only means Nvidia has not yet found such an algorithm, and maybe AMD has patents on it. So maybe Nvidia only does deep-learning DLSS 2 because they lost the patent wars on algorithms in hardware, like the S3TC texture-compression patent in the past. We will know for sure the moment AMD releases the driver for Super Resolution.
But even if the AMD solution is not as good as DLSS 2, I call it a fact that it will consume a lot less power,
because deep learning is the worst case of all power-draining algorithms.
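For contrast, here is the simplest possible form of a "static algorithm" upscaler: a fixed nearest-neighbour 2x filter. This is purely illustrative of the category (no weights, no training, constant cost per output pixel) and is in no way AMD's actual Super Resolution algorithm:

```python
# A trivial "static" upscaler: nearest-neighbour, fixed 2x factor.
# Each source pixel is duplicated horizontally and each row vertically.
# No learned weights are involved; the cost is a few copies per pixel.
def upscale_2x(image):
    out = []
    for row in image:
        stretched = [px for px in row for _ in (0, 1)]  # duplicate columns
        out.append(stretched)
        out.append(list(stretched))                      # duplicate the row
    return out

low_res = [[1, 2],
           [3, 4]]
print(upscale_2x(low_res))
# → [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Real static upscalers (bilinear, Lanczos, sharpening filters) are more elaborate, but they share this property: the per-pixel work is fixed and small, which is why they draw far less power than inference over a neural network.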

This means Nvidia is not only doomed because of chiplet design and nm nodes; in the end their DLSS solution will be doomed too.



                • #58
                  Originally posted by ernstp View Post
                  Michael

                  I think you meant to say ACO back-end here.
It appears Michael was sleeping while typing the article. There are a couple of other typos.



                  • #59
                    Originally posted by ernstp View Post
                    Michael

                    I think you meant to say ACO back-end here.
                    Yep thanks
                    Michael Larabel
                    http://www.michaellarabel.com/



                    • #60
                      Originally posted by Qaridarium View Post

It is not a unique case... as soon as you do deeper research into what kind of implementation the games use, you will find that AMD wins all the raytracing benchmarks that use the AMD raytracing implementation.

Why should only Nvidia get the benefit of an "Nvidia implementation"? It does not take five brain cells to imagine that it runs slower on AMD hardware.

I can find you more examples, no problem.

(Edit) For example, Dirt 5 raytracing is also faster on a 6900 XT than on a 3090:

                      https://www.tomshardware.com/news/am...6900-xt-review

and it is also the AMD raytracing implementation:

                      https://wccftech.com/amd-helped-godf...ly-noticeable/

Godfall should be faster on the 6900 XT too, because it is also the AMD implementation.
Correct me if I am wrong, but isn't AMD = Vulkan raytracing,
with Nvidia doing its own thing again, similar to G-Sync vs. FreeSync?

