
AMD Radeon RX 6800 vs. NVIDIA RTX 30 Linux Performance Heating Up


  • Mr.I
    replied
    Originally posted by Qaridarium View Post
    Yes, Intel and Nvidia now have deals with TSMC for 3/5/7nm too, but it is too late; AMD has a chiplet design, which means Intel and Nvidia are doomed anyway.
    I really hope Intel can make it, though. Intel is the most open-source-friendly of these three major hardware manufacturers; Nvidia is the one that really needs to die.



  • mppix
    replied
    Originally posted by Qaridarium View Post

    To my knowledge this has nothing to do with the API. Nvidia does Vulkan raytracing and AMD does Vulkan raytracing. Even with the same API, each hardware vendor can implement it differently... and likewise, game engines can utilize the same API differently. The point is: you can write a game engine so that it works best on Nvidia hardware, or build it so that it works best on AMD hardware.
    Are you sure? Nvidia shipped raytracing before Vulkan had raytracing extensions, so I doubt that the early Nvidia raytracing games used Vulkan...
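    For context, the timeline mppix refers to is visible in the Vulkan extension names themselves: early Nvidia RTX titles targeted the vendor-specific VK_NV_ray_tracing extension, while the cross-vendor VK_KHR_ray_tracing_pipeline extension (the one AMD's RDNA2 drivers expose) was only finalized in late 2020. A minimal sketch of how an engine might pick a code path from the extensions a driver advertises; the selection policy here is hypothetical, not taken from any real engine:

```python
# Illustrative sketch: choosing a raytracing code path from the device's
# supported Vulkan extension list. The preference order (cross-vendor KHR
# first, vendor-specific NV as a legacy fallback) is a hypothetical engine
# policy, not any real engine's logic.

# Cross-vendor raytracing extension, finalized in late 2020.
KHR_RT = "VK_KHR_ray_tracing_pipeline"
# Earlier Nvidia-only extension used by pre-KHR titles.
NV_RT = "VK_NV_ray_tracing"

def pick_raytracing_path(supported_extensions):
    """Return which raytracing path a (hypothetical) engine would use."""
    if KHR_RT in supported_extensions:
        return "khr"       # portable path: works on AMD and Nvidia
    if NV_RT in supported_extensions:
        return "nv"        # legacy Nvidia-only path
    return "raster-only"   # no raytracing support advertised

# A 2018-era Nvidia driver exposed only the NV extension:
print(pick_raytracing_path({NV_RT, "VK_KHR_swapchain"}))   # nv
# A current AMD RDNA2 driver exposes the KHR extension:
print(pick_raytracing_path({KHR_RT, "VK_KHR_swapchain"}))  # khr
```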



  • Lightkey
    replied
    Originally posted by Drago View Post

    Yeah, this s*it is CPU-bottlenecked as hell. It is one of the fastest games on my integrated Ryzen 4650G Vega APU.
    There is a regression since the test in November: https://www.phoronix.com/scan.php?pa...formance&num=2
    Back then, the RX 6800 XT went up to 135 fps in that test and the RTX 3080 had 147 fps! The cards below had about the same results, so it's only the top results that somehow got bottlenecked by the CPU, on the same hardware.



  • qarium
    replied
    Originally posted by mppix View Post
    Correct me if I am wrong, but isn't AMD = Vulkan raytracing?
    ... with Nvidia doing its own thing again, similar to G-Sync vs. FreeSync?
    To my knowledge this has nothing to do with the API. Nvidia does Vulkan raytracing and AMD does Vulkan raytracing. Even with the same API, each hardware vendor can implement it differently... and likewise, game engines can utilize the same API differently. The point is: you can write a game engine so that it works best on Nvidia hardware, or build it so that it works best on AMD hardware.



  • mppix
    replied
    Originally posted by Qaridarium View Post

    It is not a unique case... as soon as you do deeper research into which implementation the games use, you will find that AMD wins all the raytracing benchmarks that use the AMD raytracing implementation.

    Why should only Nvidia get the benefit of an "Nvidia implementation"? It does not take five brain cells to imagine that it runs slower on AMD hardware.

    I can find you more examples, no problem.

    (Edit) For example, Dirt 5 raytracing is also faster on a 6900 XT than on a 3090:

    https://www.tomshardware.com/news/am...6900-xt-review

    and it also uses the AMD raytracing implementation:

    https://wccftech.com/amd-helped-godf...ly-noticeable/

    Godfall should be faster on the 6900 XT too, because it is also an AMD implementation.
    Correct me if I am wrong, but isn't AMD = Vulkan raytracing?
    ... with Nvidia doing its own thing again, similar to G-Sync vs. FreeSync?



  • Michael
    replied
    Originally posted by ernstp View Post
    Michael

    I think you meant to say ACO back-end here.
    Yep thanks



  • tildearrow
    replied
    Originally posted by ernstp View Post
    Michael

    I think you meant to say ACO back-end here.
    It appears Michael was sleeping while typing the article. There are a couple of other typos.



  • qarium
    replied
    Originally posted by zexelon View Post
    That is some awesome information in your post, thanks!
    I must admit though that I am unsure if your conclusion (Nvidia is doomed) is humor or an honest appraisal.
    It is an honest appraisal... if you look past the distractions of Nvidia PR, like CUDA, OptiX, DLSS2, and Nvidia-only raytracing implementations (meant to make people believe there is no AMD raytracing implementation),

    and go to the cold numbers of physical fact instead: the 8nm Samsung node is already a failure if you look at FPS per watt.
    You will find that yes, Nvidia has a very good cooling system, but only because it has to dissipate 30-40 W more for the same FPS.
    This means that as soon as you buy a water-cooled 6900 XT and overclock it to 2.7 GHz, the 3090 is doomed.
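    The FPS-per-watt claim above is simple arithmetic; here is a sketch with made-up numbers (the 35 W delta echoes the post's "30-40 W more", but neither wattage is a measured value):

```python
# Illustrative perf-per-watt arithmetic with hypothetical numbers:
# two cards delivering the same FPS, one drawing ~35 W more board power.
fps = 100.0
watts_a = 255.0           # hypothetical card A board power
watts_b = watts_a + 35.0  # hypothetical card B: same FPS, 35 W more

eff_a = fps / watts_a     # efficiency in FPS per watt
eff_b = fps / watts_b

print(f"A: {eff_a:.3f} FPS/W, B: {eff_b:.3f} FPS/W")
print(f"B needs {100 * (watts_b / watts_a - 1):.1f}% more power for the same FPS")
```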

    And remember how AMD nuked Intel with a chiplet design: the AMD 5950X is a chiplet design of three chips.
    The 3090 is still a monolithic chip without a chiplet design, but the 6900 XT is already a two-chip chiplet design, because the Infinity Cache is in fact another die. This means AMD does not only have the better node; they also already have a GPU chiplet design.

    This means AMD has more options to improve the design than Nvidia. Yes, Nvidia can buy some 5/7nm node capacity from TSMC and put faster RAM on it, like HBM3, and they already do this for high-end server/workstation cards,
    but that is the very expensive way; AMD's chiplet design keeps costs low. The Infinity Cache is already faster and draws less power than faster VRAM, and because of the chiplet design AMD can use an outdated 12nm node for the cache... whereas Nvidia, with no chiplet design right now, is doomed to make the cache in 6/7/8nm instead of cheap 12nm.

    This means AMD has already outsmarted Nvidia... and AMD has more options to go faster, like a bigger Infinity Cache, or switching from GDDR6 to GDDR6X and/or HBM. Believe it or not, AMD already produces GPU chips at 5nm at TSMC.

    Yes, Intel and Nvidia now have deals with TSMC for 3/5/7nm too, but it is too late; AMD has a chiplet design, which means Intel and Nvidia are doomed anyway.

    Now some people believe Nvidia is superior because it has DLSS2, but this is a short-lived illusion, because the 6900 XT already has AMD Super Resolution in hardware; people believe it does not only because the driver does not expose it yet.
    Yes, AMD has a tradition of designing the hardware first and writing the software for it later. As soon as the driver utilizes the AMD Super Resolution hardware, Nvidia's DLSS2 is doomed, because to my knowledge the AMD version is faster and better.
    But the AMD version is based on a static algorithm instead of deep-learning AI like DLSS2, and because of this people believe the Nvidia version is superior. That belief is wrong: you can burn a lot of energy on deep learning and the result can still be beaten by a static algorithm; it only means Nvidia has not yet found such an algorithm, and maybe AMD has patents on it. So maybe Nvidia only does deep-learning DLSS2 because it lost the patent wars on hardware algorithms, like the S3TC texture-compression patent in the past. We will know for sure the moment AMD releases the driver for AMD Super Resolution.
    But even if the AMD solution is not as good as DLSS2, I call it a fact that it will consume a lot less power,
    because deep learning is the worst case of all power-draining algorithms.
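    As an aside on what a "static algorithm" means here: fixed interpolation arithmetic with no trained weights. The toy bilinear upscaler below is purely illustrative; it is not AMD's actual Super Resolution algorithm, which had not been published at the time of this thread:

```python
def bilinear_upscale(img, new_h, new_w):
    """Upscale a 2D grayscale image (list of lists) with bilinear
    interpolation -- a fixed 'static' algorithm: no training, no learned
    weights, just arithmetic on the four nearest source pixels."""
    old_h, old_w = len(img), len(img[0])
    out = []
    for y in range(new_h):
        # Map the output row back into source coordinates.
        src_y = y * (old_h - 1) / (new_h - 1) if new_h > 1 else 0.0
        y0 = int(src_y)
        y1 = min(y0 + 1, old_h - 1)
        fy = src_y - y0
        row = []
        for x in range(new_w):
            src_x = x * (old_w - 1) / (new_w - 1) if new_w > 1 else 0.0
            x0 = int(src_x)
            x1 = min(x0 + 1, old_w - 1)
            fx = src_x - x0
            # Blend horizontally on both source rows, then vertically.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

# A 2x2 gradient upscaled to 3x3: the new centre pixel is the average
# of all four corners.
small = [[0.0, 1.0],
         [1.0, 2.0]]
big = bilinear_upscale(small, 3, 3)
print(big[1][1])  # 1.0
```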

    This means Nvidia is not only doomed because of chiplet designs and nm nodes; in the end, their DLSS solution will be doomed too.



  • lilunxm12
    replied
    Originally posted by CochainComplex View Post
    Nice! But what happened to the Radeon VII? AFAIK, the last time it was on par with the 5700 XT.
    These are 4K tests. The Radeon VII has more VRAM and higher bandwidth, while the 5700 XT has been known since launch day as a great card for 1440p or lower.



  • ernstp
    replied
    Originally posted by hiryu View Post
    I haven't bought an AMD GPU in ages... Any recommendations on the brand?
    The AMD-exclusive brands, usually: Sapphire and XFX. But quality (and the price 😢) has been raised across the board recently; most are pretty good.

