AMD Radeon RX 6800 vs. NVIDIA RTX 30 Linux Performance Heating Up

  • hiryu
    replied
    Originally posted by Qaridarium View Post

    no... you think too one-dimensionally... but the topic is multi-dimensional...

    yes, the 6900XT is like 25% slower in Nvidia's implementation of raytracing,
    but if you use the AMD implementation of raytracing, the AMD 6900XT is 10-15% faster than Nvidia.

    so yes, the AMD raytracing implementation is the AMD-specific optimisation.

    but now you think all games will use the Nvidia implementation... WRONG!!!

    Why wrong? Well, the PlayStation 5 and Xbox use the AMD implementation, and because of this many games will use the AMD implementation.
    Absolutely! I am aware of all of this, and I should have been more clear in my previous post.

    If AMD's ray tracing implementation is faster than the RTX 20 series' in cases where both are optimized for their respective platforms... I can live with that, and I will likely pull the trigger.

    Right now I'm searching the web for such comparisons. If anyone has good ones to share, I'd appreciate it!



  • qarium
    replied
    Originally posted by hiryu View Post
    I'm currently trying to determine where ray tracing performance is... Lower than the RTX 30 series, of course. But what about compared to the RTX 20 series? I see the RTX 2080 Ti pulling ahead at least in some games... but perhaps AMD simply isn't a full drop-in replacement for ray tracing and some AMD-specific optimizations are needed?
    no... you think too one-dimensionally... but the topic is multi-dimensional...

    yes, the 6900XT is like 25% slower in Nvidia's implementation of raytracing,
    but if you use the AMD implementation of raytracing, the AMD 6900XT is 10-15% faster than the Nvidia 3090.

    so yes, the AMD raytracing implementation is the AMD-specific optimisation.

    but now you think all games will use the Nvidia implementation... WRONG!!!

    Why wrong? Well, the PlayStation 5 and Xbox use the AMD implementation, and because of this many games will use the AMD implementation.
    Last edited by qarium; 08 February 2021, 04:05 PM.



  • hiryu
    replied
    Originally posted by vein View Post

    Hmm... I think Sapphire is the best brand for AMD; their Nitro cards are really good.
    I remember I used to buy Sapphire back in the day and they seemed pretty good. Thanks!


    I'm currently trying to determine where ray tracing performance is... Lower than the RTX 30 series, of course. But what about compared to the RTX 20 series? I see the RTX 2080 Ti pulling ahead at least in some games... but perhaps AMD simply isn't a full drop-in replacement for ray tracing and some AMD-specific optimizations are needed?



  • qarium
    replied
    Originally posted by zexelon View Post
    Hey as an Nvidia teamster, I will give AMD kudos on this one! If you are strictly looking at gaming, the AMD 6K series is very impressive (sans Ray Tracing). No point in comparing them to Nvidia on availability as getting any one of these cards these days is brutal.
    It is nice to see some competition from team red finally! Hopefully they can get their act together for ray tracing in the next gen (7k series?), as it's rapidly looking like RT is going to be the "new hw T&L".
    I think AMD is working on 4-5 things:

    A die shrink to TSMC 5nm for the GPU core.
    HBM3 models with 32-64GB of VRAM for high-paying customers like Apple.
    Increasing the 128MB Infinity Cache to 256MB, and maybe moving it from 12nm to 12LP+ or 7nm.
    GLOBALFOUNDRIES (GF), the world's leading specialty foundry, today announced its most advanced FinFET solution, 12LP+, has completed technology qualification and is ready for production. GF's differentiated 12LP+ solution is optimized for artificial intelligence (AI) training and inference applicati...

    Also, GlobalFoundries is investing to double its 12nm (FD-SOI) capacity.

    Right now 5nm and 7nm are completely overloaded, without any free capacity, so the 256MB Infinity Cache chip will be FD-SOI 12LP+.

    I think AMD is also working on out-of-order (OoO) execution shaders.

    And a multi-GPU chiplet design for VR gaming: two GPU cores, one per eye, each with its own Infinity Cache and HBM3 VRAM.

    As you can see, Nvidia is doomed...

    as you can see Nvidia is doomed...



  • vein
    replied
    Originally posted by hiryu View Post
    I haven't bought an AMD GPU in ages... Any recommendations on the brand? I don't buy MSI anything as a general rule, but otherwise I'm probably open-minded to most others.

    Too bad there are no 6900 XT vs. 3090 results here... Definitely curious to know what shape the drivers are in for the 6900 XT under Linux.

    Michael, hopefully you will eventually get your hands on a 6900 XT! It would have been nice to also see 3090 results here, as these results suggest the 6800 XT under Linux wouldn't be far behind in most cases.

    I'm definitely not getting less than a 6800 XT.
    Hmm... I think Sapphire is the best brand for AMD; their Nitro cards are really good.



  • f0rmat
    replied
    Originally posted by Volta View Post
    AMD, take no prisoners. You just destroyed this ugly blob. Congratulations, open source developers!
    What would REALLY be nice for all of us is if AMD's success in open source causes nVidia to open up more. nVidia is a private company and can do as it sees fit, but it does have stockholders and needs to be profitable. AMD right now cannot meet the hardware demand, but if it does, and gaming results continue to improve, and ROCm or some other compute stack (preferably open source) that works on AMD begins to compete with CUDA, then it might be a whole new game, with all of us being the winners. Competition is a wonderful thing, and we as consumers benefit the most.

    I do not see anything like this happening overnight, and AMD has stumbled before, but it is good to see AMD's fighting spirit in challenging two behemoths at once: Intel on the x86-64 side and nVidia on the GPU side. Competition is good and is, like Linux, all about choice, and we as consumers are the ultimate winners.



  • zexelon
    replied
    Hey, as an Nvidia teamster, I will give AMD kudos on this one! If you are strictly looking at gaming, the AMD 6K series is very impressive (sans ray tracing). No point in comparing them to Nvidia on availability, as getting any one of these cards these days is brutal.

    It is nice to see some competition from team red finally! Hopefully they can get their act together for ray tracing in the next gen (7k series?), as it's rapidly looking like RT is going to be the "new hw T&L".



  • Volta
    replied
    AMD, take no prisoners. You just destroyed this ugly blob. Congratulations, open source developers!



  • Drago
    replied
    Originally posted by darkbasic View Post

    Source?
    Phoronix: Mesa Continues With More Optimizations For Workstation OpenGL Performance Well known AMD open-source driver developer Marek Olšák continues squeezing Mesa for every bit of possible performance, which in recent months has been with a seemingly workstation focus... http://www.phoronix.com/scan.php?page=news_item&



  • qarium
    replied
    It is really like an event from another dimension: AMD really did it, AMD won all the benchmarks against an Nvidia 3080.
    In Germany a 6900XT is at 1399€ and a 3090 is at 1999€, meaning you save 600€ with the AMD card.

    On geizhals.de there is no result for a 3080,
    but a 3070 is at 829€ and a 6800 is at 949€.
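
    The price math above can be checked with a quick sketch. Note these are the poster's quoted February-2021 listings, not current prices; by the poster's own numbers, the comparison actually flips in the lower tier, where the 6800 lists above the 3070.

    ```python
    # Listings in EUR as quoted in the post (geizhals.de, Feb 2021);
    # treat them as an illustrative snapshot, not current street prices.
    prices = {
        "RX 6900 XT": 1399,
        "RTX 3090": 1999,
        "RTX 3070": 829,
        "RX 6800": 949,
    }

    # Top tier: the 6900 XT undercuts the 3090.
    savings = prices["RTX 3090"] - prices["RX 6900 XT"]
    print(f"6900 XT vs 3090: AMD is {savings} EUR cheaper")  # 600 EUR cheaper

    # Lower tier: here the AMD card actually lists higher.
    delta = prices["RX 6800"] - prices["RTX 3070"]
    print(f"6800 vs 3070: AMD is {delta} EUR more expensive")  # 120 EUR more
    ```
    
    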

    In my point of view, a Linux customer would need to be stupid to buy Nvidia.

    Yes, they claim OptiX for Blender and CUDA... but soon we will find out Vulkan compute is all we need,
    and AMD has a raytracing renderer for Blender; it is just a plug-in and not a backend.

    A computer industry revolution is happening.

    I bought an HD3850 in 2007 for the AMD open-source driver, but yes, I admit we had to wait until 2020 for it to really become a better option than closed-source Nvidia.

    But yes... now I am sure Nvidia will go down in computer history.
    Just imagine a 6900XT shrunk to TSMC 5nm and, instead of only a 128MB Infinity Cache, a 256MB one.

    Or even an HBM3 version. Right now it is 100% sure Nvidia is doomed.

