Radeon RX 6900 XT Launches As Flagship Card With Open-Source Drivers But Very Limited Availability

  • qarium
    replied
    Originally posted by mdedetrich View Post
And it's a cache, not actual memory. It's only 128 MB; you can't cache everything. There is a reason why the 6900XT generally loses out at 4K: you have too much data for a proportion of it to always be cached.
    The Infinity Cache (along with TSMC's slightly better node) is the ONLY reason the RX 6xxx series is competitive; everything else about the card hardware-wise is weaker. There is nothing magical about Infinity Cache, it's just a cache similar to what you find in AMD's CPU lineup.
    Maybe you should watch the videos and stop talking out of your ass.

    * https://www.youtube.com/watch?v=1ITdex_JrBM
    * https://www.youtube.com/watch?v=nxQ0-QtAtxA

    A massive collection of games was benchmarked, including ones made in collaboration with AMD (e.g. Borderlands)

    Uh, it's impossible for a game engine to do this because it's completely abstracted away, in the exact same way the L1/L2/L3 caches are abstracted when compiling code for x86-64 CPUs. In both cases the hardware tries to automatically determine what data gets cached using heuristics (note that for CPUs this can be somewhat mitigated with microcode, but that is another discussion entirely).

    Especially with game engines: neither the Vulkan, OpenGL nor DirectX APIs give any control over the Infinity Cache on AMD's GPUs; it's completely a black box. Considering that AMD's is the only GPU with such a cache, it's also highly unlikely that the APIs will be adjusted to take it into account in the future.
    So you agree that a 6900XT always wins at 2K and 2.5K, maybe also at 3K. This means that if you use one of those as your native resolution and use AMD FidelityFX Super Resolution to scale up to 4K or 5K, then AMD is always faster. The only problem is that FidelityFX Super Resolution is not yet ready in the software/driver, but the hardware is able to do exactly this.

    Also, your sources say the 3090 is slower than a 6900XT only in rare cases ("cherry picking", as you called it).
    That means in most cases at 4K, according to your YouTube sources, the 3090 is mostly only 12% faster.
    This means you pay a 50% higher price to get 12% more performance.
    Not a good deal!

    As you said, there are two ways AMD could improve the situation even further. One is profiling games and shipping new microcode: "determine what data gets cached using heuristics (note that for CPUs this can be somewhat mitigated with microcode, but that is another discussion entirely)."

    The other one, as you said, is this: "neither the Vulkan, OpenGL nor DirectX APIs give any control over the Infinity Cache on AMD's GPUs; it's completely a black box. Considering that AMD's is the only GPU with such a cache, it's also highly unlikely that the APIs will be adjusted to take it into account in the future."
    AMD could change the Vulkan API to give control over the Infinity Cache. You say that is highly unlikely because it is the only such hardware around yet, but this architecture is here to stay, any future AMD hardware will have it as well, and I am sure future Nvidia products will have it too. Because of that, it is highly likely that we will get optimized game engines and even new Vulkan API changes to support exactly this.

    Also, AMD could easily produce a 6990XTX by increasing the Infinity Cache to 256 MB in 7 nm, water cooling it, and running the same chip at 2700 MHz. Your YouTube sources say the 6900XT is already faster at 2.7 GHz; with a bigger 256 MB Infinity Cache the 3090 would lose all benchmarks.

    But even without that, the 3090 is a bad deal: you pay a 50% higher price for 12% higher performance.
    Even in the best-case scenario, with the Nvidia ray-tracing implementation, you get 25% higher performance at a 50% higher price.

    Also, on Linux you have open-source drivers with perfect Wayland support on the AMD side.

    As a Linux user I would never buy an Nvidia card.
    Last edited by qarium; 11 December 2020, 05:13 PM.



  • mdedetrich
    replied
    Originally posted by Qaridarium View Post

    Well, the Infinity Cache is much faster memory than GDDR6X...
    And it's a cache, not actual memory. It's only 128 MB; you can't cache everything. There is a reason why the 6900XT generally loses out at 4K: you have too much data for a proportion of it to always be cached.

    Originally posted by Qaridarium View Post
    Yes, the 6900XT has only a 256-bit GDDR6 bus, but that is not the only "bus" of the 6900XT;
    you have to add the Infinity Cache to the calculation too.
    This means it is up to the game engine to use the 128 MB Infinity Cache very smartly.
    The Infinity Cache (along with TSMC's slightly better node) is the ONLY reason the RX 6xxx series is competitive; everything else about the card hardware-wise is weaker. There is nothing magical about Infinity Cache, it's just a cache similar to what you find in AMD's CPU lineup.

    Originally posted by Qaridarium View Post
    And this is the point that makes your generalisation a HOAX.

    You just count games, any game, any Nvidia-optimized game that NEVER utilizes the Infinity Cache in the right way, and in that case the 3090 wins for sure.
    Maybe you should watch the videos and stop talking out of your ass.

    * https://www.youtube.com/watch?v=1ITdex_JrBM
    * https://www.youtube.com/watch?v=nxQ0-QtAtxA

    A massive collection of games was benchmarked, including ones made in collaboration with AMD (e.g. Borderlands)

    Originally posted by Qaridarium View Post
    But I tell you, if the game engine utilizes the Infinity Cache correctly, it turns things in favor of the 6900XT.
    Uh, it's impossible for a game engine to do this because it's completely abstracted away, in the exact same way the L1/L2/L3 caches are abstracted when compiling code for x86-64 CPUs. In both cases the hardware tries to automatically determine what data gets cached using heuristics (note that for CPUs this can be somewhat mitigated with microcode, but that is another discussion entirely).
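
    To make this concrete, here is a toy sketch of my own (nothing AMD- or game-specific, and it assumes a 64 MiB buffer is larger than your last-level cache): the code never names L1/L2/L3 anywhere, it only picks an access pattern, and the hardware's heuristics decide what actually stays cached.

```c
/* Toy example: same number of memory loads for every run, only the stride
 * changes. There is no API call anywhere to say "keep this in L2/L3";
 * the hit rate falls out of the access pattern alone. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BUF_SIZE (64u * 1024u * 1024u)   /* 64 MiB, a power of two */
#define ACCESSES (16u * 1024u * 1024u)   /* identical work per stride */

static double touch(const unsigned char *buf, size_t stride)
{
    clock_t t0 = clock();
    volatile unsigned long sum = 0;
    size_t idx = 0;
    for (size_t i = 0; i < ACCESSES; i++) {
        sum += buf[idx];
        idx = (idx + stride) & (BUF_SIZE - 1);   /* wrap inside the buffer */
    }
    (void)sum;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    unsigned char *buf = malloc(BUF_SIZE);
    if (!buf) return 1;
    for (size_t i = 0; i < BUF_SIZE; i++) buf[i] = (unsigned char)i;

    /* Sequential access reuses cache lines and prefetches well;
     * large strides mostly miss, even though the loop count is the same. */
    printf("stride    1: %.3f s\n", touch(buf, 1));
    printf("stride   64: %.3f s\n", touch(buf, 64));
    printf("stride 4096: %.3f s\n", touch(buf, 4096));

    free(buf);
    return 0;
}
```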

    Especially with game engines: neither the Vulkan, OpenGL nor DirectX APIs give any control over the Infinity Cache on AMD's GPUs; it's completely a black box. Considering that AMD's is the only GPU with such a cache, it's also highly unlikely that the APIs will be adjusted to take it into account in the future.
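
    And to show what I mean by "black box", here is a minimal sketch (my own illustration; it assumes a working Vulkan loader/SDK and linking with -lvulkan) that just prints the memory types Vulkan exposes. The only knobs an application gets are coarse flags like DEVICE_LOCAL, HOST_VISIBLE or HOST_CACHED; there is no flag or allocation parameter in core Vulkan, and as far as I know no extension either, for steering data into Infinity Cache.

```c
/* Minimal sketch: list the memory types a Vulkan driver exposes.
 * Build: cc vk_mem_types.c -lvulkan (assumes a working Vulkan loader). */
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void)
{
    VkApplicationInfo app = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .apiVersion = VK_API_VERSION_1_1,
    };
    VkInstanceCreateInfo ici = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo = &app,
    };
    VkInstance instance;
    if (vkCreateInstance(&ici, NULL, &instance) != VK_SUCCESS)
        return 1;

    /* Grab the first physical device (good enough for a demo). */
    uint32_t count = 1;
    VkPhysicalDevice gpu;
    if (vkEnumeratePhysicalDevices(instance, &count, &gpu) < 0 || count == 0)
        return 1;

    VkPhysicalDeviceMemoryProperties mem;
    vkGetPhysicalDeviceMemoryProperties(gpu, &mem);

    /* Everything an app can choose from is in these coarse property flags;
     * nothing here addresses a specific on-die cache such as Infinity Cache. */
    for (uint32_t i = 0; i < mem.memoryTypeCount; i++) {
        VkMemoryPropertyFlags f = mem.memoryTypes[i].propertyFlags;
        printf("type %2u (heap %u): device_local=%d host_visible=%d host_cached=%d\n",
               i, mem.memoryTypes[i].heapIndex,
               !!(f & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT),
               !!(f & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT),
               !!(f & VK_MEMORY_PROPERTY_HOST_CACHED_BIT));
    }

    vkDestroyInstance(instance, NULL);
    return 0;
}
```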
    Last edited by mdedetrich; 10 December 2020, 06:52 PM.



  • qarium
    replied
    Originally posted by mdedetrich View Post
    I was talking about raw rasterization (i.e. no raytracing)....
    The 3090 does generally beat the 6900XT, especially at 4K, which is probably the reason why you are getting such an expensive card in the first place. This also makes sense since it is better hardware (it has faster memory, more memory and a wider bus).
    You should have a look at the reviews and stop talking out of your ass
    Well, the Infinity Cache is much faster memory than GDDR6X...
    Yes, the 6900XT has only a 256-bit GDDR6 bus, but that is not the only "bus" of the 6900XT;
    you have to add the Infinity Cache to the calculation too.
    This means it is up to the game engine to use the 128 MB Infinity Cache very smartly.

    And this is the point that makes your generalisation a HOAX.

    You just count games, any game, any Nvidia-optimized game that NEVER utilizes the Infinity Cache in the right way, and in that case the 3090 wins for sure.

    But I tell you, if the game engine utilizes the Infinity Cache correctly, it turns things in favor of the 6900XT.



  • mdedetrich
    replied
    Originally posted by Qaridarium View Post

    I am not cherry picking; I just make the distinction of whether a game uses the Nvidia or the AMD implementation of a technology.
    I was talking about raw rasterization (i.e. no raytracing).... They are directly comparable here.

    The 3090 does generally beat the 6900XT, especially at 4K, which is probably the reason why you are getting such an expensive card in the first place. This also makes sense since it is better hardware (it has faster memory, more memory and a wider bus).

    You should have a look at the reviews and stop talking out of your ass.

    I don't know what your point here is anyway; both cards are terrible value for money, so you are pretty stupid if you get either of them, but you are even stupider if you get the 6900XT since it's literally a slightly faster 6800XT. At least with the 3090 you get a lot more memory (24 GB of it, GDDR6X no less), which means you can at least use it for content creation/basic modelling with massive textures.

    Also, rumor has it that Nvidia will release a 3080 Ti, which would kill any meager value proposition the 6900XT has.
    Last edited by mdedetrich; 10 December 2020, 06:19 PM.



  • qarium
    replied
    Originally posted by mdedetrich View Post
    Cherry picking benchmarks isn't doing you any favors. There is already a huge collection of reviews from many reputable sites (e.g. Gamers Nexus/Hardware Unboxed), and the 6900XT beats the 3090 in ~20-30% of the games tested overall (and this is ignoring ray tracing).
    I am not cherry picking; I just make the distinction of whether a game uses the Nvidia or the AMD implementation of a technology. It was the same back in the tessellation days: the Nvidia implementation was broken and needed a GPU supporting 64x tessellation, which doomed all the AMD cards with 16x implementations.
    Back in those tessellation times the AMD implementation ran best even on Nvidia hardware; many notebook GPUs of that era ran well with the AMD implementation but ran worse with Nvidia's own implementation.
    Your point of view is just "INSANE" if you only count games no matter which implementation is used.
    Lots of games with shoddy Nvidia implementations in their engines do not make your point any more true.
    The 6900XT is in fact faster, with and without ray tracing, if the AMD implementation is used in the game engine.

    Also, if you factor in overclocking on the 3090 and the 6900XT, it all turns around for AMD: with a water cooler you can run the 6900XT at 2700 MHz... and it only uses about as much power as a regular 3090...

    Yes, you can count game numbers like a stupid donkey to come to this conclusion: "the 6900XT beats the 3090 in ~20-30% of the games tested overall (and this is ignoring ray tracing)."

    But you can do more research and simply eliminate any game that uses the Nvidia implementation and only count games that use the AMD implementation.

    This means: technically AMD is better, but Nvidia has a more monopoly-like market share.

    But market share is not a technical argument; it is just sabotage of the competition.



  • mdedetrich
    replied
    Originally posted by Qaridarium View Post

    @4K resolution without ray tracing the 6900XT is ~10% faster than the 3090.
    @4K with the Nvidia implementation of ray tracing in the game engine the 3090 is 25% faster, at a 50% higher price.
    @4K with the AMD implementation of ray tracing in the game engine the 6900XT is 10% faster than the 3090.
    Cherry picking benchmarks isn't doing you any favors. There is already a huge collection of reviews from many reputable sites (e.g. Gamers Nexus/Hardware Unboxed), and the 6900XT beats the 3090 in ~20-30% of the games tested overall (and this is ignoring ray tracing).



  • qarium
    replied
    Originally posted by bridgman View Post
    Thanks for the kind words.
    You really, really deserve the kind words.
    People with water coolers run the 6900XT at 2700 MHz; what an insane speed.

    Originally posted by bridgman View Post
    We do have a ProRender plug in for Blender which includes HW ray tracing support - I'm in the process of lining up one of the developers on our side to help get some tests in place that Michael can run for a proper ProRender vs Optix comparison. It's not clear to me right now why everyone tests with Optix but not with ProRender other than the fact NVidia added Optix support to Blender as a new back end while we added it as a plug-in.
    This really sounds strange to me.
    Nvidia is much better at mind control/propaganda than at building a hardware or software implementation.
    It is really sad that mind control works best, like in the old days of a TV/newspaper-only world,
    as if the internet never happened and alternative information were hard to get.

    But that's it: Nvidia is not a technical solution in my point of view; they are MK-Ultra/Operation Mockingbird, nothing more.

    Originally posted by bridgman View Post
    Ditto for HIP/CUDA - a lot of the CUDA tests Michael runs have been running on HIP for a while, so hoping we can get those wrapped up for PTS as well.
    That's really insane. MK-Ultra/Mockingbird...



    It is really like we are in a Chinese prison camp getting completely brainwashed.



  • qarium
    replied
    Originally posted by mdedetrich View Post
    Except that the 3090 is not slower than the 6900XT. The 6900XT wins in some games (mainly at WQHD; however, if you are playing at this resolution the 6900XT is a waste compared to the 6800XT), but overall the 3090 is faster. It also has a lot more VRAM (which runs at faster speed, i.e. GDDR6X) and much better ray tracing.
    Also if you are planning to use the open source drivers and you buy the 3090, I am sorry but you are an idiot. Either use the blob or don't buy the card at all.
    @4K resolution without ray tracing the 6900XT is ~10% faster than the 3090.
    @4K with the Nvidia implementation of ray tracing in the game engine the 3090 is 25% faster, at a 50% higher price.
    @4K with the AMD implementation of ray tracing in the game engine the 6900XT is 10% faster than the 3090.

    And if you OC the 6900XT with a water cooler it runs at 2700 MHz, which gives you another 10%.
    Even overclocked at 2700 MHz the 6900XT consumes the same power as the 3090.

    Looks to me like Nvidia is only faster in the cases where Nvidia sabotages the competition with their broken Nvidia ray-tracing implementation.

    They did the same with their tessellation implementation, they did the same with Nvidia-only OpenGL, they did the same with PhysX, and now they do it with ray tracing again.

    "It also has a lot more VRAM" yes sure 24GB is more than 16GB any kindergarden children can say so
    "(which runs at faster speed, i.e. GDDR6X)" this alone is no point at all.
    "much better ray tracing." well some report this that the Nvidia implemenation looks better on Nvida than on AMD... no wonder why? why not compare the AMD implemenation on Nvidia cards ?

    I tell you something AMD could easily do: replace the 128 MB 12 nm Infinity Cache with a 256 MB Infinity Cache in 7 nm and then put out a 6990XTX with a water cooler at 2700 MHz.

    And Nvidia would be doomed in all benchmarks.
    Last edited by qarium; 10 December 2020, 03:10 PM.



  • bridgman
    replied
    Originally posted by Qaridarium View Post
    For Linux users it is clear that someone must be stupid to buy Intel/Nvidia.

    I only see two rational points left for the Nvidia people: excuse one, "my company forces me to use and program in CUDA"... and excuse two, "I use OptiX in Blender"...

    But I am sure that in the age of Vulkan compute we will soon see a Vulkan-based Blender backend.

    So good job over the last 13 years, Bridgman/AMD.
    Thanks for the kind words. We do have a ProRender plug in for Blender which includes HW ray tracing support - I'm in the process of lining up one of the developers on our side to help get some tests in place that Michael can run for a proper ProRender vs Optix comparison. It's not clear to me right now why everyone tests with Optix but not with ProRender other than the fact NVidia added Optix support to Blender as a new back end while we added it as a plug-in.

    Ditto for HIP/CUDA - a lot of the CUDA tests Michael runs have been running on HIP for a while, so hoping we can get those wrapped up for PTS as well.



  • mdedetrich
    replied
    Originally posted by pal666 View Post
    Not everyone is a novideo slave like you; normal people will not buy a slower novideo card for 1.5x the price (and normal people will ignore your amount-of-VRAM nonsense).
    Except that the 3090 is not slower than the 6900XT. The 6900XT wins in some games (mainly at WQHD; however, if you are playing at this resolution the 6900XT is a waste compared to the 6800XT), but overall the 3090 is faster. It also has a lot more VRAM (which runs at faster speed, i.e. GDDR6X) and much better ray tracing.

    Also if you are planning to use the open source drivers and you buy the 3090, I am sorry but you are an idiot. Either use the blob or don't buy the card at all.
