Originally posted by mdedetrich
Also, your own sources say a 3090 is slower than a 6900 XT only in rare cases (cherry-picking, as you said).
That means that at 4K, according to your YouTube video sources, the 3090 is mostly only about 12% faster.
So you pay a 50% higher price to get 12% more performance.
Not a good deal!
As you said, there are two ways AMD could improve the situation. One is profiling games and shipping new microcode: "determine what data gets cached by using heuristics (note for CPU's this can be somewhat mitigated with microcode but that is another discussion completely)."
The other, as you said, is: "neither Vulkan, OpenGL or DirectX API's give any control of the InfinityCache on AMD's GPU, its completely a black box. Considering that AMD's is the only GPU with a cache, its also highly unlikely that the API's would be adjusted to take this into account in the future."
AMD could push a change to the Vulkan API to give control over the Infinity Cache. You say that is highly unlikely because it is the only such hardware around today, but in fact this architecture is here to stay: any future AMD hardware will have it, and I am sure future Nvidia products will have it too. Because of this, it is highly likely that we will see optimized game engines and even new Vulkan API extensions to support exactly this.
Also, AMD could easily produce a 6990 XTX by increasing the Infinity Cache to 256 MB on 7 nm, water-cooling it, and running the same chip at 2700 MHz. Your YouTube source says the 6900 XT is already faster at 2.7 GHz; with a bigger 256 MB Infinity Cache, the 3090 would lose every benchmark.
But even without that, the 3090 is a bad deal: you pay a 50% higher price for 12% higher performance.
Even in the best-case scenario, with Nvidia's ray-tracing implementation, you get 25% higher performance at a 50% higher price.
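The price/performance claim above can be checked with quick arithmetic. A minimal sketch, assuming the launch MSRPs ($999 for the 6900 XT, $1,499 for the RTX 3090, which is where the "50% higher price" comes from) and using the 12% / 25% figures quoted from the videos, not benchmarks of my own:

```python
# Assumed launch MSRPs in USD (1499 / 999 ~= 1.50, i.e. ~50% more).
price_6900xt = 999
price_3090 = 1499

# Relative performance, 6900 XT normalized to 1.0;
# percentages are the figures quoted in this thread.
perf_3090_raster = 1.12  # "mostly only 12% faster" at 4K
perf_3090_rt = 1.25      # best case with ray tracing

def perf_per_dollar(perf: float, price: float) -> float:
    """Performance per dollar, in arbitrary units."""
    return perf / price

base = perf_per_dollar(1.0, price_6900xt)
raster = perf_per_dollar(perf_3090_raster, price_3090)
rt = perf_per_dollar(perf_3090_rt, price_3090)

print(f"Raster: the 3090 gives {raster / base:.0%} of the 6900 XT's perf/$")
print(f"Ray tracing (best case): {rt / base:.0%}")
```

In other words, even in Nvidia's best case the 3090 delivers noticeably less performance per dollar than the 6900 XT under these assumptions.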
Also, on Linux you have open-source drivers with excellent Wayland support on the AMD side.
As a Linux user, I would never buy an Nvidia card.