Radeon HD 7950 vs. GeForce GTX 680 On Linux
I also think that NVidia compute apps generally run a lot faster in CUDA mode than in OpenCL mode.
I'm not sure whether the app developers have just spent more time optimizing for CUDA, or whether NVidia simply hasn't put much work into its OpenCL stack yet. Either way, I don't think you can just say that the AMD 79xx > NVidia at compute.
-
-
Originally posted by Luke_Wolf View Post
Uh no... I don't know where you got that idea. Other than in a few games that are well known to be Nvidia-focused, Radeon 79xx hardware was actually on par with or better than its Nvidia equivalents, though a wash in general in terms of gaming performance. What is true is that the 79xx architecture strongly beat both Kepler and Fermi in OpenCL work, but that's about it for either side holding the crown this generation.
So we end up with this:
Windows:
Graphics : Nvidia ~ AMD
Compute: Nvidia < AMD
Linux:
Graphics: Nvidia ~ AMD
Compute: Nvidia < AMD
2D: Nvidia > AMD
They tested a large number of games, some of which were well known to do well on Nvidia hardware, and some known to do well on AMD hardware. The 680 was roughly 0-50% faster than the 7950 (the BF3 numbers are ridiculous), and often outdid the 7970.
Under Windows, the 680 is clearly a vastly better card in all respects. Michael's benchmarks suggest that, under linux, the 7950 outdoes the 680.
What to take from this? I don't know. AMD has better linux drivers? Nvidia's turbo mode not functioning under linux, as suggested by chithanh?
Last edited by baffledmollusc; 04 June 2012, 08:45 PM.
-
Well, shared memory can also improve speed, however. For games like Rage that load lots of textures on the fly it is not that slow, but when all textures are preloaded at level start you certainly don't see that effect.
-
Originally posted by nightmarex View Post
Yeah, maybe we don't need that kind of power on Linux... or Windows... I for one think it's great that we can still get the info. BTW, am I the only one who can't wait for AMD to stuff a 7950 into an APU?
A 7950-class GPU in an APU runs into several problems:
1. Large die size, which generates a lot of heat and requires advanced cooling, and which can't share a die with an affordable CPU.
2. Memory bandwidth: GDDR5 at high VRAM clock rates can't be matched by much slower DDR3 system memory, even overclocked into the 2000+ MHz range. Quad-channel memory a la Sandy Bridge-EP could probably help, but I don't think AMD does quad channel on any of its CPUs yet. The dedicated nature of VRAM also keeps noisy CPU work, like AI processing in games, from tying up the memory controller -- a discrete card isn't impacted by that, but an integrated chip is.
3. Power consumption, for which most existing CPU sockets are very limited (and which is also capped by what the motherboard and PSU are designed to support). The TDP of "high end" processors is still under 200W, while I've seen certain GPUs consume close to 500W under load! A typical heavy load is about 250W -- just for the GPU, mind you -- and the APU also has to cover CPU functionality at the same time, with a MUCH smaller power budget.
4. Board size: the supporting board around the GPU on a discrete card implements a lot of functions that don't have to sit on the GPU die itself, freeing up die area for things that need extremely high bandwidth, like the shader cores. The board (the long piece that takes up 75% of the length of the card) holds things like voltage regulators, memory modules and display connectors for DisplayPort and so on. An integrated chip has to either put this stuff on an already-crowded motherboard, or try to cram it into the CPU/APU and still deliver reasonable general-purpose computing performance.
I would probably be disappointed in whatever they ultimately offer for GCN in an APU, because these factors hold back its top-end performance. You may not be able to use all that power in HoN, but there are applications (current and upcoming) that can use it, and it's needed. Not to mention, as wine becomes faster and supports more apps, you can start to run high-end Windows games through it, which adds a performance penalty, so it's nice to have a high-end card to power through that.
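To put the bandwidth gap in point 2 in rough numbers, here is a back-of-the-envelope sketch in Python. The 5.0 GT/s effective rate on a 384-bit bus matches the HD 7950 reference specs; the dual-channel DDR3-2133 setup is just an assumed high-end desktop config for comparison:

```python
def bandwidth_gb_s(transfer_rate_mt_s, bus_width_bits):
    """Theoretical peak bandwidth: (transfers per second) * (bytes per transfer)."""
    return transfer_rate_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

# Dual-channel DDR3-2133 system memory: two 64-bit channels = 128 bits wide
ddr3 = bandwidth_gb_s(2133, 128)

# HD 7950 reference GDDR5: 1250 MHz clock, 5.0 GT/s effective, 384-bit bus
gddr5 = bandwidth_gb_s(5000, 384)

print(f"DDR3-2133 dual channel: {ddr3:.1f} GB/s")   # ~34.1 GB/s
print(f"HD 7950 GDDR5:          {gddr5:.1f} GB/s")  # 240.0 GB/s
```

Roughly a 7x gap, which is why a GDDR5-class GPU fed from DDR3 system memory would be starved no matter how many shader cores it carried.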
-
Certainly wish I could OC my nVidia card. Great benchmarks, surprised ATI is doing so well these days!
As for whether games need this much power: loads of people actually use CX/Wine, so the need is there.
-
Originally posted by johnc View Post
Interesting.
I must have read something different when the 680 came out.
-
That summary is incorrect: it doesn't consider Wine. Try playing Rage with fglrx and then with the NVIDIA driver and you will see the difference. In most cases NVIDIA works best for Wine, and fglrx often fails.