
Thread: Radeon HD 7950 vs. GeForce GTX 680 On Linux

  1. #21
    Join Date
    Jan 2008
    Posts
    206

    Default Impressive!

    Impressive how well the Radeon 7950 performs (considering it's not even the 7970!) in OpenGL workloads.
    I wish it would do a bit better in 2D workloads, however...

  2. #22
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,381

    Default

    Quote Originally Posted by Qaridarium View Post
    and then in 2013 bridgman will claim this as his "victory" in improving the opensource support but his moves are very trivial a hamster would do the same for food.
    Did someone say "food"?


  3. #23
    Join Date
    Jun 2011
    Posts
    787

    Default

    Quote Originally Posted by johnc View Post
    This one's a bit of a head-scratcher.

    The benchmarks done on other sites seemed to indicate that the 680 clearly performed better.
    Uh, no... I don't know where you got that idea. Other than in a few games that are well known to be Nvidia-focused, Radeon 79xx hardware was actually on par with or better than its Nvidia equivalents, and a wash in general in terms of gaming performance. What is true is that the 79xx architecture strongly beat both Kepler and Fermi in OpenCL work, but that's about it for either side having the crown this generation.

    So we end up with this:

    Windows:
    Graphics: Nvidia ~ AMD
    Compute: Nvidia < AMD

    Linux:
    Graphics: Nvidia ~ AMD
    Compute: Nvidia < AMD
    2D: Nvidia > AMD

  4. #24
    Join Date
    Aug 2007
    Posts
    6,607

    Default

    That summary is incorrect; it does not consider Wine. Try to play Rage using fglrx and with Nvidia and you will see the difference. In most cases Nvidia works better for Wine, and fglrx often fails.

  5. #25
    Join Date
    May 2011
    Posts
    1,426

    Default

    Quote Originally Posted by Luke_Wolf View Post
    Uh No.. I don't know where you got that idea.
    Interesting.

    I must have read something different when the 680 came out.

  6. #26
    Join Date
    Jun 2011
    Posts
    787

    Default

    Quote Originally Posted by johnc View Post
    Interesting.

    I must have read something different when the 680 came out.
    Well, to be fair, a lot of review sites were focusing on Nvidia-optimized workloads, which of course meant that Nvidia would come out on top for them. But what we see here is actually pretty similar to the big picture the reviews themselves (not the reviewers) were painting.

  7. #27
    Join Date
    Dec 2007
    Posts
    677

    Default

    Certainly wish I could OC my nVidia card. Great benchmarks; I'm surprised ATI is doing so well these days!

    As for the need for games, loads of people actually use CX/Wine, so the need is there.

  8. #28
    Join Date
    Sep 2008
    Posts
    989

    Default

    Quote Originally Posted by nightmarex View Post
    Yeah, maybe we don't need that kind of power on Linux... or Windows... I for one think it's great that we can still get the info. BTW, am I the only one who can't wait for AMD to stuff a 7950 into an APU?
    Shrinking down GCN to fit into an APU will inevitably kill most of the performance. A lot of what makes discrete cards blazingly fast is their:

    1. Large die size, which generates heat and requires advanced cooling -- something that can't be done alongside an affordable CPU on the same die.

    2. Memory bandwidth (GDDR5 at high VRAM clock rates), which can't be matched by much slower DDR3 system memory, even overclocked into the 2000+ MHz range. Quad-channel memory a la Sandy Bridge-EP could probably help, but I don't think AMD is doing quad channel on any of its CPUs yet. The dedicated nature of VRAM also keeps noisy CPU operations, like AI processing in games, from tying up the memory controller -- a dedicated card isn't impacted by that, but an integrated chip is.

    3. Power consumption, where most existing CPU sockets are very limited (and which is also bounded by what the motherboard and PSU are designed to support). The TDP of "high end" processors is still under 200W, while I've seen certain GPUs consume close to 500W under load! A typical heavy load is about 250W -- just for the GPU, mind you -- and an APU also has to cover CPU functionality at the same time, within a MUCH smaller power budget.

    4. Board size; the supporting board around the GPU on a discrete card can implement a lot of functions that don't have to sit on the GPU die itself, freeing up room in the die for things that require extremely high bandwidth, like the shader cores. The board (the long piece that takes up 75% of the length of the card) carries things like voltage regulators, memory modules, and display connectors for DisplayPort, etc. An integrated chip has to either put this stuff on an already-crowded motherboard, or try to cram it into the CPU/APU and still deliver reasonable general-purpose computing performance.

    I would probably be disappointed in whatever they ultimately offer up for GCN in an APU, because these factors hold back its top-end performance. You may not be able to use all that power in HoN, but there are applications (current and upcoming) which can use it, and it's needed. Not to mention, as Wine becomes faster and supports more apps, you can start to run high-end Windows games under it, which adds a performance penalty, so it's nice to have a high-end card to power through it.
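    Point 2 above is easy to sanity-check with back-of-the-envelope arithmetic. Theoretical peak bandwidth is just transfers per second times bus width; the figures below are the commonly published specs for the HD 7950 (5.0 GT/s effective GDDR5 on a 384-bit bus) and dual-channel DDR3-2133, so treat them as approximate:

    ```python
    # Theoretical peak memory bandwidth: transfers/s * bus width in bytes.
    def bandwidth_gb_s(transfers_per_s, bus_width_bits):
        return transfers_per_s * (bus_width_bits / 8) / 1e9

    # HD 7950: GDDR5 at 5.0 GT/s effective, 384-bit bus -> 240 GB/s
    gddr5 = bandwidth_gb_s(5.0e9, 384)

    # Overclocked DDR3-2133, dual channel (2 x 64-bit) -> about 34 GB/s
    ddr3 = bandwidth_gb_s(2.133e9, 128)

    print(f"GDDR5: {gddr5:.0f} GB/s, DDR3: {ddr3:.1f} GB/s, ratio: {gddr5 / ddr3:.1f}x")
    ```

    Roughly a 7x gap, which is why an APU fed from system memory can't keep a full GCN part busy no matter how many shader cores you cram in.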

  9. #29
    Join Date
    Aug 2007
    Posts
    6,607

    Default

    Well, shared memory can also improve speed. For games like Rage, with lots of textures loaded on the fly, it is not that slow. But certainly when all textures are preloaded at level start, you don't see that effect.

  10. #30

    Default

    Quote Originally Posted by Luke_Wolf View Post
    Uh, no... I don't know where you got that idea. Other than in a few games that are well known to be Nvidia-focused, Radeon 79xx hardware was actually on par with or better than its Nvidia equivalents, and a wash in general in terms of gaming performance. What is true is that the 79xx architecture strongly beat both Kepler and Fermi in OpenCL work, but that's about it for either side having the crown this generation.

    So we end up with this:

    Windows:
    Graphics: Nvidia ~ AMD
    Compute: Nvidia < AMD

    Linux:
    Graphics: Nvidia ~ AMD
    Compute: Nvidia < AMD
    2D: Nvidia > AMD
    Not sure this is correct - one of the most comprehensive Windows reviews I saw was Anand's: http://www.anandtech.com/show/5699/n...x-680-review/1

    They tested a large number of games, some of which were well known to do well with Nvidia hardware, and some that were known to do well with AMD hardware. The 680 was (roughly) 0%-50% faster than the 7950 (the BF3 numbers are ridiculous), and often outdid the 7970.

    Under Windows, the 680 is clearly a vastly better card in all respects. Michael's benchmarks suggest that, under Linux, the 7950 outdoes the 680.

    What to take from this? I don't know. Does AMD have better Linux drivers? Is Nvidia's turbo mode not functioning under Linux, as suggested by chithanh?
    Last edited by baffledmollusc; 06-04-2012 at 08:45 PM.
