30-way Intel/AMD/NVIDIA Linux 2D Performance Comparison


  • jakubo
    replied
    I'd really like to see whether SNA incurs much more overhead for being this big.
    Maybe one could add CPU workload (and frequency) measurements; I'd really like to know what they bought their 2D performance with, and whether it's really worth it. OK, I think there aren't many cases where all CPU cores are fully utilized, but I could imagine that with multiple executions of these tests in parallel, things might change due to CPU limitations, wouldn't they? That is, IF things are not CPU limited now; if they are, it would mean that SNA uses some kind of cunning preparation and branching of paths, if my understanding is right.

  • ickle
    replied
    Originally posted by schmidtbag View Post
    Considering how AMD and Intel were roughly performing the same on average, and considering how pretty much all AMD GPUs performed the same, I don't think CPU is a bottleneck. When it comes to 2D performance, I feel like once you breach a certain point in terms of total calculation performance, the only way to get any faster is by tweaking drivers or reducing latency. Since Intel's IGP is on the same silicon as the northbridge, I figure that would have an IMMENSE latency drop, hence Intel overall performing slightly better. Also, SDRAM likely has considerably lower latencies than VRAM in a discrete GPU, which in itself would be another reason for Intel getting a lead.

    It seems to me we've pretty much reached the limits of 2D performance, which is nice.
    Sorry, but the reason is that the benchmarks are CPU limited: they are limited by the benchmark saturating a single CPU core. At least the Intel driver is... In other words, these benchmarks are at their limit for determining the differences between drivers and GPUs and are not that representative of real 2D workloads.

  • schmidtbag
    replied
    Originally posted by TAXI View Post
    It's interesting to see different cards from the same vendor performing almost identical. My guess is that's because the operations are CPU limited. It would be nice to see if glamor changes that, especially as the one card tested performs almost identical to EXA.
    Considering how AMD and Intel were roughly performing the same on average, and considering how pretty much all AMD GPUs performed the same, I don't think CPU is a bottleneck. When it comes to 2D performance, I feel like once you breach a certain point in terms of total calculation performance, the only way to get any faster is by tweaking drivers or reducing latency. Since Intel's IGP is on the same silicon as the northbridge, I figure that would have an IMMENSE latency drop, hence Intel overall performing slightly better. Also, SDRAM likely has considerably lower latencies than VRAM in a discrete GPU, which in itself would be another reason for Intel getting a lead.

    It seems to me we've pretty much reached the limits of 2D performance, which is nice.

  • dungeon
    replied
    I thought nouveau was somehow on par with the Radeons (non-SI) in the 2D tests because both use EXA, but I was wrong... it seems nouveau again suffers because of reclocking.

  • Danny3
    replied
    Congrats, Intel: a better driver is more important than better hardware.

  • zanny
    replied
    Originally posted by mmstick View Post
    On QGears2, why is XRender used instead of OpenGL?
    It was a 2D comparison. OpenGL isn't quite 2D acceleration.

  • mmstick
    replied
    On QGears2, why is XRender used instead of OpenGL?

  • zanny
    replied
    These results just show what a great idea GLAMOR is. With a full-time developer you only get around 10% performance gains at best over the lowest-end AMD card. And there is essentially no performance difference between EXA and GLAMOR on AMD hardware (barely 5%, as the poster above shows).

    And we won't even be using 2D in a year or two. Once Wayland lands, the whole desktop will be OGL based anyway, and GLAMOR has proven good enough to provide the backwards compatibility we will need, while letting developers work on important hard problems like OGL4 compliance and performance tuning rather than on ancient 2D APIs that have lost their reason for existing.

  • Pontostroy
    replied
    Originally posted by TAXI View Post
    It's interesting to see different cards from the same vendor performing almost identical. My guess is that's because the operations are CPU limited. It would be nice to see if glamor changes that, especially as the one card tested performs almost identical to EXA.
    http://openbenchmarking.org/result/1...KH-1406076KH06
    Radeon HD 6770, GLAMOR vs. EXA

  • carewolf
    replied
    Originally posted by TAXI View Post
    It's interesting to see different cards from the same vendor performing almost identical. My guess is that's because the operations are CPU limited. It would be nice to see if glamor changes that, especially as the one card tested performs almost identical to EXA.
    If no acceleration is implemented for a given operation, the drivers fall back to the CPU, and the quality of that CPU fallback then differs between drivers.
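The fallback pattern carewolf describes can be sketched abstractly. This is a toy Python sketch, not the actual EXA or driver code; the operation names and per-driver capability sets are hypothetical, chosen only to illustrate the dispatch shape.

```python
# Toy sketch, not actual EXA/driver code: a driver handles the operations
# it accelerates on the GPU and drops everything else to a software path,
# whose quality and speed vary from driver to driver.
class Driver:
    def __init__(self, name, accelerated_ops):
        self.name = name
        self.accelerated = set(accelerated_ops)

    def render(self, op):
        if op in self.accelerated:
            return f"{op}: GPU"
        # Quality of this software path differs between drivers.
        return f"{op}: CPU fallback"


# Hypothetical capability sets, for illustration only.
full_driver = Driver("driver-a", {"copy", "solid-fill", "composite"})
sparse_driver = Driver("driver-b", {"copy", "solid-fill"})

for drv in (full_driver, sparse_driver):
    for op in ("copy", "composite"):
        print(drv.name, drv.render(op))
```

This is why two GPUs can score identically on an operation neither driver accelerates: both are really benchmarking their CPU fallback paths.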
