
Ubuntu 12.10: Open-Source Radeon vs. AMD Catalyst Performance


  • Ubuntu 12.10: Open-Source Radeon vs. AMD Catalyst Performance

    Phoronix: Ubuntu 12.10: Open-Source Radeon vs. AMD Catalyst Performance

    With Ubuntu trying to improve its OpenGL driver support to push the Linux OS as a platform for gaming, Valve set to promote the closed-source NVIDIA and AMD drivers on Linux, and various other challenges still turning up for those trying to use the different Linux OpenGL drivers, here are some new benchmarks comparing the open-source Radeon Gallium3D driver against the closed-source AMD Catalyst driver.

    http://www.phoronix.com/vr.php?view=18088

  • smitty3268
    replied
    Originally posted by marek View Post
    I am going to say something you will not like....

    If we had S3TC support by default, we wouldn't be getting such low framerates, because S3TC uses 1/4 to 1/8 of the memory needed for ordinary RGB8/RGBA8 textures. All textures would fit in VRAM and we would get full speed. While we may try to fight back and implement better heuristics, so that more textures can be in VRAM and not in RAM, we will never be able to get close to the performance of Catalyst. 1/8 of the memory used also means that only 1/8 of the bandwidth is needed.

    So if you want the open driver to be more comparable to Catalyst, get S3TC support.

    There seem to be exceptions though. Reaction Quake prints this: "...ignoring GL_EXT_texture_compression_s3tc". Anybody know why?
    Why don't you do what Intel does, and provide S3TC support by default then?

    They do it without the S3TC lib, and they even handle compression requests by returning a generic compressed texture instead of S3TC.

    Sure, it's not 100% spec compliant, but it works in 99.9% of games, and isn't that what matters?

    And if Ian Romanick is OK with that for Intel, I'm really surprised the other drivers haven't done the same thing.
    Last edited by smitty3268; 11-04-2012, 02:37 AM.
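
    For reference, this is roughly what the game-side decision looks like, and why a hidden extension translates directly into the larger uploads described in the quoted post. A minimal sketch, assuming a legacy GL code path: the extension-string check and the S3TC token are standard GL, while the function and variable names here are only illustrative.
    Code:
    /* Sketch: how an application typically chooses between uploading
     * pre-compressed DXT1 blocks and falling back to uncompressed RGBA8,
     * depending on whether the driver advertises S3TC. */
    #include <string.h>
    #include <GL/gl.h>
    #include <GL/glext.h>

    void upload_texture(GLsizei w, GLsizei h,
                        const void *dxt1_data, GLsizei dxt1_size,
                        const void *rgba_data)
    {
        const char *exts = (const char *) glGetString(GL_EXTENSIONS);
        int have_s3tc = exts && strstr(exts, "GL_EXT_texture_compression_s3tc");

        if (have_s3tc) {
            /* Pass the pre-compressed blocks straight through: 0.5 byte per texel. */
            glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                                   GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                                   w, h, 0, dxt1_size, dxt1_data);
        } else {
            /* Fallback: uncompressed RGBA8 at 4 bytes per texel, i.e. roughly
             * 8x the VRAM and bandwidth of DXT1 - the path games end up on
             * when S3TC is not exposed. */
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, rgba_data);
        }
    }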



  • .CME.
    replied
    Originally posted by marek View Post
    [...]
    There seem to be exceptions though. Reaction Quake prints this: "...ignoring GL_EXT_texture_compression_s3tc" Anybody knows why?
    Yeah, compressed textures are disabled by default in that game.



  • Kano
    replied
    @marek

    Maybe it is similar to the Doom 3/id Tech 4 engine, which uses uncompressed textures in Ultra quality mode. I could really see the compression artefacts in the BFG variant, where the optimized engine uses all textures precompressed. So from a quality point of view uncompressed looks a bit sharper, but of course a normal user would use a lower setting for more speed (often just the default) and get compressed textures anyway.



  • Craig73
    replied
    Originally posted by marek View Post
    If we had S3TC support by default, we wouldn't be getting such low framerates...
    I thought S2TC compression gave good (not great) results, is open, and is backwards compatible with S3TC, no?

    Could you not just implement the S3TC compression interfaces using S2TC by default? This would address the speed concern at a small (perhaps generally unnoticeable) loss in quality.

    With the interfaces themselves not patentable, could pre-compressed S3TC textures just be passed through for the card to handle [for those apps that provide them, thus no loss in quality in those cases]?

    Users who are super concerned about quality and not constrained by patents could turn on the true S3TC library as an alternative (or perhaps the patent would finally be confirmed to be invalid).
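
    For context on why S2TC can stand in for S3TC: both emit the same 8-byte block layout that the hardware decodes, and, as I understand it, an S2TC encoder simply restricts itself to the two endpoint colours and avoids the interpolated ones. A rough sketch of the block, with field names being purely illustrative:
    Code:
    /* One DXT1/S3TC block encodes a 4x4 texel tile in 8 bytes (0.5 byte/texel).
     * An S2TC encoder produces this same layout but, as I understand it, only
     * emits indices that select color0 or color1 directly. */
    #include <stdint.h>

    struct dxt1_block {
        uint16_t color0;   /* RGB565 endpoint 0 */
        uint16_t color1;   /* RGB565 endpoint 1 */
        uint32_t indices;  /* 16 texels x 2 bits: endpoint 0, endpoint 1, or
                              one of the two colours derived from them */
    };
    Since an S3TC-capable GPU just reads these blocks, textures that ship pre-compressed could be passed through untouched regardless of which encoder produced them, which is the no-quality-loss case mentioned above.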



  • curaga
    replied
    Yep, at the cost of quality. And this leads game devs to use 2x the resolution for the S3TC-compressed textures, creating a net zero VRAM change. (512x512 uncompressed to 1024x1024 compressed, for example).

    Not in all cases of course, but this is rather common.



  • marek
    replied
    I am going to say something you will not like....

    If we had S3TC support by default, we wouldn't be getting such low framerates, because S3TC uses 1/4 to 1/8 of the memory needed for ordinary RGB8/RGBA8 textures. All textures would fit in VRAM and we would get full speed. While we may try to fight back and implement better heuristics, so that more textures can be in VRAM and not in RAM, we will never be able to get close to the performance of Catalyst. 1/8 of the memory used also means that only 1/8 of the bandwidth is needed.

    So if you want the open driver to be more comparable to Catalyst, get S3TC support.

    There seem to be exceptions though. Reaction Quake prints this: "...ignoring GL_EXT_texture_compression_s3tc". Anybody know why?
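
    To put numbers on the 1/4 to 1/8 figure, a quick back-of-the-envelope calculation (base mip level only; DXT1 is 0.5 byte per texel, DXT5 is 1 byte, RGBA8 is 4 bytes):
    Code:
    #include <stdio.h>

    int main(void)
    {
        const double texels = 1024.0 * 1024.0;   /* one 1024x1024 texture */
        const double MiB = 1024.0 * 1024.0;

        printf("RGBA8: %.1f MiB\n", texels * 4.0 / MiB);   /* 4.0 MiB      */
        printf("DXT5 : %.1f MiB\n", texels * 1.0 / MiB);   /* 1.0 MiB, 1/4 */
        printf("DXT1 : %.1f MiB\n", texels * 0.5 / MiB);   /* 0.5 MiB, 1/8 */

        /* curaga's trade-off above: a 512x512 RGBA8 texture is 1 MiB, the
         * same as its 1024x1024 DXT5 replacement, so doubling the resolution
         * eats the whole saving; DXT1 at the doubled size is still half. */
        return 0;
    }
    The same ratios apply to the bandwidth spent re-uploading textures that get evicted to system RAM, which is the second half of the argument.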



  • 0xBADCODE
    replied
    IMHO AMD should pay more attention to the open driver. Learn from Intel, dudes.

    I think AMD should learn the lesson from the Intel guys, who did a really good job with their drivers and consider the open driver to be a first-class citizen. A binary blob is doomed to have dozens of problems on Linux and will always be a very unwelcome and troublesome part of the system, so AMD and NVIDIA shouldn't be surprised when they lose market share to Intel in this part of the market as well. As for me, Catalyst has always been causing nasty issues here and there. OTOH the open-source driver works very well for me, but it's slower and lacks some features, most notably OpenCL and recent OpenGL versions.



  • pingufunkybeat
    replied
    Originally posted by elanthis View Post
    Examples of such features recently brought up right on this very site are texture tiling, hierarchical Z-buffering, higher PCI transfer speeds, and power state control.
    Like Alex says, most of this is finished for Radeon cards. Perhaps not turned on by default on most distros, but it's been written.

    And yes, optimizing shaders can bring huge gains, but Quake3 doesn't need much of that, and it's still slower.

    Originally posted by elanthis View Post
    You can also easily measure where the bottleneck lies. If your CPU is pegging 100%, the driver (or the app itself) is the bottleneck. If the GPU is pegging 100%, it's the hardware. Any recent tests I've seen have shown that the CPU usage with the FOSS drivers is not all that terrible (though not great) and yet the FPS is far worse. Clearly, then, the GPU hardware is the bottleneck; in this case, not because the hardware itself is bad, but because it's running in a race with its hands tied behind its back and one leg chopped off.
    I don't know enough about GPU drivers to argue with you about this. It will depend on how often the GPU has to wait for the driver to finish doing its stuff, and I can think of scenarios where this induces delays even when the CPU is far from 100% load; I get this regularly when running OpenCL software. But like I said, I'm not a driver guy, so I have no clue how it is really done.
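
    A crude way to see which side is waiting, along the lines of the "measure where the bottleneck lies" point quoted above: time the CPU-side submission separately from a glFinish() that drains the GPU queue. A minimal sketch; the split is only approximate, since real drivers also buffer work across frames.
    Code:
    #include <stdio.h>
    #include <time.h>
    #include <GL/gl.h>

    static double now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
    }

    /* Call once per frame, wrapping the scene's draw calls.  A large "gpu wait"
     * relative to "cpu+driver" suggests the GPU side is the bottleneck. */
    void profile_frame(void (*draw_scene)(void))
    {
        double t0 = now_ms();
        draw_scene();      /* CPU side: state changes, draw calls, driver work */
        double t1 = now_ms();
        glFinish();        /* block until the GPU has finished everything queued */
        double t2 = now_ms();

        printf("cpu+driver %.2f ms, gpu wait %.2f ms\n", t1 - t0, t2 - t1);
    }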



  • ChrisXY
    replied
    Originally posted by Veerappan View Post
    I think that this has possibly been fixed; it looks like it was due to a KWin bug.

    https://bugs.freedesktop.org/show_bug.cgi?id=55998
    Fair enough, that's not Intel's fault.

    I accidentally clicked on the adblock button in Chromium and got this:
    Code:
    [70796.656970] [drm:i915_hangcheck_hung] *ERROR* Hangcheck timer elapsed... GPU hung

