Radeon HyperZ R300g Performance

Written by Michael Larabel in Display Drivers on 4 December 2012 at 04:52 PM EST.

With Marek Olšák having fixed up the R300 Gallium3D HyperZ support and then enabled this performance-boosting technology by default for the vintage Radeon X1000 (R500) series graphics cards, new benchmarks were conducted to look at the performance impact of ATI HyperZ finally being flipped on in this legacy ATI Linux graphics driver.

HyperZ is the ATI/AMD technology, dating back to the R100 GPU days, for boosting GPU performance and efficiency. HyperZ consists of Z compression for minimizing Z-Buffer bandwidth, fast Z clear, and a hierarchical Z-Buffer. Implementing HyperZ in the open-source ATI/AMD Linux graphics drivers hasn't been a problem just for the old Radeon hardware; it is still a big problem for newer GPUs on the R600 Gallium3D driver.

As of the Mesa Git activity this week, the R500 (Radeon X1000) GPUs have HyperZ support enabled by default, while for the R300/R400 series it's still undergoing additional testing before the feature is enabled by default. For the pre-R500 hardware, the support can easily be enabled at this time by setting the RADEON_HYPERZ environment variable, as shown below. During all of the benchmarking, swap buffers wait was disabled, and the Phoronix Test Suite, as always, automatically disabled vblank_mode.
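For those wanting to experiment on pre-R500 hardware, here is a minimal sketch of how these toggles are typically applied; RADEON_HYPERZ=1 and glxgears are illustrative assumptions rather than the exact commands used for this testing, while vblank_mode is the standard Mesa environment variable and SwapbuffersWait is the corresponding xf86-video-ati option:

  # Enable r300g HyperZ for one OpenGL application and disable vsync for that run
  # (glxgears is just a stand-in; substitute the game/benchmark of choice):
  RADEON_HYPERZ=1 vblank_mode=0 glxgears

  # Swap buffers wait is controlled from the DDX, in the Device section of xorg.conf:
  Option "SwapbuffersWait" "off"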

Radeon HyperZ R300g Benchmarks

In this article are benchmarks of the ATI Radeon X1800XL and ATI Radeon X1800XT graphics cards running the Git Radeon stack (not just Mesa 9.1-devel as of this week but also the latest xf86-video-ati DDX, the Linux 3.7 kernel, and libdrm) across a selection of OpenGL benchmarks with and without HyperZ support enabled for these two R500 GPUs. HyperZ won't benefit all OpenGL workloads on Linux, only games and applications that use much video memory bandwidth and a Z-buffer.

