The problem can be solved by setting shaders=false in the vdrift test script.
Larabel never does corrections...
Originally Posted by log0
Implying S3TC is a performance enhancement. If it's even used, it's often used to double the texture resolution, thus using the same VRAM and no speedup.
Congrats, especially on Nexuiz and Xonotic.
Three weeks to Trinity.
AMD vs Intel
I think AMD has faster graphics.
But I think Intel chips are more energy efficient (hence cooler, and more silent) and have open source device drivers.
I'll go with energy-efficient, cool, silent, open source any day over slightly faster graphics.
If I'm not mistaken, S3TC is used by pretty much all games these days, and enabling it actually does improve performance by reducing the memory bandwidth needed to fetch textures. The game engine is going to try to load the same number of textures whether S3TC is there or not, but without it the GPU is going to choke on the extra bandwidth required, hence why S3TC is still used to this day.
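To put rough numbers on the bandwidth argument above, here is a back-of-the-envelope sketch. It assumes uncompressed textures are RGBA8 (32 bits/texel) and the S3TC format is DXT1 (4 bits/texel), ignoring mipmaps and padding; the function name is just illustrative.

```python
# Back-of-the-envelope S3TC savings: RGBA8 (32 bits/texel) vs DXT1
# (4 bits/texel). Mipmaps and block padding are ignored for simplicity.

def texture_bytes(width, height, bits_per_texel):
    """Storage (and, roughly, fetch bandwidth) for one mip level, in bytes."""
    return width * height * bits_per_texel // 8

uncompressed = texture_bytes(1024, 1024, 32)  # RGBA8
compressed = texture_bytes(1024, 1024, 4)     # DXT1 (S3TC)

print(f"RGBA8: {uncompressed / 2**20:.1f} MiB")  # 4.0 MiB
print(f"DXT1:  {compressed / 2**20:.1f} MiB")    # 0.5 MiB, an 8x saving
```

At the same resolution the compressed texture moves an eighth of the data, which is where the bandwidth win comes from.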
Originally Posted by curaga
Both have fully OSS drivers. If you care about silence it doesn't even matter; just use some really good aftermarket cooling on either and you'll never hear anything. http://silentpcreview.com/ and http://frostytech.com/ are really good resources for parting out good cooling and for building specifically for silence, without letting things get warm or ending up with noises that are normally masked by the sound of the airflow suddenly becoming annoyingly noticeable: high-pitched whines, case vibration resonance, and whatnot.
Originally Posted by uid313
That's true, but only if you compare textures of the same resolution. As I said, the common practice is for the S3TC-compressed texture to be double the resolution.
Originally Posted by Kivada
2048x2048 S3TC texture: 4 MB VRAM used
1024x1024 uncompressed texture: 4 MB VRAM used
This works because S3TC is lossy, and visually the improvement from the sharper texture outweighs the degradation from the compression. The other reason is that the VRAM overhead of the bigger uncompressed textures may be too much.
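For what it's worth, the 4 MB figures above check out if the compressed texture uses an 8 bits/texel S3TC format (DXT3/DXT5); with DXT1 at 4 bits/texel the 2048x2048 texture would only need 2 MB. A quick sanity check of the arithmetic:

```python
# Verify the equal-VRAM claim: a 2048x2048 DXT5 texture (8 bits/texel)
# occupies the same memory as a 1024x1024 uncompressed RGBA8 texture.

def vram_bytes(side, bits_per_texel):
    """VRAM for a square texture of the given side length, in bytes."""
    return side * side * bits_per_texel // 8

compressed_2k = vram_bytes(2048, 8)     # 2048x2048 DXT5
uncompressed_1k = vram_bytes(1024, 32)  # 1024x1024 RGBA8
assert compressed_2k == uncompressed_1k == 4 * 1024 * 1024
```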
Why bother comparing to the radeon Gallium driver if you're going to run it at stock (= low) clocks? Yes, AMD should finally fix dynamic power management in the open drivers and enable it by default, but this comparison is pointless.
I'm hoping that Michael does a test soon of Llano with /sys/class/drm/card0/device/power_method=profile and power_profile=high, along with the new LLVM r600 backend... and then with 2d tiling, and PCIe 2 support enabled.
There's a lot of optional features which are currently disabled by default in r600g, some of which have major performance implications. I'm especially interested in seeing if the VLIW packetizer in the LLVM back-end helps performance. I've already done piglit runs of the LLVM and TGSI back-ends (both glsl 1.2/1.3), but I haven't done a PTS gaming run with them yet. Unfortunately, my weekend was too short to finish that.
I've also started the beginnings of a radeon performance profile-setting GUI as a way to teach myself GTK. I'll be poking around at this one in my spare time over the next few weeks, and hopefully by the end I'll have something to show for it. Currently targeting only radeons (r100+), but if I can find the right sysfs nodes for Nouveau/Intel/others (PTS can probably show me the way here), there's no reason I couldn't handle them all.
Current features targeted: Change CPU/memory clocks/profiles, report temperatures/frequencies. Eventually, maybe add support for setting fan profiles/speeds when applicable. I'll leave DPMS to KDE/Gnome/etc. X.org feature settings (2D tiling, etc) will probably be left out for now, but might be added in the future.
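The reporting side of such a GUI mostly boils down to reading sysfs nodes and tolerating their absence on other drivers. A sketch, assuming the classic radeon layout (a hwmon temp1_input under the card's device directory); node names for Nouveau/Intel will differ, hence the graceful fallback to None:

```python
# Sketch of the sysfs reads a power-profile GUI might do. Node paths
# assume the radeon driver; everything returns None when a node is absent.
import glob


def read_sysfs(path):
    """Return the stripped contents of a sysfs node, or None if unreadable."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None


def gpu_temperature_mdeg(card="card0"):
    """First hwmon temperature for the card, in millidegrees C, if any."""
    pattern = f"/sys/class/drm/{card}/device/hwmon/hwmon*/temp1_input"
    for node in glob.glob(pattern):
        val = read_sysfs(node)
        if val is not None:
            return int(val)
    return None
```

Polling these from a GTK timeout and rendering the values covers the report-temperatures/frequencies feature; the write path is the same idea in reverse.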
Why on earth is everything you want others to test disabled by default?