The problem can be solved by setting shaders=false in the vdrift test script.
Intel HD 4000 Ivy Bridge Graphics On Linux
@Kivada
Implying S3TC is a performance enhancement. If it's even used, it's often used to double the texture resolution, using the same VRAM with no speedup.
@Intel team
Congrats, especially on Nexuiz and Xonotic.
@AMD
Three weeks to Trinity.
-
AMD vs Intel
I think AMD has faster graphics.
But I think Intel chips are more energy efficient (hence cooler, and more silent) and have open source device drivers.
I'll go with energy-efficient, cool, silent, open source any day over slightly faster graphics.
-
Originally posted by Kivada:
If I'm not mistaken, S3TC is used by pretty much all games these days, and enabling it actually does improve performance by reducing the memory bandwidth required to load the texture. The game engine is going to try to load the same number of textures whether S3TC is there or not, but the GPU is going to choke on the extra bandwidth needed without it, which is why S3TC is still used to this day.
Example:
2048x2048 S3TC texture: 4 MB VRAM used
1024x1024 uncompressed texture: 4 MB VRAM used
This works because S3TC is lossy, and visually the improvement from the sharper texture outweighs the degradation from the compression. The other reason is that the VRAM overhead of the larger uncompressed textures may be too much.
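For reference, the arithmetic behind those equal VRAM figures can be checked quickly. This sketch assumes DXT5 (8 bytes per 4x4 block, i.e. 1 byte per pixel) versus uncompressed 32-bit RGBA at 4 bytes per pixel; the specific S3TC variant is my assumption, since DXT1 would halve the compressed figure:

```python
# Sanity-check the 4 MB-vs-4 MB example above.
# Assumption: DXT5 at 1 byte/pixel vs RGBA8888 at 4 bytes/pixel.
def texture_bytes(width, height, bytes_per_pixel):
    """VRAM footprint of a single mip level, ignoring padding."""
    return width * height * bytes_per_pixel

MB = 1024 * 1024

assert texture_bytes(2048, 2048, 1) == 4 * MB  # 2048x2048 DXT5
assert texture_bytes(1024, 1024, 4) == 4 * MB  # 1024x1024 uncompressed RGBA
```

So the compressed texture offers four times the resolution in the same 4 MB, which is the trade-off being described.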
-
Originally posted by uid313:
I'll go with energy-efficient, cool, silent, open source any day over slightly faster graphics.
A can of Intel open-source whoop-ass is waiting for your AMD APU when it enters legacy (non)support. After that, it's a world of hurt with xf86-video-ati. I don't care how much faster AMD's hardware is if it gets whooped by Intel once legacy status hits and the open-source driver is your only choice.
Intel, hats (and bucks) off to you.
-
I'm hoping that Michael does a test soon of Llano with /sys/class/drm/card0/device/power_method=profile and power_profile=high, along with the new LLVM r600 backend... and then with 2d tiling, and PCIe 2 support enabled.
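For anyone wanting to try those settings themselves, here's a minimal sketch of writing the sysfs knobs named above. It assumes the profile-based radeon power management interface and a GPU at card0, and writing these nodes requires root:

```python
# Sketch: force the "high" power profile on a radeon GPU via the
# sysfs nodes mentioned above (profile-based power management).
# Assumes the GPU is card0; requires root to write.
import os

RADEON_DEVICE = "/sys/class/drm/card0/device"

def set_power_profile(profile="high", device=RADEON_DEVICE):
    """Switch to profile-based power management, then select a profile."""
    with open(os.path.join(device, "power_method"), "w") as f:
        f.write("profile")
    with open(os.path.join(device, "power_profile"), "w") as f:
        f.write(profile)

if __name__ == "__main__":
    set_power_profile("high")
```

Besides "high", this interface also accepts profiles such as "default", "low", "mid", and "auto".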
There's a lot of optional features which are currently disabled by default in r600g, some of which have major performance implications. I'm especially interested in seeing if the VLIW packetizer in the LLVM back-end helps performance. I've already done piglit runs of the LLVM and TGSI back-ends (both glsl 1.2/1.3), but I haven't done a PTS gaming run with them yet. Unfortunately, my weekend was too short to finish that.
Aside:
I've also started the beginnings of a radeon performance profile-setting GUI as a way to teach myself GTK. I'll be poking around at this one in my spare time over the next few weeks, and hopefully by the end I'll have something to show for it. Currently targeting only radeons (r100+), but if I can find the right sysfs nodes for Nouveau/Intel/others (PTS can probably show me the way here), there's no reason I couldn't handle them all.
Current features targeted: change GPU/memory clocks/profiles, report temperatures/frequencies. Eventually, maybe add support for setting fan profiles/speeds when applicable. I'll leave DPMS to KDE/Gnome/etc. X.org feature settings (2D tiling, etc) will probably be left out for now, but might be added in the future.
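The reporting half of such a GUI mostly boils down to reading sysfs nodes and tolerating their absence on other hardware. A sketch under those assumptions (the power_method/power_profile nodes are from the radeon interface discussed above; the hwmon temperature path is my assumption about where the sensor usually lands, not something from the actual tool):

```python
# Sketch: read radeon power/temperature state from sysfs, returning
# None for nodes that don't exist (e.g. non-radeon hardware).
# Paths are assumptions based on the standard radeon sysfs layout.
import glob
import os

def read_node(path):
    """Return the stripped contents of a sysfs node, or None if absent."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None

def gpu_status(device="/sys/class/drm/card0/device"):
    status = {
        "power_method": read_node(os.path.join(device, "power_method")),
        "power_profile": read_node(os.path.join(device, "power_profile")),
    }
    # Temperature is typically exposed through hwmon in millidegrees C.
    for temp in glob.glob(os.path.join(device, "hwmon/hwmon*/temp1_input")):
        raw = read_node(temp)
        if raw is not None:
            status["temp_c"] = int(raw) / 1000.0
    return status
```

Keeping every read behind a missing-node check is what would let the same code later cover Nouveau/Intel once their node paths are known.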