For those wondering whether the Linux 3.2 kernel will once again boost the graphics performance of Intel Sandy Bridge hardware, here are some results.
Over on OpenBenchmarking.org I uploaded some results last week from an Intel Sandy Bridge system using the latest Mesa 7.12-devel Git and xf86-video-intel DDX, comparing the Intel SNB performance between Linux 3.1 and Linux 3.2.
As you can see from those Intel Sandy Bridge results on OpenBenchmarking.org, there really isn't any performance difference between the current Linux 3.2 kernel Git and Linux 3.1 final. This comes after a nice performance boost in Linux 3.1 and other optimizations that have come to the latest generation of Intel graphics in recent months via the Intel kernel DRM driver, Mesa (now with GLSL 1.30 support, among other performance boosts), and xf86-video-intel (namely SNA acceleration).
The lack of an Intel SNB performance boost under Linux 3.2 isn't much of a surprise given the DRM pull for Linux 3.2, which was relatively boring on the Intel side aside from fixes for the Apple MacBook Air and Red Hat Enterprise Linux.
It's still expected that Intel will enable RC6 power-savings support this cycle, which will conserve more power when idling. At the same time, some interesting results I discovered a few weeks back showed that RC6 can also boost graphics performance.
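For those who want to try RC6 before it's flipped on by default, here is a rough sketch of how it can be forced on with kernels of this vintage. This assumes an i915-driven system with the `i915_enable_rc6` module parameter and the debugfs `i915_drpc_info` file as carried in the drm-intel tree at the time; parameter names have varied between kernel releases, so check your kernel's documentation first.

```shell
# Enable RC6 via an i915 module option (on these kernels it
# defaults to off for Sandy Bridge). Either append
#   i915.i915_enable_rc6=1
# to the kernel command line, or set it through modprobe:
echo "options i915 i915_enable_rc6=1" | sudo tee /etc/modprobe.d/i915-rc6.conf

# After rebooting, check whether the GPU is actually entering RC6
# (requires debugfs mounted at /sys/kernel/debug):
sudo cat /sys/kernel/debug/dri/0/i915_drpc_info
```

Whether RC6 works reliably depends on the particular hardware and kernel; this is exactly the stability question that has kept it disabled by default so far.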
Below is an email exchange I had last week with Eugeni Dodonov, one of Intel's newest OSTC employees, concerning the 3.2 kernel and Intel RC6 support under Linux.
Yes, I actually wrote about it [blog] in the days since we talked, and also discussed it with Jesse.
From what we have discussed, such improvements could come from several different paths.
The first explanation is that, with RC6 enabled, the graphics card can actually consume much less power (down to 0V), so it leaves more room for the CPU to use the unclaimed power to do more processing of its own.
And the second one is that, thanks to the additional thermal headroom which we get from the RC6-provided power savings, the GPU frequency has more room for scaling.
So, in both of those cases, you receive some performance boost which arises from spare watts which the gfx card leaves for other components to use. As this extra performance happens on demand (e.g., when you need it, such as when running some heavy benchmarks), it does not affect the idle behavior. And under stress, when the gfx card is pushed to its limit, it gets some additional FPS as well.
So yes, it results in more power being drawn under load from what we've seen, but it is somewhat expected, I think: when you run heavy applications, it is expected that more power is used to run them. I am looking these weeks at having a userspace interface for better controlling this and other performance-related features (GPU turbo and others).
As for RC6, yes, the idea is still to have RC6 for 3.2, but no ETA yet. The plan was to try it in rc2 or rc3, after the most urgent requests, so it is still on track.
Also, there weren't many performance-related patches for 3.2 yet, mostly EDID and various output improvements. If my patch for faster EDID detection gets in, it could improve monitor detection and the boot time spent initializing the gfx driver by 30-300%, though. But nothing that should be user-visible in benchmarks and 3D workloads as far as I've seen.