IIRC, gnome-shell uses a fair amount of glReadPixels, which isn't currently accelerated. However, it should be possible to use the new blit code to accelerate it.
Put 1000 wallpaper images in /usr/share/backgrounds and open the desktop preferences. You'll be waiting a while as it loops through its load/scale algorithm.
Just saying, GNOME's got slag in the welds.
If so, I really hope they fix this soon. There are much faster ways of doing this. They take a little more work, but if performance matters (and it should), the work is worth it. Using the GPU for picking is a neat trick, but it's one of those tricks that belongs more in academia than in real-world code, at least on current architectures. Point-in-polygon testing is neither particularly difficult nor expensive (even with more advanced collision-culling algorithms, which are only really necessary when the 2D scene has a large number of clickable regions). If it's possible in a high-end real-time simulation, it's possible in a 2.5D UI framework. I'm sure it seemed easier and cheaper to just use the GPU trick, but if it's causing problems... fix it.
At the very least, have a software-only fallback for systems where GPU picking is obviously too slow, where GPU memory is too precious for extraneous FBOs, or where the pickable objects are few enough that the raw I/O and GPU context-switch overhead of GPU picking swamps the cost of a few simple transformations and a collision-detection pass. The vast majority of useful 2D elements are rectangles, which even under basic (non-crazy) transformations end up as trapezoids, and anything more complex than that probably doesn't need to be (or even shouldn't be) clickable anyway. Likewise, pixel-perfect picking is totally unnecessary; any UI that actually requires it is a UI I never want to use.
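To illustrate how cheap the CPU-side test is: a transformed rectangle is a convex quad, and point-in-convex-quad is just four cross products. A minimal sketch in C (names and layout are my own, not Clutter's API):

```c
#include <stdbool.h>

typedef struct { float x, y; } Vec2;

/* Cross product of (b-a) and (p-a): the sign tells which side of the
 * directed edge a->b the point p lies on. */
static float edge_side(Vec2 a, Vec2 b, Vec2 p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

/* True if p lies inside the convex quad q[0..3] (vertices in consistent
 * winding order). A transformed rectangle (trapezoid) is always convex,
 * so this covers the common case; points exactly on an edge count as inside. */
bool point_in_quad(const Vec2 q[4], Vec2 p)
{
    bool any_pos = false, any_neg = false;
    for (int i = 0; i < 4; i++) {
        float s = edge_side(q[i], q[(i + 1) % 4], p);
        if (s > 0) any_pos = true;
        if (s < 0) any_neg = true;
    }
    /* Inside iff p is on the same side of every edge. */
    return !(any_pos && any_neg);
}
```

With a handful of clickable actors per scene, this runs in nanoseconds per click, with no FBO, no readback, and no GPU round trip.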
... and if the glReadPixels is being used for something else legitimate, ignore me.
Thanks for stepping in; never mind, I stopped replying to the idiot some time ago. So long as the information is there so other random visitors to the thread don't end up reading his inane rambling and taking it as truth (which is, sadly, most likely what happened to him on some other forum(s) in the past, and is how he came to accumulate and believe so much bullshit; that's why letting idiots go uncorrected is irresponsible and negligent toward the community as a whole), I'm happy. Well, I'm less irritated than I was before, anyway. And at this point it no longer matters who is right or wrong; it really isn't relevant.
From what I read, they were using the alpha color to determine what you're clicking on. (I think it was alpha.)
It's a neat trick when it is accelerated, especially on small hardware like netbooks, but when it isn't accelerated it causes really weird things to happen. Still, I don't think that's the cause of the exceedingly poor performance under UMS. I think it could possibly be falling back to software rasterization. I'm not 100% sure that's even possible, but that's what it felt like. Anybody know more about the Clutter API and how it chooses renderers?
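For context, color-based picking typically works by rendering each clickable actor flat-shaded in a unique color to an offscreen buffer, then reading back the single pixel under the cursor. A sketch of the ID/color encoding (my own illustration, not Clutter's actual scheme):

```c
#include <stdint.h>

/* Pack an object ID into an opaque RGB color (up to 2^24 distinct actors).
 * In a pick pass, each actor is drawn flat-shaded in its ID color into an
 * offscreen buffer; one single-pixel readback (e.g. glReadPixels) of the
 * pixel under the cursor then recovers the ID of whatever was on top. */
static void id_to_rgb(uint32_t id, uint8_t rgb[3])
{
    rgb[0] = (id >> 16) & 0xff;  /* red   = high byte  */
    rgb[1] = (id >> 8)  & 0xff;  /* green = middle byte */
    rgb[2] = id & 0xff;          /* blue  = low byte   */
}

/* Inverse: recover the object ID from the pixel read back after the pick pass. */
static uint32_t rgb_to_id(const uint8_t rgb[3])
{
    return ((uint32_t)rgb[0] << 16) | ((uint32_t)rgb[1] << 8) | (uint32_t)rgb[2];
}
```

The readback is the expensive part: a single-pixel glReadPixels stalls the pipeline, and on drivers where it isn't accelerated it can drop to a slow software path, which would match the symptoms described above.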
Anyway, I'm looking forward to 7.9; I'm running 7.8 git builds and they rock so hard it's not funny.
Just saw this commit today:
radeon/r200/r600: enable HW accelerated gl(Read/Copy/Draw)Pixels
I'm just wondering what this means. I saw another commit yesterday that implemented the gl*Pixels functions, but they weren't enabled in that commit.
Will this dramatically increase performance across the board, or increase compatibility? Or is it just something specific to certain edge cases?
I find it funny; Radeon users always live in the future.
When we had 2.6.32 with the 6.12.1 driver, we were waiting for 2.6.33 to get KMS working. Now that 2.6.33 and 6.12.5 are released, we're waiting for 2.6.34 and 6.13 to get DRI2 support.
Anyway, it's a good thing! I also get to watch all the improvements, from not being able to properly render images in 2.6.31 to, maybe someday, proper 3D support and CrossFire.
Well, I know glxgears isn't the best benchmark, but DRI2/KMS is now close to the old UMS fps numbers in Debian sidux with drm/xf86-ati/mesa from git, using the latest 2.6.33 kernel. Previously, KMS wasn't loading my CPU at all; now it loads it a bit, and the glxgears score is much closer to the UMS days (instead of being half the speed). I think the devs eliminated at least one of the bottlenecks in the recent commits - good work, folks!
BTW, RV710/Radeon HD 4550 here, for those curious.