Well, I know glxgears isn't the best benchmark, but DRI2/KMS is now close to the old UMS fps number in Debian sidux with drm/xf86-ati/mesa from git, using the latest 2.6.33 kernel. Previously, KMS wasn't throttling my CPU at all, and now it throttles a bit and the glxgears score is much closer to the UMS days (instead of being half the speed). I think the devs eliminated at least one of the bottlenecks in the recent commits - good work, folks!
BTW, RV710/RadeonHD 4550 here for those curious.
Open-Source ATI R600/700 Mesa 3D Performance
-
I find it funny: Radeon users always live in the future.
When we had 2.6.32 with the 6.12.1 driver, we were waiting for 2.6.33 to get KMS working. Now that 2.6.33 and 6.12.5 are released, we wait for 2.6.34 and 6.13 to get DRI2 support.
Anyway, it is a good thing! I also get to watch all the improvements, from not being able to render images properly in 2.6.31 to maybe having proper 3D support and CrossFire in the future.
-
Just saw this commit today:
radeon/r200/r600: enable HW accelerated gl(Read/Copy/Draw)Pixels
I'm just wondering what this means. I saw another commit yesterday that implemented the gl*Pixels functions, but they weren't enabled in that commit.
Will this dramatically increase performance across the board, or increase compatibility? Or is it just something specific to certain edge cases?
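For reference, the kind of call being accelerated looks something like this; a minimal sketch assuming a current GL context (my own illustration, not code from Mesa):

```c
#include <GL/gl.h>

/* Read back a single RGBA pixel at window coordinates (x, y).
 * Readbacks like this are what "HW accelerated gl(Read/Copy/Draw)Pixels"
 * covers: until that commit they went through a software path. */
static void read_pixel(int x, int y, unsigned char rgba[4])
{
    glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
}
```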
-
From what I read, they were using the alpha channel to determine what you're clicking on (I think it was alpha).
It's a neat trick when it is accelerated, especially on small hardware like netbooks, but when it isn't accelerated it causes really weird things to happen. Still, I don't think that is the cause of the exceedingly poor performance under UMS. It could possibly be falling back to software rasterization; I'm not 100% sure that is even possible, but that is what it felt like. Anybody know more about the Clutter API and how it chooses renderers?
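For anyone who hasn't seen the trick: you draw each clickable actor in a flat, unique color, then read back the single pixel under the cursor. A toy sketch of the idea (my own version, not Clutter's actual code; draw_actor_flat() is a hypothetical helper):

```c
#include <GL/gl.h>

/* Hypothetical helper: draws actor i as untextured, unlit geometry. */
extern void draw_actor_flat(int i);

/* GPU color picking: encode each actor's index as a flat color, draw
 * the scene, then read back the pixel under the cursor. The
 * glReadPixels readback is the step that hurts when it isn't
 * accelerated. GL window coordinates have a bottom-left origin. */
static int pick_actor(int win_x, int win_y, int n_actors)
{
    unsigned char rgba[4];
    int i;

    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    for (i = 0; i < n_actors && i < 255; i++) {
        /* Toy encoding: actor index + 1 in the red channel. */
        glColor3ub((unsigned char)(i + 1), 0, 0);
        draw_actor_flat(i);
    }

    glReadPixels(win_x, win_y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, rgba);

    return rgba[0] ? (int)rgba[0] - 1 : -1;  /* -1 means nothing hit */
}
```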
Anyway, I'm looking forward to 7.9; I'm running 7.8 git builds and they rock so hard it's not funny.
-
Originally posted by agd5f:
"IIRC, gnome-shell uses a fair amount of glGetPixels which isn't currently accelerated. However, it should be possible to use the new blit code to accelerate it."
If so, I really hope they just fix this soon. There are much faster ways of doing this; they take a little more work, but if performance matters (and it should), the work is worth it. Using the GPU for picking is a neat trick, but it's one of those that belongs more in academia than in real-world code, at least on current architectures. Point-to-polygon collision is not particularly difficult or expensive (even with more advanced collision-culling algorithms, which themselves are only really necessary if the 2D scene has a large number of clickable regions). If it's possible in a high-end real-time simulation, it's possible in a low-level 2.5D UI framework.
I'm sure it seemed easier and cheaper to just use the GPU trick, but if it's causing a problem... fix it.
At the very least have a software-only fallback for systems where GPU picking is obviously too slow, or GPU memory is too precious for any extraneous FBOs, or cases where the pickable objects are few enough that the raw I/O and GPU context switch overhead of GPU picking swamps the simple transformations and collision detection algorithm execution time. The vast majority of useful 2D elements are rectangles, which even with basic non-crazy transformations end up being trapezoids, and anything more complex than that is probably not something that needs to be (or even should be) clickable anyway. Likewise, pixel-perfect picking is totally unnecessary; any UI that actually requires that is a UI I don't ever want to have to use.
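Concretely, the software test I have in mind for a transformed rectangle (a convex quad) is just a few cross products; a minimal sketch with names of my own:

```c
/* Point-in-convex-quad test: a point is inside a convex polygon iff
 * it lies on the same side of every edge. Enough for picking
 * transformed rectangles without ever touching the GPU. */
typedef struct { float x, y; } vec2;

/* z-component of the cross product (b - a) x (p - a). */
static float edge_side(vec2 a, vec2 b, vec2 p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

/* quad[] must be in consistent winding order (either CW or CCW). */
static int point_in_quad(const vec2 quad[4], vec2 p)
{
    int i, pos = 0, neg = 0;

    for (i = 0; i < 4; i++) {
        float side = edge_side(quad[i], quad[(i + 1) % 4], p);
        if (side > 0.0f) pos++;
        else if (side < 0.0f) neg++;
    }
    /* Inside (or exactly on an edge) iff the non-zero signs agree. */
    return !(pos && neg);
}
```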
... and if glGetPixels is being used for something else legitimate, ignore me.
-
Put 1,000 wallpaper images in /usr/share/backgrounds and open up the desktop preferences. You'll be waiting a while as it loops through its load/scale algorithm.
Just saying, Gnome's got slag in the welds.
-
IIRC, gnome-shell uses a fair amount of glGetPixels which isn't currently accelerated. However, it should be possible to use the new blit code to accelerate it.
-
So anyway,
I've been running Git builds of the Radeon stack for a bit now. I found recently that Gnome-Shell == MASSIVE FPS drop.
Using KMS without Gnome-Shell, my HD4530 gets around 1200 fps in glxgears (not a good benchmark, but a damn good sanity/regression tester).
Load up Gnome-Shell: BAM, 300 fps.
Also, if you try to run Shell under UMS it basically grinds to a halt, even though glxgears still gets 2100 fps (well, you get that under UMS regardless). Every part of Gnome-Shell, and anything that uses compositing, runs horridly. It looks so insane to be typing at one letter per 10 seconds in a console window while glxgears is SMASHING ALONG at 2100 fps.
Anybody else running Gnome-Shell on the open-source driver stack?
(Also, QFT: "If your monitor is 75 Hz, then the game will run at 75 FPS with vsync. Simple as that."
That being said, can both you and elanthis take your pissing match out of a thread titled "Technical support and discussion of the open-source Radeon, RadeonHD, and Avivo drivers"?
I really don't care how high up you are or how long you've been here; you're both acting like children. It no longer matters who is right or wrong; it simply isn't relevant.)