ATI R600/700 OSS 3D Driver Reaches Gears Milestone
I believe the idea of a "buffer swap" dates back to the days when OpenGL apps usually ran full screen and an overlay was floated in front for menus etc...

If the app is running full screen then AFAIK it is possible to do an actual swap, usually called "page flipping" in X/DRI-speak. If the app is not running full screen, however, then you need to copy from the back buffer (hidden) to the front buffer (on screen) without affecting the other content which is already on the screen.

That said, I don't see page flipping used on fullscreen apps as much as I would expect, and I'm not sure why. It may be that the growing use of compositors (which add their own copy step anyways) is displacing the traditional "OpenGL owns the screen" style of operation, or it may just be a lower priority than all the other big changes currently being made in the stack. A high-end GPU can do a full-screen copy in well under a millisecond anyways, and even a low-end GPU only takes a few milliseconds.

Last edited by bridgman; 16 July 2009, 09:19 AM.
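To make the two paths concrete, here is a minimal sketch in C. This is illustrative only: the struct and function names are made up, and a real driver would go through DRM ioctls and the GPU's blit engine rather than CPU pointers and memcpy.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative only: a "framebuffer" reduced to a pixel pointer plus
 * dimensions and a stride in pixels. Real DRI buffers live in GPU memory. */
struct fb {
    uint32_t *pixels;
    int       width, height, stride;
};

/* Full-screen case ("page flipping"): nothing is copied. The front
 * (scanout) and back buffer pointers are exchanged, ideally at vblank. */
static void present_flip(struct fb **front, struct fb **back)
{
    struct fb *tmp = *front;
    *front = *back;
    *back  = tmp;
}

/* Windowed case: copy the window's rectangle from the back buffer into
 * the front buffer, leaving the rest of the screen (other windows, the
 * desktop) untouched. A driver would use the 2D engine, not memcpy. */
static void present_copy(struct fb *front, const struct fb *back,
                         int win_x, int win_y)
{
    for (int row = 0; row < back->height; row++)
        memcpy(front->pixels + (size_t)(win_y + row) * front->stride + win_x,
               back->pixels  + (size_t)row * back->stride,
               (size_t)back->width * sizeof(uint32_t));
}
```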
Right, of course. Didn't think that all the way through. For windowed apps a copy makes sense, as they effectively share the front buffer with the window manager (in DRI1 at least). Guess the overhead is simply smaller than I imagined. 'Twas just something about copying that triggered my premature optimization circuits.
Originally posted by bridgman: A high-end GPU can do a full-screen copy in well under a millisecond anyways, and even a low-end GPU only takes a few milliseconds.
Fglrx seems to enable page flipping on unredirected fullscreen apps but falls back to copying when the window becomes redirected (e.g. a background window pops up or you leave fullscreen). You get a momentary flicker, but it's a good compromise between performance and visual quality. Older Windows drivers (esp. nvidia ones) used to flicker horribly whenever a notification / menu came in front of an OpenGL/D3D app.
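A rough sketch of the policy described above; the names here are mine, not from the actual fglrx source.

```c
#include <stdbool.h>

/* Hypothetical window state -- not an actual fglrx or X11 structure. */
struct win_state {
    bool fullscreen;   /* window covers the whole screen */
    bool redirected;   /* a compositor has redirected it offscreen */
};

/* Flip only while the window owns the screen outright; as soon as it is
 * redirected (a background window pops up, fullscreen is left), fall back
 * to copying. The one-time switch between paths is the momentary flicker. */
static bool should_page_flip(const struct win_state *w)
{
    return w->fullscreen && !w->redirected;
}
```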
Originally posted by BlackStar: Older Windows drivers (esp. nvidia ones) used to flicker horribly whenever a notification / menu came in front of an OpenGL/D3D app.

Yeah, now it only flickers where the notification is. Sometimes a bit annoying if you're trying to concentrate and the notification jumps in and out at intervals of less than a second before calming down.
Originally posted by nanonyme: Yeah, now it only flickers where the notification is. Sometimes a bit annoying if you're trying to concentrate and the notification jumps in and out at intervals of less than a second before calming down.
Originally posted by tormod: From what I understand (I have edited a previous post of mine about this) you still need to use libdrm from Alex's repo. But you can _build_ your mesa with libdrm from git master.
Originally posted by BlackStar: More like microseconds, actually. 1080p is ~7.9MB per frame, which translates to something between 80μs (ultra high-end GPUs with GDDR5 memory) and 4ms (ultra low-end Intel IGPs with single-channel DDR2 shared memory).
I was thinking of "high end" as starting somewhere around 50-60GB/s peak bandwidth, say a 3850/4850 and up, where the copy time would be 0.4-0.5 ms. I agree that if you go right up to a 4870/4890 you can probably cut that in half again.

Last edited by bridgman; 16 July 2009, 12:20 PM.
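For reference, the arithmetic behind these estimates. The bandwidth figures below are rough peak numbers I'm assuming for illustration, not measurements; effective copy rates are lower, which is why the estimates in the thread come out somewhat higher.

```c
#include <stdio.h>

int main(void)
{
    /* 1920 x 1080 at 32bpp is ~7.9 MiB per frame, as BlackStar notes.
     * A copy reads the back buffer and writes the front buffer, so the
     * memory traffic is roughly twice the frame size. */
    const double frame_bytes = 1920.0 * 1080.0 * 4.0;
    const double traffic     = 2.0 * frame_bytes;

    /* Assumed peak bandwidths in bytes/s -- rough, not measured values. */
    const double bw[]    = { 6e9, 60e9, 115e9 };
    const char  *label[] = { "single-channel DDR2 IGP (~6 GB/s)",
                             "3850/4850 class (~60 GB/s)",
                             "4870/4890 class (~115 GB/s)" };

    for (int i = 0; i < 3; i++)
        printf("%-34s: %.2f ms per full-screen copy\n",
               label[i], traffic / bw[i] * 1e3);
    return 0;
}
```

At 60 GB/s this gives roughly 0.28 ms at perfect efficiency, so bridgman's 0.4-0.5 ms figure corresponds to realistic (sub-peak) copy rates.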
I just retried the test with compositing disabled, but that didn't change anything.
EDIT: Switched libdrm repo to the one from Alex, but now I only get the software renderer recognized by glxinfo and glxgears (which works, of course).

Last edited by LiquidAcid; 16 July 2009, 12:33 PM.