Originally posted by mugginz
If you allocate it out of card memory, it's ALLOCATED. It's not swapped out. It's not placed on the host machine. It's "active" the moment you go to use it, even if it's not being displayed at the time, and it stays that way until you release it. That's the way it is on Linux. That's the way it is on OSX. That's the way it is on Windows. That 80 MB is used the moment you fire up the display, as it's the final canvas that you push bits into. It's the framebuffer. Any off-screen or overlaid rendering targets (windows, pbuffers, etc.) are peeled out of that space. You can't pull memory out of the vertex pool to give to the framebuffer pool. You can't pull texture memory out to hand it to framebuffers, though for some operations you can use it as an offscreen render target to provide dynamic textures for 3D operations. NVidia's probably doing something slightly different such that they don't need triple buffering, but you're still going to need LOTS of card RAM to do three screens at peak resolutions well, even with their cards.
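For context, here's a rough back-of-envelope sketch in C of how quickly plain colour buffers eat card memory across three screens. The resolution (1920x1200) and 32-bit colour depth are my assumptions, not stated in the quoted post, but triple buffering three such displays happens to land near the 80 MB figure mentioned above.

```c
/* Rough estimate of VRAM consumed by the colour buffers of a
 * multi-monitor setup. Resolution, colour depth, and buffer counts
 * are illustrative assumptions, not figures from any driver. */
#include <stdio.h>

/* Bytes needed for one screen's colour buffers. */
static unsigned long screen_bytes(unsigned w, unsigned h,
                                  unsigned bytes_per_pixel,
                                  unsigned buffers)
{
    return (unsigned long)w * h * bytes_per_pixel * buffers;
}

int main(void)
{
    const unsigned bpp = 4;      /* RGBA8888, 32 bits per pixel */
    const unsigned screens = 3;  /* e.g. a 3-wide Eyefinity group */
    const unsigned w = 1920, h = 1200;

    for (unsigned buffers = 1; buffers <= 3; buffers++) {
        unsigned long total = screens * screen_bytes(w, h, bpp, buffers);
        printf("%u buffer(s) per screen: %.1f MB\n",
               buffers, total / (1024.0 * 1024.0));
    }
    return 0;
}
```

With those assumed numbers, single buffering costs about 26 MB, double about 53 MB, and triple about 79 MB, before any pbuffers, textures, or vertex data are carved out of what's left.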
I guess their architecture doesn't support that kind of "target" buffer selection from the entire pool, as you highlight. Since you can get a frame-locked Eyefinity config like mine via Windows, I was hoping the same might be possible under Linux, but as I've found in the past, Windows' current architecture seems better suited to my requirements. Perhaps I'll end up switching to it.
You're assuming nVidia are using triple buffering to get their frame locking. I don't know if they are, but either way, it's not a config I'm going to throw at that nVidia card.
It does bring up the question of what happens on a Windows box running a DX game configured for triple buffering in an Eyefinity setup, though.