Radeon VRAM Optimizations Coming, But Help Is Needed
Originally posted by curaga:
What do you want to know? I currently have 96 traces, but people are still sending some (thanks!). Very soon I won't be able to take more; I'll probably ask oibaf to remove it after this week.
@log0
I bet Minecraft is just plain badly written. The Source games use a DX-GL wrapper which likely causes some of that.
That's fairly stupid, because the saved CPU overhead is eaten up by the allocation CPU overhead.
Originally posted by marek:
I think ETQW has some kind of MegaTexture technology which loads and unloads textures during gameplay. Yeah, it can be a bad app for a bad memory manager, but it's the only way to get as much content on the screen as possible while using as little memory as possible.
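For context, MegaTexture-style streaming keeps one large resident texture and uploads tiles into it as they become visible, rather than creating and destroying texture objects. A minimal sketch, assuming Mesa's GL headers and a hypothetical tile-cache layout (this is not ETQW's actual code; TILE, the slot scheme, and stream_tile_in are made up for illustration):

#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>

#define TILE 128  /* tile edge length in texels; value is made up */

/* Upload one freshly decoded tile into a slot of the big cache texture.
 * No texture object is created or destroyed, so the VRAM footprint stays
 * constant; only the contents of an existing allocation change. */
void stream_tile_in(GLuint cache_tex, int slot_x, int slot_y,
                    const void *tile_pixels)
{
    glBindTexture(GL_TEXTURE_2D, cache_tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    slot_x * TILE, slot_y * TILE,  /* offset in texels */
                    TILE, TILE,
                    GL_RGBA, GL_UNSIGNED_BYTE, tile_pixels);
}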
The ideal app would allocate everything at startup, in size order, and then reuse those buffers forever, allocating nothing new. Conversely, a bad app causes a lot of churn (it doesn't matter whether it's direct or indirect, i.e. done by the app itself or by the driver in response to the app), making it hard to get good performance.
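A minimal sketch of that ideal pattern, assuming a small fixed pool of vertex buffers created once at startup (the pool sizes and names are illustrative, not from any real app):

#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>

#define POOL_SIZE 3
static GLuint pool[POOL_SIZE];
/* Allocated smallest to largest, once, at startup. */
static const GLsizeiptr pool_bytes[POOL_SIZE] = { 64 << 10, 256 << 10, 1 << 20 };

/* Startup: the only place that allocates buffer storage. */
void init_buffers(void)
{
    glGenBuffers(POOL_SIZE, pool);
    for (int i = 0; i < POOL_SIZE; i++) {
        glBindBuffer(GL_ARRAY_BUFFER, pool[i]);
        glBufferData(GL_ARRAY_BUFFER, pool_bytes[i], NULL, GL_DYNAMIC_DRAW);
    }
}

/* Per frame: refill existing storage, never create or free buffers. */
void refill_buffer(int i, const void *data, GLsizeiptr bytes)
{
    glBindBuffer(GL_ARRAY_BUFFER, pool[i]);
    glBufferSubData(GL_ARRAY_BUFFER, 0, bytes, data);
}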
That will depend on the allocator, I guess. To be explicit, what I meant is that they call glBufferData with NULL to discard the previous data and avoid stalls when mapping the buffer. They don't generate new buffers. Does Mesa handle such cases (with some double/ring-buffer scheme)?
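That is the classic buffer-orphaning idiom; a minimal sketch of what I mean (the function name, vbo, and the data are illustrative):

#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>
#include <string.h>

/* Respecify the buffer with NULL data before mapping: the driver can
 * "orphan" the old storage (which the GPU may still be reading) and hand
 * back fresh or recycled memory, so glMapBuffer doesn't have to stall. */
void upload_dynamic_data(GLuint vbo, GLsizeiptr size, const void *data)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_STREAM_DRAW);  /* orphan */
    void *p = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
    if (p) {
        memcpy(p, data, (size_t)size);
        glUnmapBuffer(GL_ARRAY_BUFFER);
    }
}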
Are you tracing buffer deallocation and lifetime? This might be helpful in getting a better idea of what these games are doing, and would be a simple way to invalidate my guess.
Originally posted by curaga:
That's fairly stupid, because the saved CPU overhead is eaten up by the allocation CPU overhead.
while (1) {
    create_buffer();   /* allocate a fresh buffer          */
    use_buffer();      /* fill it and draw from it         */
    release_buffer();  /* throw it away again, every frame */
}
Actually, a lot of code in Mesa doesn't worry about the allocation overhead and assumes this optimization is implemented.
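In concrete GL calls, the loop above looks roughly like this (the draw call and data are placeholders, and attribute setup is omitted); it's this per-frame create/fill/delete churn that the driver's allocator is expected to make cheap, e.g. by recycling the freed storage internally:

#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>

/* The churn pattern spelled out: a buffer object is created, filled,
 * used once, and destroyed every single frame. A driver that recycles
 * freed storage makes this nearly as cheap as reusing one buffer. */
void draw_frame(const void *verts, GLsizeiptr bytes)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);                                        /* create  */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, bytes, verts, GL_STREAM_DRAW);  /* fill    */
    glDrawArrays(GL_TRIANGLES, 0, 3);                             /* use     */
    glDeleteBuffers(1, &vbo);                                     /* release */
}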
Originally posted by log0:
That will depend on the allocator, I guess. To be explicit, what I meant is that they call glBufferData with NULL to discard the previous data and avoid stalls when mapping the buffer. They don't generate new buffers. Does Mesa handle such cases (with some double/ring-buffer scheme)?
Here's a quick and dirty plot of buffer creations (blue) and destructions (red) at one-second resolution from tf2_1.bin:
[plot: buffer creations vs. destructions per second, tf2_1.bin]
The numbers of creations and destructions are quite close most of the time. It would be interesting to know what happened between 40-180 and 180-280 seconds.
I'd like to look at the amount of memory allocated/released (and maybe the lifetime distribution); I'll post a plot a bit later.
Last edited by log0; 04 February 2014, 02:21 PM.