Radeon VRAM Optimizations Coming, But Help Is Needed

  • I think ETQW has some kind of megatexture technology which loads and unloads textures during gameplay. Yeah, it can be a bad app for a bad memory manager, but it's the only way to get as much content on the screen as possible while using as little memory as possible.



    • Originally posted by curaga View Post
      What do you want to know? I currently have 96 traces, but people are still sending more (thanks!). Very soon I won't be able to take more; I'll probably ask oibaf to remove it after this week.

      @log0

      I bet Minecraft is just plain badly written. The Source games use a DX-GL wrapper which likely causes some of that.



      That's fairly stupid, because the saved cpu overhead is eaten up by the allocation cpu overhead.
      That will depend on the allocator, I guess. To be explicit, what I meant is that they call glBufferData with NULL to discard the previous data and avoid stalls when mapping the buffer. They don't generate new buffers. Is Mesa handling such cases (with some double/ring-buffer scheme)?

      Are you tracing buffer deallocation and lifetime? This might be helpful in getting a better idea of what these games are doing, and would be a simple way to invalidate my guess.



      • Originally posted by marek View Post
        I think ETQW has some kind of megatexture technology which loads and unloads textures during gameplay. Yeah, it can be a bad app for a bad memory manager, but it's the only way to get as much content on the screen as possible while using as little memory as possible.
        Well, for an ideal app you wouldn't need a memory manager at all.

        The ideal app would allocate everything at startup, in size order, and then reuse those buffers till eternity with nothing new. Conversely, a bad app causes much churn (it doesn't matter if it's direct or indirect, i.e. done by the app itself or by the driver in response to the app), making it hard to get good performance.

        To be explicit, what I meant is that they call glBufferData with NULL to discard the previous data and avoid stalls when mapping the buffer. They don't generate new buffers. Is Mesa handling such cases (with some double/ring-buffer scheme)?
        Yes, that causes an allocation and a free of the old buffer; no sophisticated caching that I know of. It avoids a GPU stall on the old buffer, but causes a different stall while the memory manager does its thing. It's usually a win, yes, unless the allocation causes eviction, in which case it's very bad.

        Are you tracing buffer deallocation and lifetime? This might be helpful in getting a better idea of what these games are doing, and would be a simple way to invalidate my guess.
        Lifetime: not explicitly, but you can calculate that from the data (destruction timestamp minus creation timestamp). The data is open; you're welcome to hack: http://github.com/clbr/hotbos
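The "destruction minus creation timestamp" calculation curaga describes can be sketched as below. The record layout here is hypothetical, purely for illustration; the real trace format lives in the hotbos repo linked above.

```c
#include <stddef.h>

/* Hypothetical per-buffer record; the actual hotbos layout may differ. */
typedef struct {
    unsigned long long created_ns;   /* creation timestamp */
    unsigned long long destroyed_ns; /* destruction timestamp */
} bo_record;

/* Lifetime of one buffer object: destruction minus creation. */
static unsigned long long lifetime_ns(const bo_record *r) {
    return r->destroyed_ns - r->created_ns;
}

/* Mean lifetime over a parsed trace. */
static double mean_lifetime_ns(const bo_record *recs, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += (double)lifetime_ns(&recs[i]);
    return n ? sum / (double)n : 0.0;
}
```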



        • Originally posted by curaga View Post
          That's fairly stupid, because the saved cpu overhead is eaten up by the allocation cpu overhead.
          The driver doesn't actually release buffers immediately. Instead, it adds them to a list, so that when an app asks for a new buffer, it gets one from the list. It only returns buffers no longer in use by the GPU. Therefore, you can do things like this with no allocation overhead:

          while (1) {
              create_buffer();
              use_buffer();
              release_buffer();
          }

          Actually, a lot of code in Mesa doesn't worry about the allocation overhead and assumes this optimization is implemented.
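A toy model of the reuse list described in this post: released buffers are parked on a cache list instead of being freed, and a later allocation of the same size is served from that list once the "GPU" is done with the buffer. All names and the fence check are illustrative, not Mesa's actual code.

```c
#include <stddef.h>
#include <stdlib.h>

typedef struct buffer {
    size_t size;
    unsigned fence;      /* last GPU use; idle once fence <= gpu_completed */
    struct buffer *next; /* link in the cache list */
} buffer;

static buffer *cache_head;
static unsigned gpu_completed; /* highest fence the GPU has finished */

static buffer *create_buffer(size_t size) {
    buffer **link = &cache_head;
    for (buffer *b = cache_head; b; link = &b->next, b = b->next) {
        /* Reuse only idle buffers of a matching size. */
        if (b->size == size && b->fence <= gpu_completed) {
            *link = b->next; /* unlink: reused without allocating */
            b->next = NULL;
            return b;
        }
    }
    buffer *b = calloc(1, sizeof *b); /* cache miss: real allocation */
    b->size = size;
    return b;
}

static void use_buffer(buffer *b, unsigned fence) { b->fence = fence; }

static void release_buffer(buffer *b) {
    b->next = cache_head; /* not freed: parked for reuse */
    cache_head = b;
}
```

With this in place, the create/use/release loop above hits the cache on every iteration after the first, which is why Mesa code can afford not to worry about allocation overhead.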



          • Originally posted by log0 View Post
            That will depend on the allocator, I guess. To be explicit, what I meant is that they call glBufferData with NULL to discard the previous data and avoid stalls when mapping the buffer. They don't generate new buffers. Is Mesa handling such cases (with some double/ring-buffer scheme)?
            Yes, glBufferData(NULL) reallocates the buffer. Yes, our driver handles this case efficiently using the optimization I briefly described above.
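As a rough model of why orphaning avoids a stall, here is a toy double-buffer scheme (purely illustrative; per the post above, the driver actually reuses the cached-buffer optimization rather than a fixed ring): the buffer object keeps its name while writes cycle between backing slots, so the CPU can fill one slot while the GPU still reads another.

```c
#define SLOTS 2

typedef struct {
    int cpu_slot; /* slot the CPU currently writes */
    int gpu_slot; /* slot the GPU is still reading, or -1 if idle */
} ring;

/* Analogue of glBufferData(..., NULL, ...): move the CPU to a fresh
 * slot. Returns the new slot, or -1 if every slot is busy (a stall). */
static int orphan(ring *r) {
    int next = (r->cpu_slot + 1) % SLOTS;
    if (next == r->gpu_slot)
        return -1;
    r->cpu_slot = next;
    return next;
}
```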



            • Originally posted by curaga View Post
              The ideal app would allocate everything at startup, in size order, and then reuse those buffers til eternity with nothing new.
              That would be 16 GB of VRAM allocated at startup for GTA IV. I wonder why they didn't do that.



              • Thanks for the correction, I don't remember seeing that anywhere.

                Originally posted by marek View Post
                That would be 16 GB of VRAM allocated at startup for GTA IV. I wonder why they didn't do that.
                Rockstar's artists clearly suck at using texture space effectively, must be!



                • Here's a quick and dirty plot of buffer creations (blue) and destructions (red), at one-second resolution, from tf2_1.bin:

                  [plot image]

                  The numbers of creations and destructions are quite close most of the time. It would be interesting to know what happened between 40-180 and 180-280 seconds.

                  I'd like to look at the amount of memory allocated/released (and maybe the lifetime distribution); I'll post a plot a bit later.
                  Last edited by log0; 02-04-2014, 01:21 PM.
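The per-second counting behind a plot like this can be sketched as below. The event layout is hypothetical; the real hotbos trace format may differ.

```c
#include <stddef.h>
#include <string.h>

enum ev_type { EV_CREATE, EV_DESTROY };

typedef struct {
    unsigned long long t_ns; /* event timestamp in nanoseconds */
    enum ev_type type;
} bo_event;

#define MAX_SECONDS 600

/* Count buffer creations and destructions per one-second bin. */
static void bucket_per_second(const bo_event *ev, size_t n,
                              unsigned created[MAX_SECONDS],
                              unsigned destroyed[MAX_SECONDS]) {
    memset(created, 0, MAX_SECONDS * sizeof *created);
    memset(destroyed, 0, MAX_SECONDS * sizeof *destroyed);
    for (size_t i = 0; i < n; i++) {
        unsigned long long sec = ev[i].t_ns / 1000000000ULL;
        if (sec >= MAX_SECONDS)
            continue; /* outside the plotted window */
        if (ev[i].type == EV_CREATE)
            created[sec]++;
        else
            destroyed[sec]++;
    }
}
```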



                  • I had a quick look at buffer sizes in the 40-180 second area: min around 256 bytes, max 1-2 MB, and average 80-200 KB. A lot of small buffer allocations; it could be the GUI, maybe.



                    • Ah, nice. Is that custom code, or text processing via statinfo?
