
AMD fglrx 8.42.3 leaking gobs of memory in OpenGL apps - any known workaround ?


  • #21
    Originally posted by Alistair
    Now for the killer laugh of the week.

    I have *not* touched my configuration one iota since the previous tests.

    What I DID do was power down the box, put my good old trusty 9550Pro 256M card back in, boot up, and run both glxgears and fgl_glxgears.

    Either there is *no* memory leak with this card, or the rate at which it leaks memory is *substantially* lower.
    Different chipsets, different code pathways within the driver. It's entirely conceivable that this really IS the case here.


    • #22
      Originally posted by Svartalf
      Different chipsets, different code pathways within the driver. It's entirely conceivable that this really IS the case here.
      Certainly I figure that's the case. Now to build one.....
      (goes back to collecting the evidence)


      • #23
        I am curious to know why it doesn't happen on my machine.

        I ran fgl_glxgears for two hours straight, and the memory utilization I read from the proc filesystem stayed constant at about 67 MB for the whole period.

        X200 chipset (Intel), FC4, Xorg 6.8.2
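        TheIcebreaker's check of memory usage via the proc filesystem is easy to reproduce. Here is a rough sketch (my own helper, not his exact method; Linux-specific, with the parsing demonstrated on a canned sample so the logic is clear):

```python
# Sketch: read a process's resident memory from the proc filesystem (Linux).
# The parsing is demonstrated on a canned sample so it runs anywhere.

def parse_vmrss_kb(status_text):
    """Extract VmRSS (resident memory, in KB) from /proc/<pid>/status text."""
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1])  # field is reported in kB
    return None

sample = "Name:\tglxgears\nVmSize:\t  21744 kB\nVmRSS:\t   9952 kB\n"
print(parse_vmrss_kb(sample))  # -> 9952

# On a live Linux box one would poll the real file instead, e.g.:
#   with open(f"/proc/{pid}/status") as f:
#       rss_kb = parse_vmrss_kb(f.read())
```

        Polling this once a second while glxgears runs makes a leak of the size discussed here obvious within seconds.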


        • #24
          Originally posted by TheIcebreaker
          X200 chipset (intel) FC4 Xorg 6.8.2
          You've pegged why. That's an R300-derivative chip, and it works differently than any of the R400, R500, and R600 chips. It's using (and stressing) different code paths in the primitive-ops layer when you're doing things.


          • #25
            Originally posted by Snake
            We're on Linux, where a bug is no "embarrassing taboo", but something that happens all the time (me being the exception, of course).
            Heh... In this case, unfortunately, it's more an embarrassing failure on AMD's part to do proper QA. Never mistake this for me saying they don't have some of the brightest developers and coders in the OpenGL space (they are some of the best, but they're very, very few, and I suspect they're more Windows developers than Linux ones and haven't a clue about things like Valgrind, Oprofile, or even VTune on Linux. Emphasis added to give 'em a hint as to what to go use; they DO read this forum, make NO mistake on that score...).

            I blame their employer, formerly known as ATI, for not even remotely taking this seriously enough when they were ATI. I blame their employer, now known as AMD, for not taking this seriously enough when they took the company over. I can only hope they realize that what we're being handed would pretty much get them nuked from orbit in the Windows world, and it's about to do exactly that in what might be one of their only future markets.


            • #26
              Interesting Observations

              A few interesting (to me, at least) commonalities based on my own informal testing and the posts of other members:

              The memory leak seems to be directly proportional to frame rate: running glxgears, I observed that increasing the frame rate (by shrinking or hiding the window) proportionally increased memory consumption, while decreasing the frame rate (by expanding the window or moving it around the screen) slowed the rate of memory loss as well.

              The size and complexity of the frame being drawn seems to have no impact on the rate of memory loss (other than slowing it down by reducing frame rate): fgl_glxgears runs slower and loses memory more slowly than glxgears, and when running Doom 3 (on my slow X1400M, at least) the leak is barely noticeable.

              The trend of posts I have read seems to indicate that older or lower-end cards either do not suffer from the leak or leak so slowly that it is unnoticeable.

              This seems to indicate that the leak is tied to code that runs a relatively fixed number of times per frame swap / redraw. It also makes me wonder if the leak is tied to a portion of the driver code used only by cards supporting a more recent / advanced feature. The fact that the biggest leak seems to originate from somewhere in XF86DRIGetDeviceInfo (according to valgrind) might bear this out...

              Ironically, it seems that the more advanced / expensive the card, the faster the frame rate and (consequently) the faster the memory leak.
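              If the leak really is per-frame, the bytes leaked per frame should come out roughly constant across window sizes even though the MB/s rate varies. A back-of-the-envelope check of that idea; note the leak rates and frame rates below are made-up but plausible numbers, not measurements from this thread:

```python
# Sanity check of the "leak per frame" hypothesis: if the leak rate scales
# with frame rate, bytes leaked per frame should be roughly constant.

def bytes_leaked_per_frame(leak_mb_per_s, fps):
    return leak_mb_per_s * 1024 * 1024 / fps

# Hypothetical observations: small window (high fps) vs. large window (low fps).
fast = bytes_leaked_per_frame(2.8, 3000)  # small window
slow = bytes_leaked_per_frame(0.9, 1000)  # large window

# Similar per-frame costs would support the per-frame-leak theory.
print(round(fast), round(slow))
```

              With these assumed numbers both come out near 1 KB per frame, which is the kind of agreement that would point at a fixed-size allocation leaked once per buffer swap.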


              • #27
                Here's the leak summary of my valgrind run. The "definitely lost" line is much smaller than Snake's.
                Guess my X800 Pro / 420 chipset isn't taking much of a hit.
                I wonder if the leak would be more pronounced if I ran x86_64.

                ==16992== LEAK SUMMARY:
                ==16992==    definitely lost: 216 bytes in 63 blocks.
                ==16992==    indirectly lost: 2,104 bytes in 8 blocks.
                ==16992==      possibly lost: 1,488 bytes in 32 blocks.
                ==16992==    still reachable: 18,362,334 bytes in 2,930 blocks.
                ==16992==         suppressed: 0 bytes in 0 blocks.
                ==16992== Reachable blocks (those to which a pointer was found) are not shown.
                ==16992== To see them, rerun with: --leak-check=full --show-reachable=yes
                Those who would give up Essential Liberty to purchase a little Temporary Safety, deserve neither Liberty nor Safety.
                Ben Franklin, 1755
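                To compare "definitely lost" figures between runs without eyeballing the output, the summary lines can be parsed. A quick sketch (my own helper, nothing valgrind ships), shown against the summary quoted above:

```python
import re

# Pull the byte counts out of a valgrind LEAK SUMMARY block so that
# figures like "definitely lost" can be compared across runs.

def parse_leak_summary(text):
    counts = {}
    for kind in ("definitely lost", "indirectly lost", "possibly lost",
                 "still reachable", "suppressed"):
        m = re.search(re.escape(kind) + r":\s+([\d,]+) bytes", text)
        if m:
            counts[kind] = int(m.group(1).replace(",", ""))
    return counts

summary = """\
==16992== LEAK SUMMARY:
==16992==    definitely lost: 216 bytes in 63 blocks.
==16992==    indirectly lost: 2,104 bytes in 8 blocks.
==16992==      possibly lost: 1,488 bytes in 32 blocks.
==16992==    still reachable: 18,362,334 bytes in 2,930 blocks.
==16992==         suppressed: 0 bytes in 0 blocks.
"""
print(parse_leak_summary(summary)["definitely lost"])  # -> 216
```

                Note that in this run the big number is "still reachable", not "definitely lost", which is consistent with the driver holding (rather than orphaning) the memory it keeps allocating.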


                • #28
                  Beat this!

                  glxgears memory usage as reported by 'ps' on my Thinkpad Z61m with an ATI X1400 Mobility:

                  Time (s) --- RAM: VSZ / RSS (KB)

                  0 --- 21744 / 9952
                  5 --- 53368 / 43896
                  10 --- 67496 / 58884
                  15 --- 81752 / 73188
                  20 --- 95744 / 87248
                  25 --- 109736 / 101232
                  30 --- 123728 / 115240
                  35 --- 137720 / 129240
                  40 --- 151712 / 143256
                  45 --- 165704 / 157248
                  50 --- 179696 / 171252
                  55 --- 193688 / 185212
                  60 --- 207812 / 199408

                  Hmm... so after the fast bump in memory usage during the initial 5 seconds, glxgears grabs about 14 megs of memory every 5 seconds. After one minute, glxgears has grabbed 200 megs. That is the single most disastrous memory leak I have ever seen. Nice.

                  As someone mentioned before, this is presumably a single simple bug being iterated over and over. It shouldn't be too difficult for ATI/AMD to fix.
                  Last edited by korpenkraxar; 11-17-2007, 12:56 PM. Reason: Added a little more info
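                  korpenkraxar's table lends itself to a quick sanity check. A sketch that computes the growth rate from the VSZ column (numbers copied from the ps output above, skipping the start-up interval):

```python
# VSZ samples (KB) from the ps table above, taken every 5 seconds.
vsz_kb = [21744, 53368, 67496, 81752, 95744, 109736, 123728,
          137720, 151712, 165704, 179696, 193688, 207812]

# Skip the first interval (process start-up), then average the growth.
deltas = [b - a for a, b in zip(vsz_kb[1:], vsz_kb[2:])]
avg_kb_per_5s = sum(deltas) / len(deltas)

print(round(avg_kb_per_5s))                # ~14 MB every 5 seconds
print(round(avg_kb_per_5s / 5 / 1024, 2))  # leak rate in MB/s
```

                  The growth is strikingly linear, roughly 2.7 MB/s on this card, which again suggests a fixed amount leaked per frame rather than anything load-dependent.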


                  • #29
                    Because they don't Valgrind or Oprofile things and the QA people probably didn't test against the cards that seem to have the serious leak issue.
                    At times like this I usually think to myself, "what the hell is that beta-tester program for?"

                    This really should have come out in beta testing, unless we're expected to be the beta-testers now.

                    On the other hand, I guess that's what the beta warning on the release notes page is for.


                    • #30
                      Originally posted by yoshi314
                      At times like this I usually think to myself, "what the hell is that beta-tester program for?"

                      This really should have come out in beta testing, unless we're expected to be the beta-testers now.
                      Agreed. I could even put up with us users being beta-testers if only I had the feeling our discoveries and reports made some sort of difference.