RV350, compositing and horrible performance.


  • #21
    Originally posted by oliver View Post
    Well I also noticed that AGP mode was forced to 1. (Yes, AGP, remember?) This bug report mentions it and fixes it: https://bugs.launchpad.net/ubuntu/+s...ux/+bug/544988
    Someone probably added a quirk to force their system to AGP 1x for stability that also applies to your system. I wouldn't worry too much about it. I doubt you'd notice any difference in performance.
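
    For reference, the quirk table in the radeon kernel driver looks roughly like this (a paraphrase of drivers/gpu/drm/radeon/radeon_agp.c from memory; the sample entry is purely illustrative, not a real upstream quirk):

    Code:
    #include <stdint.h>

    /* One row of the AGP-mode quirk list: match a host bridge /
       card / board combination and force a known-stable AGP mode. */
    struct radeon_agpmode_quirk {
        uint32_t hostbridge_vendor;  /* PCI IDs of the AGP host bridge */
        uint32_t hostbridge_device;
        uint32_t chip_vendor;        /* PCI IDs of the Radeon itself */
        uint32_t chip_device;
        uint32_t subsys_vendor;      /* board subsystem IDs */
        uint32_t subsys_device;
        uint32_t default_mode;       /* AGP mode to force, e.g. 1 */
    };

    /* Hypothetical entry: force AGP 1x for an RV350 behind one
       particular host bridge (IDs made up for the example). */
    static struct radeon_agpmode_quirk example_quirk = {
        0x8086, 0x2570,   /* host bridge */
        0x1002, 0x4150,   /* RV350 */
        0xffff, 0xffff,   /* any subsystem */
        1,                /* force 1x */
    };

    A quirk entry written for someone else's flaky board can end up matching your hardware too, which is how you get 1x without asking for it.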

    Originally posted by oliver View Post
    Before the AGP change, this is what i found in xorg.log
    [ 11.498] (II) RADEON(0): mem size init: gart size :fdff000 vram size: s:4000000 visible:3a1c000
    Does that look like 64 MiB of vram?
    Yes, 64MB (0x4000000 bytes is exactly 64 MiB). Note that the number printed here is not total vram, but the maximum amount that the CPU can map. In your case, they are the same.
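
    The sizes in that log line are plain byte counts in hex, so you can decode them directly (a quick sanity check, nothing driver-specific):

    Code:
    #include <stdio.h>

    /* Decode the hex byte counts from the xorg.log line above. */
    int main(void)
    {
        printf("gart:    %lu MiB\n", 0x0fdff000UL >> 20); /* ~253 MiB */
        printf("vram:    %lu MiB\n", 0x04000000UL >> 20); /* 64 MiB */
        printf("visible: %lu MiB\n", 0x03a1c000UL >> 20); /* ~58 MiB CPU-mappable */
        return 0;
    }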

    Originally posted by oliver View Post
    [ 11.499] (II) RADEON(0): [DRI2] DRI driver: r300
    [ 11.499] (II) RADEON(0): [DRI2] VDPAU driver: r300
    Why not r300g? Or is this just a naming thing?
    r300g is just what we call the r300 gallium driver. The actual lib is r300_dri.so. It's the same name whether you are using the r300 gallium driver or the old r300 classic mesa driver.

    Originally posted by oliver View Post
    [ 11.499] (II) RADEON(0): Front buffer size: 5808K
    [ 11.499] (II) RADEON(0): VRAM usage limit set to 48297K

    and later:
    [ 42.599] (II) RADEON(0): VRAM usage limit set to 50760K


    After reboot btw, those limits remain. So why a limit lower than available VRAM?
    It's the amount of CPU-mappable vram remaining after the initial ddx allocations (front buffer, cursors, etc.).
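
    Plugging in the numbers from the log above as a back-of-envelope check (the real ddx accounting has more terms; this only shows the relationship):

    Code:
    #include <stdio.h>

    int main(void)
    {
        unsigned long visible_kb = 0x3a1c000UL / 1024; /* 59504K CPU-mappable */
        unsigned long front_kb   = 5808;               /* front buffer */
        unsigned long limit_kb   = 48297;              /* reported usage limit */

        /* Whatever is left went to cursors and other initial allocations. */
        printf("other allocations: ~%luK\n", visible_kb - front_kb - limit_kb);
        return 0;
    }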



    • #22
      I did figure as much, but just wanted to double-check.

      AGPmode=4 does FEEL faster. Won't AGPmode affect memory bandwidth to/from the card, e.g. when swapping via gart?



      • #23
        Originally posted by oliver View Post
        I did figure as much, but just wanted to double-check.

        AGPmode=4 does FEEL faster. Won't AGPmode affect memory bandwidth to/from the card, e.g. when swapping via gart?
        In theory yes; in practice it doesn't really make a lot of difference.
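
        For a sense of scale: AGP's 66 MHz x 32-bit base bus gives roughly 266 MB/s, multiplied by the transfer mode, so the theoretical peaks work out like this (peaks only; real transfers rarely get close, which is why the practical difference is small):

        Code:
        #include <stdio.h>

        /* Theoretical AGP peak bandwidth per transfer mode. */
        int main(void)
        {
            const int modes[] = { 1, 2, 4, 8 };
            for (int i = 0; i < 4; i++)
                printf("AGP %dx: ~%d MB/s peak\n", modes[i], 266 * modes[i]);
            return 0;
        }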



        • #24
          I remember when the first PCIe cards were released, just about every review I read said the bandwidth was orders of magnitude overkill. That much bandwidth simply could not be utilized.



          • #25
            Well, ideally, you want to minimize the amount of traffic going across the bus. If you have to migrate a lot of data back and forth, you've already lost.



            • #26
              Originally posted by duby229 View Post
              I remember when the first PCIe cards were released, just about every review I read said the bandwidth was orders of magnitude overkill. That much bandwidth simply could not be utilized.
              It depends on the card - newer ones will show improvements even going from PCIe2 to PCIe3, at least in certain tests. At the time, I don't think anything was maxing out AGP8x, but manufacturers knew that those cards were coming.



              • #27
                It likely uses some CPU fallback. How to check that, I have no idea.




                • #28
                  Originally posted by agd5f View Post
                  What is your question? vram has much better bandwidth compared to system memory from the GPU's perspective, so in most cases it's preferred to store buffers in vram. If we don't have enough vram to cover all the requirements, we may end up with thrashing. There's probably room for improvement with respect to the heuristics used to decide which pools we allocate from and whether we migrate or not. A ttm de-fragmenter would probably also be helpful. Either of these would be a good project that doesn't require low-level GPU-specific knowledge and could provide nice performance improvements.
                  I seem to have the same issue on my RV620, not with the desktop but with games that don't seem to be all that graphically intense.
                  Are you able to give some more details on where exactly one would look to start implementing a ttm de-fragmenter? I'm not sure if I'm up to the task, but it does sound interesting and I would be keen to at least have a look around.

                  Thanks for your time.
                  Last edited by timothyja; 09 May 2013, 03:48 AM.



                  • #29
                    Originally posted by Ray7brian2 View Post
                    It likely uses some CPU fallback. How to check that, I have no idea.
                    Gallium doesn't support software fallbacks. The driver may still be doing something inefficiently, however.



                    • #30
                      Originally posted by timothyja View Post
                      I seem to have the same issue on my RV620, not with the desktop but with games that don't seem to be all that graphically intense.
                      Are you able to give some more details on where exactly one would look to start implementing a ttm de-fragmenter? I'm not sure if I'm up to the task, but it does sound interesting and I would be keen to at least have a look around.

                      Thanks for your time.
                      First, try a newer kernel and a newer version of mesa; there have been a lot of performance improvements in 9.1, for example. Also try adjusting the CPU governor as per other comments in this thread. For the ttm de-fragmenter, see drivers/gpu/drm/ttm/ in the kernel source.
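
                      To give a feel for what a de-fragmenter pass would be doing, here is a toy compaction sketch. Everything in it is a hypothetical placeholder, not the real TTM API (in actual TTM code, moving a buffer would go through something like ttm_bo_validate() with a new placement):

                      Code:
                      #include <stdio.h>

                      /* Toy buffer object: an offset and size (in KiB) inside VRAM. */
                      struct bo { unsigned long offset, size; };

                      /* Stand-in for a real migration. */
                      static void move_bo(struct bo *bo, unsigned long dst)
                      {
                          printf("move %luK buffer: %luK -> %luK\n",
                                 bo->size, bo->offset, dst);
                          bo->offset = dst;
                      }

                      /* Slide each buffer down against the previous one so free
                         space coalesces at the top instead of being scattered
                         in holes (buffers assumed sorted by offset). */
                      int main(void)
                      {
                          struct bo bos[] = { { 0, 4096 }, { 8192, 2048 }, { 16384, 4096 } };
                          unsigned long next_free = 0;

                          for (int i = 0; i < 3; i++) {
                              if (bos[i].offset > next_free)
                                  move_bo(&bos[i], next_free);
                              next_free = bos[i].offset + bos[i].size;
                          }
                          return 0;
                      }

                      The hard parts in the real driver are everything this toy skips: buffers that are pinned or in flight, synchronizing with the GPU during moves, and deciding when a pass is worth the copy traffic.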
                      Last edited by agd5f; 09 May 2013, 09:36 AM.

