XPress 200M troubles with git mesa/xf86-video-ati

  • #1

    Hi,

    I've been running the OSS driver for almost a year now and it keeps getting faster with each release. Great!

    1st Bug :
    Yesterday, I recompiled mesa and got a great performance boost, enough to actually be able to play. So I tried to stress the driver and see what happens. Stability is excellent; I haven't been able to crash it while playing.
    But when I try to change the resolution in-game, the screen switches to the new resolution properly, but the game content is gone and the computer seems to be stuck.

    In my Xorg logs, nothing seems to be wrong; these are the lines added to the log when I launch a game:
    Code:
    init memmap
    init common
    init crtc1
    init pll1
    freq: 78750000
    best_freq: 78760000
    best_feedback_div: 33
    best_frac_feedback_div: 0
    best_ref_div: 2
    best_post_div: 3
    restore memmap
    (II) RADEON(0): RADEONRestoreMemMapRegisters() : 
    (II) RADEON(0):   MC_FB_LOCATION   : 0x7fff7800 0x7fff7800
    (II) RADEON(0):   MC_AGP_LOCATION  : 0x81ff8000
    restore common
    restore crtc1
    restore pll1
    finished PLL1
    set RMX
    set TVDAC
    enable TVDAC
    disable LVDS
    So, any idea of what could be going wrong?

    2nd Bug :
    When I look at the radeon feature matrix on the freedesktop wiki, I see that textured video should be faster than overlay video.
    In fact, it depends on the video size and on the screen size.
    I should add that I'm using a 26" screen running at a resolution of 1920x1200.

    For example, when playing an HD video (720p) fullscreen, the video plays on my screen but I get some tearing.
    When watching a small video fullscreen, the framerate drops dramatically to 3 or 4 fps.

    The video overlay works great in both cases, but it looks awful on my screen (it looked great with my previous 19" screen). What I get is a lot of aliasing when playing at the native video size. In full screen it gets smoother, but still not as good as textured video.

    So, here is my guess: it seems like the video overlay is a texture the size of the screen that is resized to fit the video player's window. That could explain why I get poor video quality, since the video card would not be resizing the picture using a cubic algorithm. Am I wrong?

    I really hope there is room for improvement; I don't want to change my computer now :s

    Once again, I am ready to help and try whatever you would like me to try. A friend and I may help this summer with an implementation of VDPAU in Gallium3D. He applied for it as a GSoC project, but was refused.

  • #2
    A lot of games can't handle having the resolution changed under them. Are you changing res via game controls or... ?

    The "textured video is faster than overlay" info is not correct, even though it's in a wiki

    "Better" might be more appropriate; you have more filtering options and can use it with a compositor, but slow chips may not have enough horsepower to render the video and do the compositing. Are you running with Compiz on or off ? At minimum you might want to try "Unredirect Fullscreen Windows". I don't remember the option for disabling bicubic filtering on the Xv scaleup but that might also be worth a try.

    The video overlay is a hardware buffer the size of the video stream's native resolution, scaled up in hardware then fed into the display stream going to your screen (bypassing the frame buffer). Filtering depends on the chip, not sure if the filtering level goes down as the screen res goes up but I imagine something has to give somewhere.
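
    If you want to check what the textured adapter actually exposes on your setup, I believe plain xvinfo will dump the Xv adaptors along with their attribute lists (going from memory here, so treat the exact output format as a guess):
    Code:
    # list the Xv adaptors and the attributes each one exposes;
    # look for names like XV_VSYNC or XV_BICUBIC in the output
    xvinfo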
    Last edited by bridgman; 10 May 2009, 10:57 AM.


    • #3
      Originally posted by bridgman:
      A lot of games can't handle having the resolution changed under them. Are you changing res via game controls or... ?
      Yes, I'm changing the resolution via the game controls.

      Originally posted by bridgman:
      The "textured video is faster than overlay" info is not correct, even though it's in a wiki

      "Better" might be more appropriate; you have more filtering options and can use it with a compositor, but slow chips may not have enough horsepower to render the video and do the compositing.

      The video overlay is a hardware buffer the size of the video stream's native resolution, scaled up in hardware then fed into the display stream going to your screen (bypassing the frame buffer). Filtering depends on the chip, not sure if the filtering level goes down as the screen res goes up but I imagine something has to give somewhere.
      OK, are there any Xorg/DRI options that would give me better filtering with "video overlay", or lower the filtering with "textured video"?
      Otherwise, is textured video really that demanding? Would a replacement like VDPAU let me get a smooth video experience, or is there no chance of that for me?



      • #4
        APIs like VDPAU add GPU support for decode, but they all end up using shaders for the final (Xv) stages, so VDPAU would only help if you were CPU-limited. Then again, there's no decoding hardware in your GPU other than the old IDCT engine, so VDPAU / VA-API / XvBA aren't really options anyways.


        • #5
          Originally posted by bridgman:
          APIs like VDPAU add GPU support for decode, but they all end up using shaders for the final (Xv) stages, so VDPAU would only help if you were CPU-limited. Then again, there's no decoding hardware in your GPU other than the old IDCT engine, so VDPAU / VA-API / XvBA aren't really options anyways.
          Argh, so there's no way to change the filtering of "overlay video" or "textured video" through options in xorg.conf?

          Thanks a lot for the explanation.



          • #6
            Possibly not via the conf, but according to the radeon man page you can control bicubic via xvattr:

            .SH TEXTURED VIDEO ATTRIBUTES
            The driver supports the following X11 Xv attributes for Textured Video.
            You can use the "xvattr" tool to query/set those attributes at runtime.

            .TP
            .BI "XV_VSYNC"
            XV_VSYNC is used to control whether textured adapter synchronizes
            the screen update to the monitor vertical refresh to eliminate tearing.
            It has two values: 'off'(0) and 'on'(1). The default is
            .B 'on'(1).

            .TP
            .BI "XV_BICUBIC"
            XV_BICUBIC is used to control whether textured adapter should apply
            a bicubic filter to smooth the output. It has three values: 'off'(0), 'on'(1)
            and 'auto'(2). 'off' means never apply the filter, 'on' means always apply
            the filter and 'auto' means apply the filter only if the X and Y
            sizes are scaled to more than double to avoid blurred output. Bicubic
            filtering is not currently compatible with other Xv attributes like hue,
            contrast, and brightness, and must be disabled to use those attributes.
            The default is
            .B 'auto'(2).
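
            So something along these lines should toggle them at runtime (attribute names are from the man page above; the exact xvattr invocation is from memory, so double-check against your xvattr's help output):
            Code:
            # turn bicubic filtering off on the textured Xv adapter
            xvattr -a XV_BICUBIC -v 0
            # and/or disable the vsync wait to see whether the frame rate improves
            xvattr -a XV_VSYNC -v 0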


            • #7
              Thanks a lot! It helped a bit, but it's still not sufficient.

              I would have hoped the GPU had more horsepower :s
              I'll cope with it as well as possible and wait until the r700 gets proper 3D support.



              • #8
                It wouldn't hurt to check CPU utilization and clock speed. The Xpress 200 uses system memory, so the CPU clock can affect how fast the GPU can run.

                IIRC the Xpress 200 is a 2-pipe chip running at a few hundred MHz, so in an absolutely perfect world it can write maybe 600M pixels/sec. At your screen res you're writing over 120M/sec just for a basic Xv pass; if you add a couple of texture reads that's probably getting close to maxing out the available bandwidth since the CPU is still pushing a lot of memory around as well.
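
                Rough back-of-the-envelope numbers behind that estimate (assuming roughly a 300 MHz core and a 60 Hz refresh, so take them as ballpark figures only):
                Code:
                1920 x 1200           =  ~2.3M pixels per frame
                2.3M pixels x 60 Hz   = ~138M pixels/sec for one full-screen write
                2 pipes x ~300 MHz    = ~600M pixels/sec theoretical peak fill rate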

                Anyways, make sure you're not maxed out on CPU and that the CPU is running at full speed. It wouldn't hurt to try turning off the XV_VSYNC option as well; vsync slows down rendering by making it wait for the screen refresh so that redraws happen in the right relationship to it.
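
                On most distros you can sanity-check the CPU side with something like this (the sysfs paths can vary by kernel version and cpufreq governor, so adjust as needed):
                Code:
                # current frequency and governor for the first core
                cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
                cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
                # or just watch the MHz the kernel reports while the video plays
                grep MHz /proc/cpuinfo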


                • #9
                  I checked the clock speed; it was already at its maximum.
                  But CPU utilisation is above 50% and sometimes pegged at 50% (so, one core busy full time).

                  Maybe mesa/gallium3D can help on this front, right?



                  • #10
                    Mesa (OpenGL) on its own probably won't help, but decode acceleration written over Gallium3D could reduce CPU utilization at the expense of higher GPU load. If your GPU does turn out to be the bottleneck then putting more load on the GPU may not work for you. Try using xvattr to turn off the XV_VSYNC attribute; that may get your frame rate up at the expense of more tearing.

                    The best solution for your performance problem might be for us to trade displays -- I'll send you my 15" display in return for your 24" screen
                    Last edited by bridgman; 10 May 2009, 05:22 PM.