Max 3D texture size in the radeon driver

  • Max 3D texture size in the radeon driver

    Hi, Gurus,

    I have an ATI R300 card in my T43. When I set up Compiz Fusion with Mesa 7.6 and the radeon driver that comes with Ubuntu 9.10 on a dual-monitor setup with a total virtual resolution of 3080*1050 (1400*1050 + 1680*1050), I get a warning that the max texture size is 2048. If I ignore the check, Compiz only works on a 2048*1050 area, and the rest of the screen is an ugly tearing stripe. I have seen some online work-arounds that suggest stacking the monitors vertically, but my vertical total would be 2100, which also exceeds the limit.

    Now, if I understand correctly how the hardware works, each pixel is mapped to a memory block on the video card (to store what color it should show). Since 2048*2048 > 3080*1050, the limitation is not really a hardware limitation; it is a problem with the driver. And it seems the solution should not be too complicated: instead of using a two-dimensional array (which is easy but wastes space), one extra function is needed to convert positions to a single linear array and back (roughly as sketched at the end of this post).

    Correct me if I am wrong. If I am correct, is there a chance anyone could fix the problem?
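
    For illustration, a minimal sketch (plain C; the names and the assumption of a simple row-major buffer are mine) of the kind of conversion function I have in mind:

        /* Hypothetical sketch: map a 2D pixel position on the extended
         * desktop to an offset in one linear memory block, and back.
         * Assumes a plain row-major layout with a fixed virtual width. */
        #include <stddef.h>

        #define VIRTUAL_WIDTH  3080U   /* 1400 + 1680 */
        #define VIRTUAL_HEIGHT 1050U

        static size_t xy_to_offset(unsigned x, unsigned y)
        {
            return (size_t)y * VIRTUAL_WIDTH + x;   /* rows stored one after another */
        }

        static void offset_to_xy(size_t off, unsigned *x, unsigned *y)
        {
            *x = (unsigned)(off % VIRTUAL_WIDTH);
            *y = (unsigned)(off / VIRTUAL_WIDTH);
        }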

  • #2
    It's not that simple. Your GPU doesn't treat the buffer as linear, but actually as two-dimensional. Writing a wrapper function won't work, as the functionality you'd need to change is implemented in hardware.

    One could extend compiz to make it draw to multiple smaller textures with OpenGL first, then do some unaccelerated trickery to copy each into the scanout buffer (roughly along the lines of the sketch at the end of this post). But that's complicated and very slow.

    Long story short: you're out of luck. I'm not sure whether XRender has the same limitations, though; maybe kwin4 in XRender mode works better for you?
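
    To make the "multiple smaller textures" idea concrete, here is a rough sketch (plain C, purely illustrative, not compiz code) of how a 3080*1050 desktop would have to be cut into tiles that each fit under a 2048-per-axis limit; rendering each tile and then copying it into the scanout buffer is the slow part:

        /* Illustrative only: split a large virtual desktop into tiles that
         * each fit within the hardware's per-axis texture limit. */
        #include <stdio.h>

        #define MAX_TEX 2048
        #define DESK_W  3080
        #define DESK_H  1050

        int main(void)
        {
            for (int y = 0; y < DESK_H; y += MAX_TEX) {
                for (int x = 0; x < DESK_W; x += MAX_TEX) {
                    int w = (DESK_W - x < MAX_TEX) ? DESK_W - x : MAX_TEX;
                    int h = (DESK_H - y < MAX_TEX) ? DESK_H - y : MAX_TEX;
                    /* each (x, y, w, h) rectangle would get its own texture,
                     * then be copied into the scanout buffer afterwards */
                    printf("tile at (%d,%d), size %dx%d\n", x, y, w, h);
                }
            }
            return 0;
        }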

    • #3
      Thanks for the reply.

      When you say "Your GPU doesn't treat memory as linear...", can you be more specific?

      From my previous experience with assembly code, the screen will draw a point if you write an int value to an address. (That was way back in my undergrad course, but I assume the architecture should still be the same.)

      I don't know what you mean by the GPU NOT treating memory as linear. The memory space does NOT have any dimensions; all dimensions are for indexing purposes.

      Looking at it from another point of view: in Windows, I can run a 3D application on both of my screens (Google Earth would be my example). In Linux, however, 3D rendering only works partially on my second screen (which I assume exceeds the limit), and I only see half of the earth in Google Earth if I move the application to the second screen. So it does not look like a problem with the hardware.

      If you're talking about using some function embedded in hardware (which takes a 2D array as an input parameter) to do the drawing, can we convert the pixel mapping before we pass it into that embedded function? In that case it would be a conversion from a 2D array (actual pixel positions) to a 2D array (memory block positions).
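
      For reference, a rough illustration (C, deliberately simplified, not the real radeon layout) of the difference between the linear framebuffer model from that assembly course and a tiled layout, where a pixel's address depends on which fixed-size tile it falls into rather than on a single y*pitch + x formula:

          /* Simplified comparison of linear vs. tiled addressing.  The tile
           * size and layout here are made up for illustration; real GPU
           * tiling formats are more involved. */
          #include <stddef.h>

          #define PITCH   3080U   /* pixels per scanline */
          #define TILE_W  8U      /* made-up tile dimensions */
          #define TILE_H  8U
          #define TILES_PER_ROW ((PITCH + TILE_W - 1U) / TILE_W)

          /* Old-school linear framebuffer: one flat array, one formula. */
          static size_t linear_offset(unsigned x, unsigned y)
          {
              return (size_t)y * PITCH + x;
          }

          /* Tiled layout: pixels are grouped into TILE_W x TILE_H blocks that
           * are stored contiguously, and the blocks themselves are laid out
           * row by row. */
          static size_t tiled_offset(unsigned x, unsigned y)
          {
              size_t tile  = (size_t)(y / TILE_H) * TILES_PER_ROW + (x / TILE_W);
              size_t local = (size_t)(y % TILE_H) * TILE_W + (x % TILE_W);
              return tile * (TILE_W * TILE_H) + local;
          }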

      • #4
        Hmm, are you two trying to talk about graphics memory tiling?

        • #5
          Also, people don't really use the whole extended desktop for 3D applications; usually the window is only maximized to one monitor. So is there a way to update the two monitors separately?

          • #6
            The GPU hardware operates in 2D space, i.e. it uses separate X and Y counters. The texture limits (and render target limits) come from the X and Y counters.

            It is certainly possible to remove (or at least minimize) the dependency on texture size, but it would need more than just driver changes. Most of the current stack operates on the RandR model, where multiple monitors provide viewports into a single large 2D area which spans all the screens. The "shatter" work is one way to minimize the impact of texture limits, as is the "Multiview" feature built into the fglrx drivers, but neither of those are simple.
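
            To illustrate the per-axis nature of the limit (a small, purely illustrative C check, not actual driver or compiz code): a 3080*1050 virtual screen fails even though it contains far fewer pixels than 2048*2048, because each dimension has to fit on its own.

                #include <stdbool.h>
                #include <stdio.h>

                #define MAX_TEX 2048   /* per-axis limit reported for this hardware class */

                /* The X and Y counters are limited independently, so each
                 * dimension must fit by itself -- total pixel count does not
                 * matter. */
                static bool fits_in_one_texture(int w, int h)
                {
                    return w <= MAX_TEX && h <= MAX_TEX;
                }

                int main(void)
                {
                    printf("3080x1050 fits: %d (total pixels %d vs. %d)\n",
                           fits_in_one_texture(3080, 1050),
                           3080 * 1050, 2048 * 2048);
                    return 0;
                }
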
            Last edited by bridgman; 12 October 2009, 01:51 PM.

            • #7
              Originally posted by nanonyme View Post
              Hmm, are you two trying to talk about graphics memory tiling?
              Hi, do you know exactly which embedded function causes this limitation? I mean, what are its inputs and outputs?

              From my understanding, there is no problem with drawing on the whole extended desktop (since drawing is all 2D stuff, and my 2D extended desktop has no problem at all). Rather, there is a limitation in an embedded 3D processing function. If the GPU is simply doing the calculation (not the drawing), then it would be possible to first pass the information about monitor 1 to that function and use its return value to draw monitor 1, and then pass the values for monitor 2.

              Since drawing to the screen is not a time-consuming thing, separating the monitors would not be too inefficient (at worst, performance is cut in half when the extended desktop is used). But then the screen-size limitation would apply to one monitor only, not to the two as a whole.

              • #8
                Originally posted by bridgman View Post
                The GPU hardware operates in 2D space, i.e. it uses separate X and Y counters. The texture limits (and render target limits) come from the X and Y counters.

                It is certainly possible to remove (or at least minimize) the dependency on texture size, but it would need more than just driver changes. Most of the current stack operates on the RandR model, where multiple monitors provide viewports into a single large 2D area which spans all the screens. The "shatter" work is one way to minimize the impact of texture limits, as is the "Multiview" feature built into the fglrx drivers, but neither of those are simple.
                If I don't need a stretched desktop cube across the whole screen, do you think calculating the two screens separately would work? It seems the only application that requires 3D drawing over the whole extended desktop is the cube.

                • #9
                  Originally posted by grazzmudhorze View Post
                  Hi, do you know exactly which embedded function causes this limitation? I mean, what are its inputs and outputs?
                  Currently the viewspace is a single x * y block of memory that you have to fit all monitors into (and you can very easily run into problems getting them to fit). What you want is something like this: http://corbinsimpson.com/content/shattered It'll be done when it's ready.

                  • #10
                    Originally posted by grazzmudhorze View Post
                    From my previous experience with assembly code, the screen will draw a point if you write an int value to an address. (That was way back in my undergrad course, but I assume the architecture should still be the same.)
                    that course wasn't in this century, was it?

                    Originally posted by grazzmudhorze View Post
                    Also, people don't really use the whole extended desktop for 3D applications; usually the window is only maximized to one monitor. So is there a way to update the two monitors separately?
                    Sure, zaphod mode. You'll lose the ability to drag windows across the screens, though, since they become completely independent. IIRC zaphod mode with current Xorg / open-source drivers has some trouble due to xrandr changes, though.
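
                    In case it helps, a rough sketch of what a zaphod-style xorg.conf can look like (the BusID, output names, and options here are examples for the radeon driver and depend on your card and driver version, so treat this as a starting point, not a recipe):

                        Section "Device"
                            Identifier "Radeon-0"
                            Driver     "radeon"
                            BusID      "PCI:1:0:0"            # example; check lspci
                            Screen     0
                            Option     "ZaphodHeads" "LVDS"    # bind this instance to one output
                        EndSection

                        Section "Device"
                            Identifier "Radeon-1"
                            Driver     "radeon"
                            BusID      "PCI:1:0:0"
                            Screen     1
                            Option     "ZaphodHeads" "VGA-0"
                        EndSection

                        Section "Screen"
                            Identifier "Screen-0"
                            Device     "Radeon-0"
                        EndSection

                        Section "Screen"
                            Identifier "Screen-1"
                            Device     "Radeon-1"
                        EndSection

                        Section "ServerLayout"
                            Identifier "Zaphod"
                            Screen     0 "Screen-0"
                            Screen     1 "Screen-1" RightOf "Screen-0"
                        EndSection

                    Each Screen then becomes a separate X screen (:0.0 and :0.1), so 3D applications are limited by one monitor's size rather than by the combined desktop.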
