R500 KMS performance issues

  • #11
    Hmm, every Gallium test I've done has shown it to be slower.

    Originally posted by marek View Post
    You might try the R300 Gallium3D driver; it's generally faster than the classic one and has many more features (the feature set is similar to fglrx).
    I've got an AGP RV350 card and a dual P4 Northwood PC, and for simple loads like celestia and OpenGL xscreensavers I am finding that classic Mesa is still beating Gallium hands down. That's for both "performance" and "correctness"; obviously Gallium beats classic Mesa on "features" ;-).

    For example, try right-clicking on the Earth in celestia and rotating it: it's wonderfully responsive with classic Mesa, but drags horribly with Gallium.

    Gallium renders the stars wrongly, too.

    And don't even think of trying to play World of Warcraft using Gallium, although it can be "persuaded" to play more-or-less correctly under classic Mesa if you're prepared to hack it around slightly:
    Code:
    diff --git a/src/mesa/drivers/dri/r300/r300_cmdbuf.c b/src/mesa/drivers/dri/r300
    index c40802a..7f009d9 100644
    --- a/src/mesa/drivers/dri/r300/r300_cmdbuf.c
    +++ b/src/mesa/drivers/dri/r300/r300_cmdbuf.c
    @@ -452,7 +452,7 @@ static void emit_zb_offset(GLcontext *ctx, struct radeon_sta
            uint32_t dw = atom->check(ctx, atom);
     
            rrb = radeon_get_depthbuffer(&r300->radeon);
    -       if (!rrb)
    +       if ((rrb == NULL) || (rrb->cpp == 0))
                    return;
     
            zbpitch = (rrb->pitch / rrb->cpp);
    diff --git a/src/mesa/drivers/dri/radeon/radeon_common.c b/src/mesa/drivers/dri/
    index 13f1f06..e00c995 100644
    --- a/src/mesa/drivers/dri/radeon/radeon_common.c
    +++ b/src/mesa/drivers/dri/radeon/radeon_common.c
    @@ -1126,7 +1126,7 @@ void radeonFlush(GLcontext *ctx)
                    rcommonFlushCmdBuf(radeon, __FUNCTION__);
     
     flush_front:
    -       if ((ctx->DrawBuffer->Name == 0) && radeon->front_buffer_dirty) {
    +       if ((ctx->DrawBuffer != NULL) && (ctx->DrawBuffer->Name == 0) && radeon-
                    __DRIscreen *const screen = radeon->radeonScreen->driScreen;
     
                    if (screen->dri2.loader && (screen->dri2.loader->base.version >=

  • #12
    Was that second part of the diff horizontally truncated?

  • #13
    Possibly - I'll try again...

    Originally posted by nanonyme View Post
    Was that second part of the diff horizontally truncated?
    Code:
        diff --git a/src/mesa/drivers/dri/r300/r300_cmdbuf.c b/src/mesa/drivers/dri/r300/r300_cmdbuf.c
        index c40802a..7f009d9 100644
        --- a/src/mesa/drivers/dri/r300/r300_cmdbuf.c
        +++ b/src/mesa/drivers/dri/r300/r300_cmdbuf.c
        @@ -452,7 +452,7 @@ static void emit_zb_offset(GLcontext *ctx, struct radeon_state_atom * atom)
                uint32_t dw = atom->check(ctx, atom);
         
                rrb = radeon_get_depthbuffer(&r300->radeon);
        -       if (!rrb)
        +       if ((rrb == NULL) || (rrb->cpp == 0))
                        return;
         
                zbpitch = (rrb->pitch / rrb->cpp);
        diff --git a/src/mesa/drivers/dri/radeon/radeon_common.c b/src/mesa/drivers/dri/radeon/radeon_common.c
        index 13f1f06..e00c995 100644
        --- a/src/mesa/drivers/dri/radeon/radeon_common.c
        +++ b/src/mesa/drivers/dri/radeon/radeon_common.c
        @@ -1126,7 +1126,7 @@ void radeonFlush(GLcontext *ctx)
                        rcommonFlushCmdBuf(radeon, __FUNCTION__);
         
         flush_front:
        -       if ((ctx->DrawBuffer->Name == 0) && radeon->front_buffer_dirty) {
        +       if ((ctx->DrawBuffer != NULL) && (ctx->DrawBuffer->Name == 0) && radeon->front_buffer_dirty) {
                        __DRIscreen *const screen = radeon->radeonScreen->driScreen;
         
                        if (screen->dri2.loader && (screen->dri2.loader->base.version >= 2)
    I don't know if either is "correct" in any sense other than that they stop Mesa core-dumping when I play WoW.

  • #14
    Both of them make sense anyway: division-by-zero protection and NULL-dereference protection. You probably checked that both are needed for it to work, and not just one?

  • #15
    Yes, but I still think they paper over deeper problems.

    Originally posted by nanonyme View Post
    Both of them make sense anyway: division-by-zero protection and NULL-dereference protection. You probably checked that both are needed for it to work, and not just one?
    The "division by zero" protection produces lots of "no rrb" messages in my console log, which makes me suspect that it should really be trying to divide by something non-zero instead. (Some "state" is probably not being set.)

    The "NULL protection" affects the context's clean-up path.

    These bugs have already been raised in FDO's Bugzilla as #27199 and #27141 respectively.
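    (Purely as a sketch of that suspicion: the first hunk could separate the two failure cases so that the "state not being set" one is visible rather than silently skipped. This just reuses the identifiers from the diff above and is not what either bug report proposes.)
    Code:
    rrb = radeon_get_depthbuffer(&r300->radeon);
    if (rrb == NULL)
            return;  /* no depth renderbuffer bound at all */
    if (rrb->cpp == 0) {
            /* A renderbuffer exists but its bytes-per-pixel was never set up;
             * this is the suspected missing-state case, so report it instead
             * of quietly skipping the emit. */
            fprintf(stderr, "%s: depth renderbuffer has cpp == 0\n", __FUNCTION__);
            return;
    }
    zbpitch = (rrb->pitch / rrb->cpp);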

  • #16
    Originally posted by marek View Post
    1) KMS is slower because color tiling in DDX is disabled by default. You need to enable it in xorg.conf, see "man radeon". The man page is for UMS so it lies sometimes.
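    (For reference, the quoted advice amounts to adding a single option to the existing Device section of xorg.conf, along the lines below; the exact option name and accepted values are the ones documented in "man radeon".)
    Code:
    Section "Device"
            Identifier "Configured Video Device"   # keep whatever Identifier you already use
            Driver     "radeon"
            Option     "ColorTiling" "on"
    EndSection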
    Why is it disabled? Does it have some known bugs?

  • #17
    Originally posted by chrisr View Post
    For example, try right-clicking on the Earth in celestia and rotating it: it's wonderfully responsive with classic Mesa, but drags horribly with Gallium.
    It's absolutely smooth with Gallium here. Are you sure your glxinfo says "Gallium 0.4 on RV350"? If you got "softpipe", you're not using the driver.
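    (For example, assuming glxinfo is installed, a quick check from a terminal is:)
    Code:
    glxinfo | grep -E "direct rendering|OpenGL renderer"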

    Originally posted by chrisr View Post
    Gallium renders the stars wrongly, too.
    This is a known issue.

    Originally posted by chrisr View Post
    And don't even think of trying to play World of Warcraft using Gallium
    Well, the only way to know whether it works is to try it out and see.

  • #18
    Originally posted by oibaf View Post
    Why is it disabled? Does it have some known bugs?
    Not that I know of. I guess we should enable it by default.

  • #19
    FWIW, it seems to be on by default in Fedora 13 and in the current git head of xf86-video-ati.
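    (One way to confirm what a given setup is actually doing is to grep the X log for the tiling message; the exact wording varies between driver versions, so treat this only as a starting point:)
    Code:
    grep -i tiling /var/log/Xorg.0.log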

  • #20
    I'm QUITE sure I'm using Gallium with celestia.

    Originally posted by marek View Post
    It's absolutely smooth with Gallium here. Are you sure your glxinfo says "Gallium 0.4 on RV350"?
    Code:
    OpenGL vendor string: X.Org R300 Project
    OpenGL renderer string: Gallium 0.4 on RV350
    OpenGL version string: 2.1 Mesa 7.9-devel
    OpenGL shading language version string: 1.20
    FWIW, my card is AGP with PCI ID 1002:4153.

    I suspect that celestia is suffering from FDO bug #27297 here, because the CPU usage hits the roof with Gallium: a load average of 1.0, versus ~0.2 with classic Mesa. And that's with me just sitting here watching it.
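    (For anyone wanting to reproduce the comparison, it is enough to leave celestia idling and watch the one-minute load average for a while, e.g.:)
    Code:
    watch -n 5 uptime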
