radeon with DRI2 slower?


  • #16
    Originally posted by madman2k View Post
    using driconf has no effect. Neither has disabling vsync in openarena (where there is an option to do so). I probably should ask on IRC or something
    to give the answer from IRC:
    (12:41:30) suokko: vsync is not on, but dri2 defaults to tear-free rendering
    (12:41:50) suokko: That means that dri2 doesn't update the screen while scanout is reading the location that has to be updated
    (12:42:12) suokko: Which effectively amounts to vsync for full-screen applications
    (12:42:43) suokko: At least for radeon there is no simple option to disable that yet
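
    In other words, the driver conceptually does something like this before touching the front buffer (just a sketch of the idea as I understand it; the helper names are made up, this is not the actual radeon code):

    Code:
       /* Sketch of the tear-free idea suokko describes: before copying an
        * updated region to the front buffer, wait until scanout (the "beam")
        * is no longer reading that region.  Both helpers are hypothetical
        * stand-ins, not real driver API. */
       struct region { int y1, y2; };         /* vertical extent of the update */

       extern int  read_crtc_scanline(void);  /* hypothetical: current scanout line */
       extern void copy_to_front(struct region *r);

       static void tear_free_blit(struct region *r)
       {
          /* Wait while the beam is inside the region we want to update.  For a
           * full-screen update this is effectively a wait for vblank, which is
           * why it behaves like vsync. */
          for (;;) {
             int line = read_crtc_scanline();
             if (line < r->y1 || line > r->y2)
                break;
          }
          copy_to_front(r);
       }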

    • #17
      Originally posted by NSLW View Post
      What I want is the best performance on Fedora 11, and I suspect that FBO is required for that. In my reasoning glxgears shows how well the card performs. The card gets considerably more fps in Ubuntu, so Ubuntu is better; or maybe I shouldn't look at the fps value during my comparison?
      Looking at fps is fine, but do it with a workload similar to what you really want to do with the card. Run a game in benchmark mode or something... that way you have a better chance of getting numbers which will be meaningful.

      What you are seeing here is that as drivers become more sophisticated it's not unusual for them to run faster on complex real-world tasks but slower on artificial, simple tasks. The glxgears program is a classic "artificial, simple task" where more of the time is spent doing clears and buffer swaps than doing 3D drawing, and the DRI2 drivers aren't as fast as DRI1 on those operations. They are slower on other workloads as well (due to being new and un-optimized), but over time I think you will see performance on real-world apps increase out of proportion to performance on apps like glxgears.

      There is also the issue mentioned just above - that the DRI2 code includes some tear-free support which also reduces the frame rate a bit but improves your overall experience.

      So... I guess there are three separate issues here:

      - newness of the DRI2 stack (this will improve with time, but not so much with things like glxgears)
      - design of the stack shifting performance profile from "being fast at simple stuff" to "being fast at complex stuff"
      - no way yet to turn off the tear-free stuff and trade performance for tearing like DRI1
      Last edited by bridgman; 09-20-2009, 10:51 AM.

      • #18
        Originally posted by NSLW View Post
        In my reasoning glxgears shows how well the card performs. The card gets considerably more fps in Ubuntu, so Ubuntu is better; or maybe I shouldn't look at the fps value during my comparison?
        Otherwise you're exactly right, except you shouldn't be reading fps from glxgears but, like bridgman recommended, from some game you play. glxgears framerates are utterly uninteresting to everyone except, apparently, the free3d guys (which is why I recommended not reading that site; it gives silly ideas).

        • #19
          In fairness, the free3d site was most active a few years ago, when it was harder to find real-world apps you could run across a broad range of drivers. It does desperately need to be updated though...

          • #20
            Originally posted by bridgman View Post
            In fairness, the free3d site was most active a few years ago, when it was harder to find real-world apps you could run across a broad range of drivers. It does desperately need to be updated though...
            I tried. It's run by people who refuse to budge an inch on changing the site and are firm believers that glxgears fps is meaningful. Better to just avoid it altogether.

            • #21
              Actually, I just noticed that the site has been updated very recently and includes numbers for 5xx and 6xx hardware as well. There is a FAQ that states pretty clearly that they understand glxgears is not a good benchmark, so that's a start.

              Oh well... that's the curse of the internet. Anyone who makes the effort to put up a big collection of useful information ends up getting abuse a couple of years later when the world has changed but their information has become the canonical reference for anyone searching for answers. Retesting everything would be a big task, even with glxgears, but maybe one or two lines at the start of the page might be a good compromise.

              EDIT - I guess in the meantime we could tweak glxgears to add an option to make it at least vaguely useful as a benchmark, by drawing the gears 50 times between calls to glXSwapBuffers or something. It would still suck (if only because every draw would have the same Z values) but would definitely suck less.
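
              Something like this, roughly (an untested sketch against a glxgears-style main loop; draw_gears(), dpy and win stand in for whatever the real source calls them, and -loops is an invented option name):

              Code:
                 #include <GL/gl.h>
                 #include <GL/glx.h>

                 /* Stand-ins for the existing glxgears setup and gear-drawing code;
                  * the names are illustrative, not the actual source. */
                 extern Display *dpy;
                 extern Window win;
                 extern void draw_gears(void);

                 /* Would be set from a hypothetical -loops command line option. */
                 static int loops = 50;

                 static void draw_frame(void)
                 {
                    int i;

                    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
                    for (i = 0; i < loops; i++)
                       draw_gears();            /* N draws (all with the same Z values)... */
                    glXSwapBuffers(dpy, win);   /* ...per single clear and swap */
                 }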
              Last edited by bridgman; 09-20-2009, 11:47 AM.

              • #22
                Originally posted by bridgman View Post
                EDIT - I guess in the meantime we could tweak glxgears to add an option to make it at least vaguely useful as a benchmark, by drawing the gears 50 times between calls to glXSwapBuffers or something. It would still suck (if only because every draw would have the same Z values) but would definitely suck less.
                Or we could disable the fps counter again and have it instead output useful stuff like the OpenGL renderer (like most of the other demos do).

                • #23
                  Originally posted by nanonyme View Post
                  Or we could disable the fps counter again and have it instead output useful stuff like the OpenGL renderer (like most of the other demos do).
                  I think there's an option for that already.

                  • #24
                    Code:
                    #define BENCHMARK
                    
                    #ifdef BENCHMARK
                    
                    /* XXX this probably isn't very portable */
                    ...
                    You mean this?
                    P.S. I honestly don't know why that code is enabled instead of removed, since the authors know full well that it's platform-dependent and useless.
                    Last edited by nanonyme; 09-20-2009, 11:54 AM.

                    • #25
                      I was thinking of the "-info" option:

                      Code:
                         if (printInfo) {
                            printf("GL_RENDERER   = %s\n", (char *) glGetString(GL_RENDERER));
                            printf("GL_VERSION    = %s\n", (char *) glGetString(GL_VERSION));
                            printf("GL_VENDOR     = %s\n", (char *) glGetString(GL_VENDOR));
                            printf("GL_EXTENSIONS = %s\n", (char *) glGetString(GL_EXTENSIONS));
                         }
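
                      (That's what prints when you run glxgears -info; the GL_RENDERER line is the quickest way to tell whether you're on the hardware driver or have fallen back to software rendering.)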

                      • #26
                        Ah, right. Well, that looks almost fine to me, except I'm not sure it'd make sense to output the extensions by default; that takes quite a lot of space.
                        Just out of interest I decided to try what would happen if you removed the benchmark trigger. Apparently the whole of glxgears breaks down...
                        In any case, the thing that should apparently be used for this kind of stuff is gears, not glxgears: glxgears contains unportable code for calculating fps (and this might not be possible to fix), whereas gears uses GLUT to gather the necessary information.
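
                        For what it's worth, the portable GLUT way of doing the timing would look roughly like this (my sketch built around glutGet(GLUT_ELAPSED_TIME), not the actual gears source):

                        Code:
                           /* Minimal sketch of portable fps timing via GLUT, in the
                            * spirit of gears; not the actual gears source. */
                           #include <GL/glut.h>
                           #include <stdio.h>

                           static int frames = 0;
                           static int t0 = 0;

                           static void draw(void)
                           {
                              int t;

                              glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
                              /* ... gear drawing would go here ... */
                              glutSwapBuffers();

                              frames++;
                              t = glutGet(GLUT_ELAPSED_TIME);  /* ms since glutInit, portable */
                              if (t - t0 >= 5000) {            /* report every five seconds */
                                 double seconds = (t - t0) / 1000.0;
                                 printf("%d frames in %.1f seconds = %.3f FPS\n",
                                        frames, seconds, frames / seconds);
                                 t0 = t;
                                 frames = 0;
                              }
                           }

                           int main(int argc, char **argv)
                           {
                              glutInit(&argc, argv);
                              glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
                              glutCreateWindow("gears-style timing");
                              glutDisplayFunc(draw);
                              glutIdleFunc(glutPostRedisplay);  /* keep redrawing */
                              glutMainLoop();
                              return 0;
                           }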

                        • #27
                          Originally posted by nanonyme View Post
                          Ah, right. Well, that looks almost fine to me, except I'm not sure it'd make sense to output the extensions by default; that takes quite a lot of space.
                          Just out of interest I decided to try what would happen if you removed the benchmark trigger. Apparently the whole of glxgears breaks down...
                          In any case, the thing that should apparently be used for this kind of stuff is gears, not glxgears: glxgears contains unportable code for calculating fps (and this might not be possible to fix), whereas gears uses GLUT to gather the necessary information.
                          Let's just start requesting that distributions ship gears, and instead use tunnel or engine. They at least demand a bit more rendering power from older hardware. They're still useless for benchmarking, but a bit better.

                          • #28
                            Originally posted by amphigory View Post
                            kernelOfTruth, you are one lucky guy to be in Vienna. I'd give my eye teeth to live there... even though I cannot abide Sachertorte.

                            Off-Topic:

                            it's probably only half as great when you're living here - not being here as a tourist - but I'm still loving it

                            there's really a lot to discover in this city

                            I think we'd find something other than Sachertorte that you'd like

                            from what I've seen so far Seattle is also a pretty nice city

                            On-Topic:

                            last time I tried KMS it was significantly slower than non-KMS and still pretty unstable

                            reading the latest topics it seems to have stabilized significantly,

                            now I'll have to wait until 2.6.32 is ready (rc6+) and for fglrx support for 2.6.32, so that I can switch between the two in case it isn't stable or fast enough yet

                            great work guys!
