ATI R500 Gallium3D Performance In June 2010

  • #16
    If this is meant to show how much the Gallium driver has improved, I would think it would be beneficial to put on the graph a curve that represents the performance of the Gallium driver in March (for those of us who are visually inclined).

    A Catalyst curve would also be useful, to show roughly how much potential improvement is left (recognizing that optimizations get harder to come by the better the driver gets).

    • #17
      Originally posted by jrch2k8
      * some operations could be handled right now through llvmpipe until the proper code for the GPU is optimized
      Yeah, let's go with a 10x slower software rasterizer, that'll sure make a huge difference... No, really, you can't beat hw with a software-based solution, even with llvmpipe, no way. The hardware is damned fast. I think we're CPU-limited and I have an idea how to improve it...

      • #18
        Using llvmpipe for SW emulation of some operations is not really feasible on anything that isn't a tightly integrated IGP (i.e. anything that isn't Llano or Sandy Bridge).

        I'm very impressed with how r300g's coming along! Keep up the good work!

        • #19
          Also very impressed with the progress

          I'd kind of like to see how Gallium3D comes up against the actual hardware potential. If you could throw the Windows and Xorg Catalyst drivers into the comparison, take the best performance of any driver at each resolution as the estimated hardware capacity, and express Gallium3D's performance as a percentage of that, that'd be cool.
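
          Something like this, with completely made-up FPS numbers (one resolution shown), is roughly what I have in mind:

          /* Take the best FPS from any driver at a resolution as the estimated
             hardware capacity, then report each driver as a percentage of it. */
          #include <stdio.h>

          int main(void)
          {
              const char *drivers[] = { "r300g", "classic", "Catalyst" };
              const double fps[]    = { 105.0, 38.0, 240.0 };  /* invented numbers */
              double capacity = 0.0;

              for (int i = 0; i < 3; i++)
                  if (fps[i] > capacity)
                      capacity = fps[i];

              for (int i = 0; i < 3; i++)
                  printf("%-10s %6.1f fps  (%5.1f%% of estimated capacity)\n",
                         drivers[i], fps[i], 100.0 * fps[i] / capacity);
              return 0;
          }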

          • #20
            Originally posted by Sacha
            Considering FPS of ~100 - 110 it would be safe to assume VSync is off.
            Vsync wouldn't make Gallium stick to 20 FPS either.
            Aren't the fps's with VSync 120, 60 and 30? (or am I recalling completely wrong?)

            • #21
              Wait... people still use r500?

              • #22
                Originally posted by nanonyme
                Aren't the fps's with VSync 120, 60 and 30? (or am I recalling completely wrong?)
                VSync would lock the maximum fps to the Hz of your monitor. On an LCD that would most likely be 60.

                • #23
                  Yep, and if the system can't keep up with 60 Hz you would then get locked to a rate of 60 / N (i.e. 1 / (N * 1/60)), so 60, 30, 20, 15, 12, 10, etc., as a result of waiting until the next vblank.

                  Of course instantaneous frame rates vary, so you might not see one of those exact numbers on average, i.e. you could be jumping between 20 and 30 Hz.
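
                  A trivial illustration of that math (nothing driver-specific, just the 60 / N ladder at a 60 Hz refresh):

                  /* With vsync on a 60 Hz display, a frame that takes between
                     (n-1)/60 s and n/60 s to render is shown on every n-th vblank,
                     so the steady-state rate snaps to 60 / n. */
                  #include <stdio.h>

                  int main(void)
                  {
                      const double refresh_hz = 60.0;

                      for (int n = 1; n <= 6; n++)
                          printf("presenting every %d vblank(s) -> %.1f fps\n",
                                 n, refresh_hz / n);
                      return 0;
                  }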

                  • #24
                    Since the classic graphs are GPU-bound, and the Gallium ones CPU-bound, how can there be such a huge difference in 1920x1080 fps?

                    I mean, how can the GPU-bound driver (the one taking full advantage of the GPU) have that much lower fps than the CPU-bound one? Is the classic arch that bad?

                    • #25
                      The classic driver doesn't take full advantage of the hardware, and neither does r300g, though it's further along than classic. The hardware has much more to offer in terms of performance than any of the open drivers implement.

                      • #26
                        Marek, any idea of the time frame it will take to use all of the hardware? It's somewhat distressing that you now seem to be the only dev working on r300g. Corbin has disappeared, and I don't see commits from AMD's employees in Mesa.

                        • #27
                          You might want to ask a slightly different question. It might take 5,000 years to use *all* the hardware but the rate of improvement is still pretty significant. Think about something like half-life - every N months half of the remaining "unused stuff" is dealt with, but the process may go on for years.
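
                          Purely to illustrate the half-life idea with an invented figure (this is not a schedule):

                          /* If half of whatever is still "unused" got dealt with every
                             half_life_months, the remaining fraction would shrink like this. */
                          #include <stdio.h>

                          int main(void)
                          {
                              const int half_life_months = 6;  /* invented figure */
                              double unused = 1.0;  /* 100% unused at start */

                              for (int months = 0; months <= 36; months += half_life_months) {
                                  printf("after %2d months: %5.1f%% still unused\n",
                                         months, unused * 100.0);
                                  unused /= 2.0;
                              }
                              return 0;
                          }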

                          You'll see AMD employees on Mesa again as soon as we start pushing out code for Evergreen, btw.

                          • #28
                            What I wonder about is:
                            what if every (or every second) generation of AMD cards needs extra code the way Evergreen did?
                            There would be more and more different generations, while developers are still working on r300-r700 & Evergreen.

                            Does anybody know if it's going to be like this, or was this just a single case?
                            (Sorry if it's kind of a stupid question.)

                            • #29
                              Don't worry, ATI will not spend 5,000 years optimizing r300. Their open-source initiative is much newer than r300, so they had a lot of catch-up to do to support these older chips.

                              The Evergreen programming model will likely remain similar for a while (maybe until DirectX 12?). If everything goes according to plan, the older generations should be reasonably well supported by then, so ATI can focus on the newer cards right from the start.

                              • #30
                                There's no real pattern for how often the underlying architecture changes, but the saving grace is that designing an all-new architecture is godawfully expensive, so it normally doesn't happen every year.

                                R100, R200 and R300 were all pretty different from each other. R400 was fairly close to R300, but R500 was a bigger jump because the pixel shader block (aka "US") changed significantly.

                                R600 started an all-new architecture with unified shaders - that probably required more work than any previous generation. Fortunately R7xx was *very* similar from a programming POV so we were able to work on both at the same time. The changes for Evergreen aren't *that* big but a couple of other things happened at the same time:

                                - we decided to have Richard push the 6xx/7xx 3D driver to support GL2 and GLSL so we could see how many new applications would start to work

                                - since KMS was now in place, Alex spent time implementing a new set of power management code in the kernel driver

                                Both of these tasks arguably slowed down the availability of acceleration code for Evergreen, but they are "one time" delays which won't apply to future generations.

                                If you look at the "big picture" you'll see that the time between launching new hardware and availability of open source driver support (including 3D acceleration) has been going down every year and I expect that will continue to happen:

                                r3xx/4xx - launched 2002-2003, support in 2006 maybe? (3-4 yrs)
                                r5xx - launched 2005-2006, support in 2008 (2-3 yrs)
                                r6xx - launched 2007, support in 2009 (~2 yrs)
                                r7xx - launched 2008, support in 2009 (~1.5 yrs)
                                Evergreen - launched 2009, support in 2010 (should be <1 yr)

                                etc...
