
ATI R500 Gallium3D Performance In June 2010


  • #21
    Wait... people still use r500?



    • #22
      Originally posted by nanonyme:
      Aren't the fps's with VSync 120, 60 and 30? (or am I recalling completely wrong?)
      VSync would lock the maximum fps to the Hz of your monitor. On an LCD that would most likely be 60.



      • #23
        Yep, and if the system can't keep up with 60 Hz you then get locked to a frequency of 1 / (N * (1/60)), so 60, 30, 20, 15, 12, 10, etc., as a result of waiting until the next vblank.

        Of course instantaneous frame rates vary, so you might not see one of those exact numbers on average, i.e. you could be jumping between 20 and 30 Hz.
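
        To make the quantization concrete, here is a minimal C sketch (the 60 Hz refresh and the per-frame render times below are illustrative numbers, not figures from the article): with vsync the driver waits for the next vblank, so the effective rate snaps to refresh / N.

        #include <math.h>
        #include <stdio.h>

        /* Illustrative only: with vsync a finished frame is shown at the next
         * vblank, so the effective rate is refresh / N, where N is the number
         * of refresh intervals the frame's render time spans. */
        static double vsync_fps(double refresh_hz, double render_ms)
        {
            double interval_ms = 1000.0 / refresh_hz;  /* one vblank period  */
            double n = ceil(render_ms / interval_ms);  /* vblanks waited for */
            if (n < 1.0)
                n = 1.0;
            return refresh_hz / n;                     /* 60, 30, 20, 15, 12... */
        }

        int main(void)
        {
            double render_ms[] = { 10.0, 20.0, 40.0, 55.0, 70.0 }; /* ms per frame */
            for (unsigned i = 0; i < sizeof(render_ms) / sizeof(render_ms[0]); i++)
                printf("%.0f ms/frame -> %.1f fps\n",
                       render_ms[i], vsync_fps(60.0, render_ms[i]));
            return 0;
        }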


        • #24
          Since the classic graphs are GPU-bound and the Gallium ones CPU-bound, how can there be such a huge difference in the 1920x1080 fps?

          I mean, how can the GPU-bound driver (taking full advantage of the GPU) have that much lower fps than the CPU-bound one? Is the classic architecture that bad?



          • #25
            The classic driver doesn't take full advantage of the hardware, and neither does r300g, though it's further along than classic. The hardware has much more to offer in terms of performance than any of the open drivers implement.



            • #26
              Marek, any idea how long it will take to use all of the hardware? It's somewhat distressing that you now seem to be the only dev working on r300g. Corbin has disappeared, and I don't see commits from AMD's employees on Mesa.



              • #27
                You might want to ask a slightly different question. It might take 5,000 years to use *all* the hardware but the rate of improvement is still pretty significant. Think about something like half-life - every N months half of the remaining "unused stuff" is dealt with, but the process may go on for years.
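
                To put a rough number on that half-life analogy, here is a minimal C sketch (the 6-month half-life is a completely made-up figure, purely for illustration): the fraction of hardware capability still unused after t months is simply 0.5^(t / N).

                #include <math.h>
                #include <stdio.h>

                /* Illustration of the "half-life" analogy: if half of the
                 * remaining unused capability is implemented every N months,
                 * then 0.5^(t / N) of it is still unused after t months.
                 * The 6-month period is hypothetical. */
                int main(void)
                {
                    const double half_life_months = 6.0; /* made-up value */
                    for (int t = 0; t <= 36; t += 6) {
                        double unused = pow(0.5, t / half_life_months);
                        printf("after %2d months: %4.1f%% still unused\n",
                               t, unused * 100.0);
                    }
                    return 0;
                }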

                You'll see AMD employees on Mesa again as soon as we start pushing out code for Evergreen, btw.


                • #28
                  What I wonder about is: what if every (or every second) generation of AMD cards needs extra code like Evergreen? There would be more and more different generations, but developers would still be working on r300-r700 and Evergreen.

                  Does anybody know if it's going to be like this, or was this just a one-off case? (Sorry if it's kind of a stupid question.)



                  • #29
                    Don't worry, ATI will not spend 5,000 years optimizing r300. Their open-source initiative is much newer than r300, so they had a lot of catching up to do to support these older chips.

                    The Evergreen programming model will likely remain similar for a while (maybe until DirectX 12?). If everything goes according to plan, the older generations should be reasonably well supported by then, so ATI can focus on the newer cards right from the start.



                    • #30
                      There's no real pattern for how often the underlying architecture changes, but the saving grace is that designing an all-new architecture is godawfully expensive, so it normally doesn't happen every year.

                      R100, R200 and R300 were all pretty different from each other. R400 was fairly close to R300, but R500 was a bigger jump because the pixel shader block (aka "US") changed significantly.

                      R600 started an all-new architecture with unified shaders - that probably required more work than any previous generation. Fortunately, R7xx was *very* similar from a programming POV, so we were able to work on both at the same time. The changes for Evergreen aren't *that* big, but a couple of other things happened at the same time:

                      - we decided to have Richard push the 6xx/7xx 3D driver to support GL2 and GLSL so we could see how many new applications would start to work

                      - since KMS was now in place, Alex spent time implementing a new set of power management code in the kernel driver

                      Both of these tasks arguably slowed down the availability of acceleration code for Evergreen, but they are "one time" delays which won't apply to future generations.

                      If you look at the "big picture" you'll see that the time between launching new hardware and the availability of open-source driver support (including 3D acceleration) has been going down every year, and I expect that will continue to happen:

                      r3xx/4xx - launched 2002-2003, support in 2006 maybe? (3-4 yrs)
                      r5xx - launched 2005-2006, support in 2008 (2-3 yrs)
                      r6xx - launched 2007, support in 2009 (~2 yrs)
                      r7xx - launched 2008, support in 2009 (~1.5 yrs)
                      Evergreen - launched 2009, support in 2010 (should be <1 yr)

                      etc...
