
Radeon Gallium3D Still Long Shot From Catalyst


  • #11
    Originally posted by log0 View Post
    Btw I think one could actually use api traces of games as benchmarks. This would additionally ensure that the same call paths are executed, no fall-backs or workarounds for specific hardware taken.
    Regrettably, this doesn't work. When you profile an apitrace replay, you find that a huge portion of the profile is simply apitrace parsing the multigigabyte trace file.

    Comment


    • #12
      Originally posted by Qaridarium
      LOL you always can buy a faster card but you can't buy a open catalyst
      Sure you can, it just costs about $10 bazillion.

      Comment


      • #13
        I think it's very promising. The Xonotic benchmarks are very pleasing.

        My guess from the benchmarks is that there is still some stuff falling back to software that is killing performance for certain things. With some optimization of applications and some missing pieces filled in on the driver side, we'll be golden. Once open source gets within about 70-80% of proprietary, I'd call it a success.

        Comment


        • #14
          Originally posted by mattst88 View Post
          Regrettably, this doesn't work. When you profile an apitrace replay, you find that a huge portion of the profile is simply apitrace parsing the multigigabyte trace file.
          Hmm, I've got a couple of traces from games and my own stuff (20-70 fps, 100-400 MB), and they take about the same time to retrace as to run live.

          I just did a quick run with vdrift: about 2 min, 130 MB trace. The frame rate without tracing is about 22 fps, with tracing 17 fps, and retracing 15 fps (68% of the original). Are my results atypical?

          As I see it, the slowdown would be the same for all benchmarked cards and we are interested in the relative performance only.
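          The arithmetic behind that last point can be sketched in a few lines of Python (the fps figures are the ones from the vdrift run above; the two-card comparison uses made-up numbers):

```python
# Retrace overhead from the vdrift run: 15 fps replayed vs 22 fps live.
live_fps = 22.0
retrace_fps = 15.0
overhead = retrace_fps / live_fps
print(f"retrace runs at {overhead:.0%} of live speed")  # 68%

# If the overhead factor is the same on every card, it cancels out
# when we only compare cards against each other (hypothetical numbers).
card_a_retrace = 30.0
card_b_retrace = 45.0
print(f"card B is {card_b_retrace / card_a_retrace:.2f}x card A")  # 1.50x
```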

          Comment


          • #15
            Buy lots of RAM and store the apitrace in a Snappy- or LZ4-compressed ramdisk. That should provide a faster load time for the apitrace...
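            A rough back-of-the-envelope for that idea. Every number here is an assumption for illustration (trace size, disk speed, LZ4 ratio and decompression rate), not a measurement:

```python
trace_mb = 4000.0          # multi-gigabyte trace (assumed size)
disk_mb_s = 120.0          # sequential disk read speed (assumed)
lz4_ratio = 2.0            # LZ4 compression ratio on trace data (assumed)
lz4_decomp_mb_s = 1500.0   # LZ4 decompression throughput (assumed)

ram_needed = trace_mb / lz4_ratio          # compressed copy held in the ramdisk
from_disk = trace_mb / disk_mb_s           # plain uncompressed read from disk
from_ramdisk = trace_mb / lz4_decomp_mb_s  # RAM read is ~free; decompression dominates
print(f"plain disk:         {from_disk:.1f} s")
print(f"compressed ramdisk: {from_ramdisk:.1f} s (needs {ram_needed:.0f} MB of RAM)")
```

Under those assumptions the load time drops by an order of magnitude, though it only helps if parsing, not I/O, isn't the real bottleneck.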

            Comment


            • #16
              Originally posted by drag View Post
              I think it's very promising. The Xonotic benchmarks are very pleasing.

              My guess from the benchmarks is that there is still some stuff falling back to software that is killing performance for certain things. With some optimization of applications and some missing pieces filled in on the driver side, we'll be golden. Once open source gets within about 70-80% of proprietary, I'd call it a success.
              Indeed. The thing about open source drivers is that they can be debugged. It is possible to find out where they are slow, and then further optimise those parts of the code.

              After adding HiZ and further chasing down performance bottlenecks in the open source code, performance can be expected to reach perhaps 80% of the closed binary drivers. Since almost no one needs 200 fps, and the difference between 160 fps and 200 fps is all but imperceptible anyway, the performance issue with open source drivers will essentially be solved.
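              The 160 vs 200 fps point is easy to make concrete in frame times (a small sketch, nothing assumed beyond the fps figures above):

```python
def frame_time_ms(fps: float) -> float:
    """Milliseconds spent rendering one frame at a given frame rate."""
    return 1000.0 / fps

print(frame_time_ms(200))                       # 5.0
print(frame_time_ms(160))                       # 6.25
print(frame_time_ms(160) - frame_time_ms(200))  # 1.25 ms per frame
```

A 40 fps gap at the top end is only 1.25 ms per frame; the same 40 fps gap between 30 and 70 fps would be over 19 ms, which is why fps differences matter less the higher you go.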

              Comment


              • #17
                Bridgman, given that GCN moved to hardware scheduling, I assume the lack of an advanced compiler in Mesa becomes less of a bottleneck. How would you estimate the effect of that move?

                E.g. do you see GCN cards getting to 80% of Catalyst, whereas earlier generations get 70%, etc.?

                Comment


                • #18
                  Yeah, I don't have any real numbers, but from a pure shader compiler POV my guess is that half the gap between the open source and proprietary drivers might go away with GCN.

                  For compute the impact will probably be even greater (since graphics is naturally short-vector work while compute is naturally scalar). We're also picking up some compiler improvements at the same time by using LLVM, so it could get interesting.

                  The bigger question is how much of today's performance delta comes from the shader compiler rather than from things like HyperZ, since the impact of both increases with display resolution.
                  Last edited by bridgman; 24 March 2012, 12:03 PM.

                  Comment


                  • #19
                    I also think the opposite will happen with nouveau/Kepler, because they removed hardware scheduling there. Half the fps on a newer-gen card, on a shader-heavy workload?

                    Comment


                    • #20
                      I've always wondered why ATI/AMD doesn't just hire an additional five developers for OSS development. I'd assume that it would take 6 months of training to get them to the point where they could produce something useful, but we'd see real results by the end of a year, and have a performant replacement for Catalyst in two years.

                      JB,

                      What's the deal with that? $750k buys a team for two years. Does the revenue from Linux-related sales not justify the cost? (Admittedly, I have no idea how much of AMD's revenue is generated via Linux-related sales, nor do I understand how your SD org is run.) I do know that disappointed customers are far less likely to make subsequent purchases, so this is probably something that should have been done a couple of years ago, when Gallium was coming about.
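                      Spelling out the $750k figure (the per-developer rate is just what the number implies, not actual salary data):

```python
devs = 5
years = 2
cost_per_dev_year = 75_000   # implied by the $750k figure, not a real quote
total = devs * years * cost_per_dev_year
print(f"${total:,}")  # $750,000
```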

                      On a slightly related note, I'm a bit disheartened to see everyone working so hard on legacy technology. I really thought that we would all have 10-bit/chan monitors by now. I really thought that we would all have ray tracing by now. I really thought that 'everyone' would be able to play back a 1080p Main-Profile H264 file by now. Even if I had one of the dozen 10-bit/chan panels on the market, I doubt I'd be able to drive the thing with X/Mesa (I could be totally wrong). I don't want to diminish the efforts of everyone working on radeon, but when the next CG generation or innovation becomes mainstream, we're going to be back at the starting line again.

                      What a strange world we live in.

                      F

                      Comment
