ATI's Gallium3D Driver Is Still Playing Catch-Up


  • #16
    I think Tormod just posted a link to a packaged build on another thread. I have to run right now but will look for it later if you don't find it.



    • #17
      Here you go:

      Originally posted by tormod View Post
      xorg-edgers to the rescue
      http://launchpad.net/~xorg-edgers/archive/radeon
      Currently, these mesa packages install on plain lucid (and not on top of the main xorg-edgers PPA), but since this might change later, please check the PPA page description before upgrading from it or asking questions.

      ...

      (great, 1 minute edit window and brain-dead mobile browser)
      Correct URL: https://launchpad.net/~xorg-edgers/+archive/radeon



      • #18
        Wow, thanks! Is that lonely mesa package in that repo the only thing needed to install the driver?



        • #19
          The main r300g->r300c gap closer was reworking the vertex/index buffer submission to be a lot more efficient, like the r300c code. I think there is probably still a bit more upside here, but I haven't had a chance to find it yet.
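
          For context, a rough plain-OpenGL sketch (not driver code) of the indexed-draw path this submission work services - each glDrawElements call below ends up as vertex/index buffer state plus a draw packet that the driver has to build and submit:

          Code:
          /* Rough sketch only: ordinary VBO + index-buffer drawing.  The
           * driver-side command-stream packing described above is not shown. */
          #define GL_GLEXT_PROTOTYPES
          #include <GL/gl.h>
          #include <GL/glext.h>

          static GLuint vbo, ibo;

          void setup_buffers(const float *verts, int nverts,
                             const unsigned short *indices, int nindices)
          {
              glGenBuffers(1, &vbo);
              glBindBuffer(GL_ARRAY_BUFFER, vbo);
              glBufferData(GL_ARRAY_BUFFER, nverts * 3 * sizeof(float),
                           verts, GL_STATIC_DRAW);

              glGenBuffers(1, &ibo);
              glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
              glBufferData(GL_ELEMENT_ARRAY_BUFFER, nindices * sizeof(unsigned short),
                           indices, GL_STATIC_DRAW);
          }

          void draw(int nindices)
          {
              glBindBuffer(GL_ARRAY_BUFFER, vbo);
              glEnableClientState(GL_VERTEX_ARRAY);
              glVertexPointer(3, GL_FLOAT, 0, (const void *)0);

              glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
              /* One indexed draw; keeping the per-draw submission cost of
               * these low is the optimisation being discussed. */
              glDrawElements(GL_TRIANGLES, nindices, GL_UNSIGNED_SHORT, (const void *)0);

              glDisableClientState(GL_VERTEX_ARRAY);
          }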

          I think the remaining speed-ups, in roughly the order they'll get done, won't be driver code optimisations so much as GPU feature usage:

          1. Color tiling - need to enable by default and fix regressions
          2. Z clears
          3. Color buffer fastfill clears
          4. Proper HiZ support.
          5. Better GLSL compiler. Intel are working on a new GLSL compiler front end; that plus the r300g GSoC project should produce something a fair bit nicer.

          Dave.



          • #20
            Hi

            I have the same laptop (Lenovo T60)
            and tried the Gallium3D PPA;
            it works perfectly well with Warcraft 3 (Wine).
            Keep up the good work!

            greetings



            • #21
              The Warsow benchmark alone is worth a party. Thanks, guys. You are the best.



              • #22
                What impressed me the most is how consistent the results were.
                Well, OK, the graphs were mostly flat too, but the tests showed no real trade-offs. Is that because of the panel of tested apps?



                • #23
                  Yey! In another 5 years, we will have 3D drivers worthy of that name for my HD4870. Can't wait



                  • #24
                    What about adding general apps instead of so many games? Blender, Wings, Meshlab... pick some big 3D models and check how many FPS the app can manage while displaying them, or pick an FPS limit like 25 and check how many polygons it takes before the counter drops below that. Or even check whether the drivers support all the features the apps request, or have to use software fallbacks (radeon and stipples, anyone?).

                    Some of those apps even include benchmarks already, and they can also be used for rendering tests, not just OpenGL ones. Generic tests (beyond this article) could include rendering workloads with Aqsis, POVRay...

                    Yeah, some of us don't focus on games.
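
                    To make that concrete, here is a rough sketch (hypothetical placeholder functions, not tied to any particular app) of the measurement described: redraw a fixed scene N times, report the average FPS, and grow the model until it drops below the chosen limit.

                    Code:
                    /* Rough sketch of the benchmark idea above.  draw_scene() and
                     * swap_buffers() are hypothetical stand-ins for whatever the app
                     * (Blender, Wings, Meshlab, ...) does per viewport redraw. */
                    #include <stdio.h>
                    #include <time.h>

                    static void draw_scene(void)   { /* placeholder: redraw the loaded model */ }
                    static void swap_buffers(void) { /* placeholder: present the frame */ }

                    static double benchmark_fps(int frames)
                    {
                        struct timespec t0, t1;
                        clock_gettime(CLOCK_MONOTONIC, &t0);
                        for (int i = 0; i < frames; i++) {
                            draw_scene();
                            swap_buffers();
                        }
                        clock_gettime(CLOCK_MONOTONIC, &t1);
                        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
                        return frames / secs;
                    }

                    int main(void)
                    {
                        /* e.g. keep adding geometry until this falls below 25 FPS */
                        printf("average FPS: %.1f\n", benchmark_fps(500));
                        return 0;
                    }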



                    • #25
                      Originally posted by bugmenot View Post
                      radeon and stipples anyone?
                      No, r300g doesn't support stippling and it behaves like stippling is always disabled. Do you need it? There are no software fallbacks (and we don't plan any) so if a feature works, it runs at full speed.
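
                      For reference, this is the legacy-GL feature in question (a minimal sketch, not Blender's actual code): the kind of dashed lines a modelling or CAD viewport draws. On a driver that ignores stippling, the line simply renders solid.

                      Code:
                      /* Minimal sketch of legacy OpenGL line stippling. */
                      #include <GL/gl.h>

                      void draw_dashed_line(float x0, float y0, float x1, float y1)
                      {
                          glEnable(GL_LINE_STIPPLE);
                          glLineStipple(1, 0x0F0F);   /* repeat factor 1; 4 pixels on, 4 off */
                          glBegin(GL_LINES);
                          glVertex2f(x0, y0);
                          glVertex2f(x1, y1);
                          glEnd();
                          glDisable(GL_LINE_STIPPLE);
                      }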



                      • #26
                        Performance-wise, one of the most appealing traits of Gallium3D was (or is) the use of LLVM as a shader optimizer. But as far as I know, it isn't something that is being used as of now.

                        I did read something about implementing it, its shortcomings, and the things that had to be worked around to use it. So I wanted to ask - and everyone who cares about performance (I certainly do) should want to know - how is that being handled for r300g?

                        I see great potential for it. Clang (based on LLVM) compiles small programs (larger than a shader, of course) in 5 seconds, with speed equal to or greater than a more recent GCC. Feasible, I guess, for shaders and the like. And I think LLVM should be capable of doing better optimizations than the ATI fglrx driver, shouldn't it?

                        So my question is: how is implementing LLVM being considered, and how could it affect performance? Bridgman and Marek, care to elaborate?



                        • #27
                          Originally posted by WillyThePimp View Post
                          So my question is: how is implementing LLVM being considered, and how could it affect performance? Bridgman and Marek, care to elaborate?
                          No. Not worth it for r300. We already have nearly optimal shader compilation in terms of performance.



                          • #28
                            Originally posted by marek View Post
                            No, r300g doesn't support stippling and it behaves like stippling is always disabled. Do you need it? There are no software fallbacks (and we don't plan any) so if a feature works, it runs at full speed.
                            Yes, it's needed by programs like Blender, which uses stippled lines to distinguish different kinds of things in the 3D view. I know there are software fallbacks in r300 classic; without stippling the interface can't give the user full information, so it really matters. The same happens in other CAD programs.

                            Please consider that games and a handful of minor apps aren't the only things drivers should work with; try "serious work" apps too, like 3D suites, CAD and similar. My impression (or should I say experience?) is that games just go for eye candy, while work apps are the ones that really show the quality of drivers and cards: they cover all the fields (Blender has GLSL, for example) and can push a computer to the limit (they won't whine if your model has 1M polys... your computer and card will first). They also do things like selection operations that hardly ever happen in games. Etc, etc...

                            If you don't know how to use those programs, ask their communities; we will gladly help get the drivers to a usable level. Thanks for the work up to now, in any case.



                            • #29
                              Originally posted by WillyThePimp View Post
                              So my question is: how is implementing LLVM being considered, and how could it affect performance? Bridgman and Marek, care to elaborate?
                              The back end of LLVM didn't seem to be a good fit for architectures where a single instruction word included multiple instructions, which is the case for 3xx-5xx (vector + scalar) and for 6xx+ (up to 5 scalar instructions).

                              As MostAwesomeDude said, the existing shader compiler in the r300 and r300g drivers seems pretty good - it might not be as good as the one in fglrx, but my guess is that it's pretty close.

                              Right now there are no signs that the shader compiler is the bottleneck. The performance graphs indicate that the driver is CPU limited quite a bit of the time, and as airlied said there are also some GPU optimizations still to be done.

                              LLVM may turn out to be more useful for single-instruction architectures like the Intel and NVidia GPUs, not sure.
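
                              As a purely hypothetical illustration of that "multiple instructions per word" point (a sketch, not output from the actual compiler): in a fragment shader like the one below, r3xx-r5xx hardware can co-issue the independent RGB (vector) math and alpha (scalar) math within the same ALU instruction words - exactly the packing a generic scalar back end has to be taught about.

                              Code:
                              /* Hypothetical example shader: independent vec3 and scalar work that
                               * the r300 compiler can pair onto the vector (RGB) and scalar (alpha)
                               * units of a single instruction slot. */
                              static const char *frag_src =
                                  "uniform vec3  light_color;\n"
                                  "uniform float shininess;\n"
                                  "varying vec3  diffuse;\n"
                                  "varying float spec;\n"
                                  "void main() {\n"
                                  "    vec3  d = diffuse * light_color;  /* vector (RGB) unit   */\n"
                                  "    float s = pow(spec, shininess);   /* scalar (alpha) unit */\n"
                                  "    gl_FragColor = vec4(d + vec3(s), 1.0);\n"
                                  "}\n";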



                              • #30
                                Originally posted by bugmenot View Post
                                Yes, it's needed by programs like Blender, which uses stippled lines to distinguish different kinds of things in the 3D view. I know there are software fallbacks in r300 classic; without stippling the interface can't give the user full information, so it really matters. The same happens in other CAD programs.

                                Please consider that games and a handful of minor apps aren't the only things drivers should work with; try "serious work" apps too, like 3D suites, CAD and similar. My impression (or should I say experience?) is that games just go for eye candy, while work apps are the ones that really show the quality of drivers and cards: they cover all the fields (Blender has GLSL, for example) and can push a computer to the limit (they won't whine if your model has 1M polys... your computer and card will first). They also do things like selection operations that hardly ever happen in games. Etc, etc...

                                If you don't know how to use those programs, ask their communities; we will gladly help get the drivers to a usable level. Thanks for the work up to now, in any case.
                                I do Blender. I'm kind of fail at it, but I can do some simple stuff.

                                The reason that we avoided stipples is a combination of not many apps using it and the fact that we don't have it working in HW yet. I'll put more effort towards it and see if we can get something working soon.

                                As far as large vert counts go, we do just fine on that front; if you've got a real-world problem there, let us know.

