R600 Open-Source Driver With GLSL, OpenGL 2.0


  • #61
    Originally posted by rbmorse View Post
    I really think you should be more concerned about the possibility that AMD will announce in the not too distant future that they've done what they promised: delivered as much documentation as they think they can get away with while staying clean with regard to their other obligations, delivered a working example of a more or less fully functional driver for each of their reasonably current hardware families, and are now turning over responsibility for continuing development of the open source driver to the "community" so they can redirect their resources to the workstation market, where they actually make some money.

    Be afraid. Be very afraid.
    I think that's what we are doing today. We focus on documentation, developer support, and implementing sample code in areas where even having documentation may not be enough to work efficiently, e.g. initial support for new GPUs. What you are describing is really how we already operate -- we're not really driving things like KMS (although Alex did most of the implementation for pre-5xx).



    • #62
      Originally posted by Qaridarium View Post
      The old Mesa-style driver only reaches about 50% of fglrx's speed; the open-source driver needs the new stack. Gallium3D + an OpenGL 3 state tracker + an LLVM compiler brings full speed! Without Gallium3D + LLVM you'll never get full speed.
      I agree that it's likely to be the post-Gallium3D stack that sees most of the performance work, but be aware that "LLVM as shader compiler" hasn't really worked out so far. On the other hand, LLVM is turning out to be very useful for generating CPU code (e.g. running vertex shaders on the CPU for no-TCL parts or for CPU/GPU load-balancing) and for higher-level operations like turning OpenCL into TGSI (i.e. LLVM running *above* the Gallium3D/TGSI API).

      The main issue is that LLVM doesn't currently support VLIW hardware, and all of the Gallium3D-eligible ATI hardware is VLIW -- either 2-operation per instruction (3xx-5xx) or 5-operation per instruction (6xx and higher).
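
      To make the VLIW point concrete, here is a rough, hypothetical sketch in plain C -- not actual driver code -- of the core problem a VLIW shader compiler has to solve: finding independent scalar operations and packing them into the slots of each instruction word. The slot count matches the 6xx description above; the toy instruction stream and register numbers are made up.

      /* Hypothetical sketch: greedy, in-order packing of scalar ops
       * into 5-slot VLIW instruction words (6xx-style). Not real
       * driver code. */
      #include <stdio.h>

      #define SLOTS 5  /* 6xx and higher: up to 5 scalar ops per word */

      typedef struct {
          int dst;        /* register written by this op */
          int src0, src1; /* registers read by this op */
      } ScalarOp;

      /* An op may join the current word only if it does not read a
       * register written earlier in the same word. */
      static int depends_on(ScalarOp op, const ScalarOp *word, int n)
      {
          for (int i = 0; i < n; i++)
              if (op.src0 == word[i].dst || op.src1 == word[i].dst)
                  return 1;
          return 0;
      }

      int main(void)
      {
          /* Toy instruction stream; ops 4 and 7 consume results
           * produced just before them, forcing new words. */
          ScalarOp ops[] = {
              {0, 8, 9}, {1, 8, 10}, {2, 9, 10}, {3, 8, 11},
              {4, 0, 1}, {5, 2, 3}, {6, 12, 13}, {7, 4, 5},
          };
          int nops = sizeof(ops) / sizeof(ops[0]);
          ScalarOp word[SLOTS];
          int used = 0, words = 0;

          for (int i = 0; i < nops; i++) {
              if (used == SLOTS || depends_on(ops[i], word, used)) {
                  printf("word %d: %d/%d slots filled\n",
                         words++, used, SLOTS);
                  used = 0;
              }
              word[used++] = ops[i];
          }
          printf("word %d: %d/%d slots filled\n", words++, used, SLOTS);
          printf("%d ops in %d words, %.0f%% slot utilization\n",
                 nops, words, 100.0 * nops / (words * SLOTS));
          return 0;
      }

      Even this naive in-order packer leaves slots empty whenever a dependency forces a new word; a production compiler also has to reorder independent operations to fill those gaps, and that kind of scheduling is exactly what stock LLVM doesn't provide for VLIW targets.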



      • #63
        Originally posted by bridgman View Post
        I agree that it's likely to be the post-Gallium3D stack that sees most of the performance work, but be aware that "LLVM as shader compiler" hasn't really worked out so far. On the other hand, LLVM is turning out to be very useful for generating CPU code (e.g. running vertex shaders on the CPU for no-TCL parts or for CPU/GPU load-balancing) and for higher-level operations like turning OpenCL into TGSI (i.e. LLVM running *above* the Gallium3D/TGSI API).

        The main issue is that LLVM doesn't currently support VLIW hardware, and all of the Gallium3D-eligible ATI hardware is VLIW -- either 2-operation per instruction (3xx-5xx) or 5-operation per instruction (6xx and higher).
        Thank you for the information, I really didn't know that.

        The open-source driver really needs a shader compiler for VLIW hardware,
        and if LLVM isn't ready yet, then we need another one, or VLIW support added to LLVM itself.

        If not, the open-source driver will never match fglrx on OpenGL speed, because fglrx has a shader compiler.

        I really should call RMS to solve the problem ;-)



        • #64
          The good news is that nha's shader compiler for 3xx-5xx is already quite a bit better than what we assumed when making our initial performance estimates for the open source driver. The compiler in the 6xx-7xx driver is not as sophisticated, but the 7xx parts in particular have a lot of shader power so I don't expect the shader compiler to be the bottleneck for another year or so.

          The "low-hanging fruit" for performance will probably be (a) reducing the amount of state information sent with every command packet, (b) improving the degree to which the driver code and GPU can operate in parallel, and (c) doing a bit of profiling and fixing/optimizing anything which jumps out as taking much more time than it should.



          • #65
            Originally posted by bridgman View Post
            The good news is that nha's shader compiler for 3xx-5xx is already quite a bit better than what we assumed when making our initial performance estimates for the open source driver. The compiler in the 6xx-7xx driver is not as sophisticated, but the 7xx parts in particular have a lot of shader power so I don't expect the shader compiler to be the bottleneck for another year or so.

            The "low-hanging fruit" for performance will probably be (a) reducing the amount of state information sent with every command packet, (b) improving the degree to which the driver code and GPU can operate in parallel, and (c) doing a bit of profiling and fixing/optimizing anything which jumps out as taking much more time than it should.
            Every now and then I get a sneaking suspicion that AMD has created a bridgman chat bot, with all the questions you seemingly answer over and over and over again. But then you come out with a good post like this and I'm reminded how awesome it is that AMD has you communicate with the community. I don't always agree, but hearing AMD's position is always interesting and in stark contrast to the wall of silence we get from nvidia.

            I'm a little concerned about the r600 compiler, because the last I saw on the mailing lists, someone had come to the conclusion that it needed to be completely rewritten from the ground up, and no one was volunteering or making it seem like that was going to happen. The argument was that the current compiler is vector-based, and he thought it should be switched to a different architecture while that is still possible, before it gets too difficult to fix.



            • #66
              That was probably nha or MostAwesomeDude talking about rewriting the compiler, and I expect they are right about it needing a rewrite at some point. The current compiler is arguably more of a translator in the sense that it works on one IL instruction at a time, but even so I expect it will be fine for a year or two. Fortunately most of the IL instructions *are* vector operations, and those give a decent level of shader utilization even without combining multiple IL instructions into a single shader instruction word.

              The current driver architecture should also allow a new compiler to plug in with relatively little effort -- if we get to the point where the driver architecture is becoming more tightly tied to the compiler internals then that should be a warning sign that it's time to get on with redesigning the compiler.

              That said, there are relatively few shader-bound apps running on Linux today, even under Wine, so the current compiler will do the job for longer than most people expect. There are some DX9 games with relatively long shaders (50+ instructions per pixel), but even those tend to have a fairly high percentage of vector instructions. As long as that continues to be the case, even an ideal shader compiler is likely to raise performance in shader-bound apps by no more than 30% over the current compiler in most cases.
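
              For anyone wondering where a ceiling like that comes from, here is a back-of-envelope sketch under assumed numbers -- 5 slots per word, vec4 IL ops filling 4 slots, 90% of IL ops being vector; all illustrative, not measured data:

              /* Hypothetical ceiling estimate: a naive one-word-per-IL-op
               * translator vs. a perfectly packing compiler. All numbers
               * are assumptions for illustration only. */
              #include <stdio.h>

              int main(void)
              {
                  double vec_fraction = 0.90; /* assumed share of vector IL ops */
                  double slots = 5.0;         /* scalar slots per instruction word */

                  /* naive: one word per IL op, however full it is */
                  double naive_words = 1.0;
                  /* ideal: every slot filled; vec4 uses 4 slots, scalar 1 */
                  double ideal_words =
                      (vec_fraction * 4.0 + (1.0 - vec_fraction) * 1.0) / slots;

                  printf("ideal speedup over naive: %.0f%%\n",
                         (naive_words / ideal_words - 1.0) * 100.0);
                  return 0;
              }

              With 90% vector instructions this prints about 35%, and pushing the vector share higher drives it toward the 25% floor of pure vec4 code in 5-slot words -- the same ballpark as the 30% figure above.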

              A better shader compiler will definitely help with the low-end parts, but once you get up to the 8- or 10-SIMD parts (e.g. HD46xx or higher) the per-clock shader-to-pixel ratio is high enough that you're likely to be ROP- or memory-limited anyway. It's the 610, 620 and IGP parts that will be first to benefit from a better shader compiler, at least when running newer games.
              Last edited by bridgman; 12-23-2009, 12:53 AM.



              • #67
                Originally posted by bridgman View Post
                The good news is that nha's shader compiler for 3xx-5xx is already quite a bit better than what we assumed when making our initial performance estimates for the open source driver.
                Good for people on a budget, yes, but from my point of view I can't see any future in the 3xx-5xx cards.
                I have some of these cards in use and they work very well.

                But for the same money a 4350, for example, has much better power efficiency.

                And for Wine, OpenGL 3.2 is the minimum; 5xx hardware can only handle OpenGL 2.1, and OpenGL 2.1 is only useful with NVIDIA extensions (a Wine-only statement).

                Game engines like XreaL (OpenGL 3+) show me that the old way is too limited by PCIe 1.0 (my workstation does not have 2.0) and by single-threaded CPU power (my workstation can take 12 cores, but the old way needs an overclocked 4 GHz Intel CPU).
                FBO-based engines like XreaL (OpenGL 3) reduce the CPU load to a minimum -- see the sketch below.

                So from my humble point of view there is no way around it: everyone should buy an R600+ card.
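
                As a point of reference, this is roughly what "FBO-based" means in code: a minimal render-to-texture setup. A hedged sketch assuming GLEW (or a similar loader) and a current OpenGL 3.x context, with error handling trimmed:

                /* Minimal framebuffer-object (render-to-texture) setup.
                 * Assumes a current GL 3.x context and GLEW for the GL 3
                 * entry points; error handling trimmed for brevity. */
                #include <GL/glew.h>

                GLuint make_color_fbo(int width, int height, GLuint *tex_out)
                {
                    GLuint fbo, tex;

                    glGenTextures(1, &tex);
                    glBindTexture(GL_TEXTURE_2D, tex);
                    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height,
                                 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
                    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

                    glGenFramebuffers(1, &fbo);
                    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
                    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                           GL_TEXTURE_2D, tex, 0);

                    if (glCheckFramebufferStatus(GL_FRAMEBUFFER)
                            != GL_FRAMEBUFFER_COMPLETE)
                        return 0; /* incomplete: caller should fall back */

                    glBindFramebuffer(GL_FRAMEBUFFER, 0); /* back to the screen */
                    *tex_out = tex;
                    return fbo;
                }

                Intermediate passes rendered this way stay on the GPU as textures instead of bouncing through the CPU, which is presumably the CPU-load reduction being described.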


                Originally posted by bridgman View Post
                The compiler in the 6xx-7xx driver is not as sophisticated, but the 7xx parts in particular have a lot of shader power so I don't expect the shader compiler to be the bottleneck for another year or so.
                Yes, I have never had any speed issues with my 320-SP 4670 on the open-source driver.
                But to run really complex games/programs the minimum is OpenGL 2.1 (native games) / GL 3.2 (Wine).
                The future will show us.


                Originally posted by bridgman View Post
                The "low-hanging fruit" for performance will probably be (a) reducing the amount of state information sent with every command packet, (b) improving the degree to which the driver code and GPU can operate in parallel, and (c) doing a bit of profiling and fixing/optimizing anything which jumps out as taking much more time than it should.
                Very nice information.

                But I would just be wasting time here: first I should learn how to program (C, asm, C++),
                and yes, do more English exercises.



                • #68
                  Qaridarium, not everyone is only interested in Wine games. Lots of people have an old laptop sitting around with an X1600 card in it that they would like to use for some more basic 3D stuff, but they can't justify going out to buy a new laptop.

                  Also, a lot of the Wine devs' statements about requiring NVIDIA extensions come down to that being the driver they developed against. If they spent the time, I'm sure they could get things running on the equivalent ATI extensions, but no one is really going back to do that work, because the hope is that they can just require GL 3.2 and forget about older cards that never worked well anyway.
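
                  To illustrate: GL 2.x-era apps typically pick their fast path by searching the extension string, so a renderer written against NVIDIA extensions just falls back to the slow path on ATI unless someone writes the equivalent branch. A rough sketch, assuming a current GL context; the extension names are only examples:

                  #include <string.h>
                  #include <GL/gl.h>

                  /* GL 2.x-style extension query; needs a current GL context.
                   * A robust version would guard against substring matches. */
                  static int has_ext(const char *name)
                  {
                      const char *all = (const char *)glGetString(GL_EXTENSIONS);
                      return all && strstr(all, name) != NULL;
                  }

                  void pick_render_path(void)
                  {
                      if (has_ext("GL_NV_register_combiners")) {
                          /* the NV-only path the app was written against */
                      } else if (has_ext("GL_ATI_fragment_shader")) {
                          /* an equivalent ATI path -- only taken if someone
                           * actually implements it */
                      } else {
                          /* generic slow fallback */
                      }
                  }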



                  • #69
                    AMD will soon deliver open graphics drivers, said Henri Richard just a few minutes ago, and the audience at the opening keynote of the Red Hat Summit broke into applause and cheers. Richard, AMD's executive vice president of sales and marketing, promised: "I'm here to commit to you that it's going to get done." He also promised that AMD is "going to be very proactive in changing the way we interface with the Linux community."

                    The open sourcing of graphics drivers will indeed be good news, but it's not a big surprise. Ever since AMD acquired graphics chip maker ATI last year, an announcement that AMD would be opening up its graphics drivers has been anticipated. The other shoe has dropped, and the folks at the Summit in San Diego are very happy. Now, the new question is "when?" Richard didn't say.
                    MAY 9 2007 12:06PM GMT itknowledgeexchange.techtarget.com
                    http://itknowledgeexchange.techtarge...phics-drivers/
                    Last edited by barbarbaron; 12-23-2009, 06:01 AM.



                    • #70
                      2007 is when ATI restarted active involvement in open source graphics. I was hired in 2007.



                      • #71
                        Originally posted by agd5f View Post
                        I was hired in 2007.
                        LOL who cares about your job?



                        • #72
                          Originally posted by barbarbaron View Post
                          LOL who cares about your job?
                          Hired by AMD to work on open source graphics.



                          • #73
                            Originally posted by barbarbaron View Post
                            LOL who cares about your job?
                            @Michael: Can this user please be banned?



                            • #74
                              Some people seem to have difficulty understanding that the whole X infrastructure has been largely overhauled these last years, and I guess it's something of a moving target for driver devs.
                              AMD's choice to support OSS drivers has also allowed them to work on the Mesa 3D stack, improving the whole infrastructure for everyone. I guess AMD could have decided to concentrate their development effort on the binary blob and, like NVIDIA, provide their OpenGL stack as binary only, completely replacing the OSS one; in that case, I suppose they could have continued to support older cards, not having to pay people to write OSS software.
                              I appreciate the fact that they decided to do otherwise and to help improve the whole OSS graphics stack.
                              I also appreciate the AMD devs' presence on this forum: it's very nice to have inside news. :-)



                              • #75
                                Originally posted by bridgman View Post
                                If you own 3xx-5xx hardware (you have an X1950?) you should get familiar with building and running the Gallium3D driver -- it's not ready for general use yet, but it's making good progress and already has GLSL and GL 2.1 enabled. At minimum you should be monitoring progress, but it wouldn't hurt to try it out periodically. You'll need a new kernel with KMS enabled; I don't remember if you are already running KMS.
                                The Gallium3D driver certainly has its flaws, but it's getting to the point where it might be usable for some games. Compiz has been working for some time, and even with some advanced effects it's completely stable and smooth. xmoto and openarena are playable, neverball too if you get lucky, and nexuiz would work as well if some non-driver-related bugs were fixed. There is so much work going on in Gallium3D that some features may stop or start working from time to time.

                                The hopefully complete TODO list for r300g can be found here in the Gallium section:
                                http://dri.freedesktop.org/wiki/R300ToDo
                                It's frequently updated, so you can get a pretty accurate idea of the current state from it. Two or three features on that list are performance improvements; once they are implemented, performance should get back on top.

                                ~ Marek

