R600 Gallium3D Disables LLVM Back-End By Default

#21
Originally posted by Ericg View Post
LLVM brings to the table quick compilation
It's slower at compiling shaders than SB. Being faster than GCC doesn't really mean much.

#22
Again: SB can be enabled with LLVM, too. Why is everybody, including AMD/X.org developers, ignoring this and claiming SB is faster than LLVM? And if you think it can't be enabled with LLVM, why are there bug reports for the LLVM backend showing differences with and without SB (like this one: https://bugs.freedesktop.org/show_bug.cgi?id=76954)? Again: IIRC SB runs after the shader has been compiled, so it doesn't matter whether the shader was compiled by r600g's built-in compiler or by LLVM.
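For anyone who wants to try the combination being described, here is a minimal sketch. It assumes the debug flag names Mesa's r600g used around this time ("sb" and "llvm" in the comma-separated R600_DEBUG list) and that the variable is parsed when the driver screen is created, so it must be set before any GL context exists:

Code:
/* Minimal sketch, assuming the r600g debug flags of this era:
 * R600_DEBUG is a comma-separated list that Mesa parses when the
 * driver screen is created, so it must be set before GL is brought
 * up. "sb" enables the SB optimizer, "llvm" the LLVM back-end. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Equivalent to launching the app as: R600_DEBUG=sb,llvm ./app */
    if (setenv("R600_DEBUG", "sb,llvm", 1) != 0) {
        perror("setenv");
        return 1;
    }
    printf("R600_DEBUG=%s\n", getenv("R600_DEBUG"));

    /* ...initialize GLX/EGL and create the GL context as usual;
     * with both flags set, SB post-processes the LLVM output. */
    return 0;
}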

#23
In the long term, it doesn't make sense to use the LLVM and SB backends together, so the option isn't even considered. Both of them do a similar set of optimizations, and most importantly, the latter backend reverses the effect of some optimizations done by the former, so a lot of the time spent in LLVM is for nothing.

#24
marek: Thanks for explaining, that makes sense. May I ask what the long-term plan is: to use the built-in compiler or LLVM?

#25
Originally posted by TAXI View Post
marek: Thanks for explaining, that makes sense. May I ask what the long-term plan is: to use the built-in compiler or LLVM?
For SI and up, LLVM.
For R600, LLVM for OpenCL and the built-in compiler for OpenGL.

#26
Originally posted by agd5f View Post
For SI and up, LLVM.
For R600, LLVM for OpenCL and the built-in compiler for OpenGL.
Hey Alex, totally unrelated to the LLVM topic, but I wanted to catch you while I could...

Last night my 7850K came in and I was playing around with it on a Fedora 20 x64 hard drive, and I noticed something possibly odd. In actually released code, what version of OpenGL should RadeonSI be up to?

The latest Fedora 20 updates are:

Mesa 10.0.4
LLVM 3.3.4
Kernel 3.13.10
X Server 1.14.4

glxinfo 1) crashed X and KWin, and 2) reported GLSL 1.30 or 1.50 (I forget which), but also reported OpenGL 2.1.

The vendor was listed as AMD Kaveri and direct rendering was yes, so nothing shouted "You're on software rendering!", but I would've expected at least OpenGL 3.1 to be listed, not 2.1.

Later that night I wiped the drive to test out Windows (sorry >.> it was my only spare drive), but does that seem "right" to you?
All opinions are my own, not those of my employer, if you know who they are.
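For reference, the versions in question are just what glGetString() returns for the current context. A minimal sketch of the query glxinfo performs, using only standard Xlib/GLX calls, looks like this; note that a legacy context is a compatibility context, so Mesa may report a lower GL version here than a core-profile context would:

Code:
/* A minimal sketch of the query glxinfo performs: create a plain
 * (legacy) GLX context and print the strings discussed above.
 * Build with: cc glver.c -o glver -lGL -lX11 */
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/gl.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

    int visual_attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy),
                                      visual_attribs);
    if (!vi) { fprintf(stderr, "no suitable visual\n"); return 1; }

    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);

    /* The window never needs to be mapped; it only backs the context. */
    XSetWindowAttributes swa = {0};
    swa.colormap = XCreateColormap(dpy, DefaultRootWindow(dpy),
                                   vi->visual, AllocNone);
    Window win = XCreateWindow(dpy, DefaultRootWindow(dpy), 0, 0, 16, 16,
                               0, vi->depth, InputOutput, vi->visual,
                               CWColormap, &swa);
    glXMakeCurrent(dpy, win, ctx);

    /* These are the fields Ericg is quoting from glxinfo: */
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
    printf("GLSL:        %s\n",
           (const char *)glGetString(GL_SHADING_LANGUAGE_VERSION));

    glXMakeCurrent(dpy, None, NULL);
    glXDestroyContext(dpy, ctx);
    XCloseDisplay(dpy);
    return 0;
}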

#27
Originally posted by Ericg View Post
The vendor was listed as AMD Kaveri and direct rendering was yes, so nothing shouted "You're on software rendering!", but I would've expected at least OpenGL 3.1 to be listed, not 2.1.
Try updating your version of Mesa. I think you need Mesa 10.1 and LLVM 3.4 for OpenGL 3.1 support on radeonsi. For OpenGL 3.3 support you need Mesa from git and LLVM 3.5.
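One caveat worth adding (my understanding of Mesa at the time, not something Alex states): versions above GL 3.0 were only exposed to core-profile contexts, so once the newer Mesa/LLVM bits are installed, a 3.3 context has to be requested explicitly. A hedged sketch using the standard GLX_ARB_create_context extension:

Code:
/* Hedged sketch: request a 3.3 core profile explicitly via
 * GLX_ARB_create_context. If the driver can't do 3.3, creation
 * fails with a GLX error. Build with: cc core33.c -lGL -lX11 */
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/gl.h>
#include <GL/glx.h>

/* Standard extension tokens, in case the local glx.h lacks them. */
#ifndef GLX_CONTEXT_MAJOR_VERSION_ARB
#define GLX_CONTEXT_MAJOR_VERSION_ARB    0x2091
#define GLX_CONTEXT_MINOR_VERSION_ARB    0x2092
#endif
#ifndef GLX_CONTEXT_PROFILE_MASK_ARB
#define GLX_CONTEXT_PROFILE_MASK_ARB     0x9126
#define GLX_CONTEXT_CORE_PROFILE_BIT_ARB 0x00000001
#endif

/* Local typedef for the extension's function pointer. */
typedef GLXContext (*CreateContextAttribs)(Display *, GLXFBConfig,
                                           GLXContext, Bool, const int *);

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    int nconf = 0;
    GLXFBConfig *conf = glXChooseFBConfig(dpy, DefaultScreen(dpy),
                                          NULL, &nconf);
    if (!conf || nconf == 0) return 1;

    CreateContextAttribs create = (CreateContextAttribs)glXGetProcAddress(
        (const GLubyte *)"glXCreateContextAttribsARB");
    if (!create) return 1;

    int attribs[] = {
        GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
        GLX_CONTEXT_MINOR_VERSION_ARB, 3,
        GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
        None
    };
    GLXContext ctx = create(dpy, conf[0], NULL, True, attribs);
    if (!ctx) { fprintf(stderr, "no 3.3 core context\n"); return 1; }

    XVisualInfo *vi = glXGetVisualFromFBConfig(dpy, conf[0]);
    XSetWindowAttributes swa = {0};
    swa.colormap = XCreateColormap(dpy, DefaultRootWindow(dpy),
                                   vi->visual, AllocNone);
    Window win = XCreateWindow(dpy, DefaultRootWindow(dpy), 0, 0, 16, 16,
                               0, vi->depth, InputOutput, vi->visual,
                               CWColormap, &swa);
    glXMakeCurrent(dpy, win, ctx);

    /* With Mesa 10.1+/LLVM 3.4 this should print 3.1+; with Mesa from
     * git and LLVM 3.5, 3.3 (per the versions Alex lists above). */
    printf("core context GL_VERSION: %s\n",
           (const char *)glGetString(GL_VERSION));
    return 0;
}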

#28
Originally posted by Ericg View Post
LLVM 3.3.4
I know I'm not Alex, but I think you need a higher LLVM version (a higher Mesa version, like 10.2-devel, may help, too). With both up to date you should get OpenGL 3.3.

//EDIT: Alex was faster than me.

#29
Originally posted by agd5f View Post
Try updating your version of Mesa. I think you need Mesa 10.1 and LLVM 3.4 for OpenGL 3.1 support on radeonsi. For OpenGL 3.3 support you need Mesa from git and LLVM 3.5.
Okay, I was just curious. All the proper parts should be coming in within a couple of weeks, so I'll retest things. Thanks for the response.
All opinions are my own, not those of my employer, if you know who they are.

#30
Originally posted by marek View Post
In the long term, it doesn't make sense to use the LLVM and SB backends together, so the option isn't even considered. Both of them do a similar set of optimizations, and most importantly, the latter backend reverses the effect of some optimizations done by the former, so a lot of the time spent in LLVM is for nothing.
Honestly, this option probably makes sense for the apps that hit register allocation issues with the stock backend; LLVM can solve some issues of that kind. On the other hand, I had really hoped that the LLVM backend would get enough attention and eventually become the main backend. Now it looks like it's abandoned. That's a bit unexpected for me.
I'm still considering supporting SB, adding geometry shaders, etc., but I really have no time. Anyway, I tried hard to warn about this before upstreaming it. But I think it still works in most cases and improves performance, and that's exactly what people want.
FWIW, in some tests LLVM may improve performance, but in most tests LLVM made the code less optimizable, simply because some heuristics in SB were tuned for the stock backend and LLVM was hiding optimization opportunities. There were also other LLVM codegen problems that resulted in asserts in SB, because SB checks the correctness of its input. Anyway, I always suggested not using LLVM with SB, because that combination was less tested and less stable.
There is a branch in my repo that solved the register allocation issues with SB by removing the stock r600 backend from the equation and introducing a direct translator from TGSI to SB, but no one was interested enough.

It could be updated to replace the default backend with minimal effort (well, geometry shaders would need some work).

Honestly, I don't like to push my solutions, so if no one wants it, I don't care.
1) I'll try to improve it to support geometry shaders etc., but if no one else is interested... I've switched to an SI card for gaming and so on, so I'm not very interested in working on R600.
2) If anyone is interested in supporting it, I'll be happy to provide all the help I can.
3) Supporting a backend for four generations of R600 cards was a huge amount of trouble, because I had only one card. Great thanks to the people from all over the world who tested it, sent me debug reports, tested the patches, and sent me the logs again, and so on. But making it work with all the chips took more time than the initial implementation. I simply have no time right now to test and fix any improvements on all the other cards. I'm not AMD.
4) People are asking me about a backend for SI. I believe the LLVM compiler works a lot better for SI than it did for the R600 VLIW architecture, which is why I'm not really sure a custom compiler would help with newer cards. I'll probably look into it if I have time and if it becomes obvious that the SI compiler is not very efficient, but so far that's not the case.
5) The apps that fail due to register allocation issues with SB are mostly not optimized at all. AAA apps work fine, but some crazy shaders in crazy apps fail because they are really crazy. Please ask the game developers to optimize them. I can help if needed, so feel free to give them my email.
