NIR Still Being Discussed For Mesa, LLVM Gets Brought Up Again

  • #21
    Originally posted by sthalik View Post
    Unruly??? Have you been snorting blocks?
    wtf have you been snorting?

    by unruly, I mean lack of stable ABI, use of globals and static constructors, etc.. not really the thing that you look for in a dependency.
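
    To make "globals and static constructors" concrete, here is a minimal hypothetical sketch of the pattern; it is not actual LLVM code, and the names (RegisterOption, registry) are made up, but it has the same shape of problem: a process-wide table filled in before main() by static constructors, which runs at load time whether the host application wanted it or not, and falls over if two copies of the library end up in one process.

        // Hypothetical sketch of the "globals + static constructors" pattern.
        // NOT LLVM source; it just illustrates the same kind of hazard.
        #include <cstdio>
        #include <cstdlib>
        #include <map>
        #include <string>

        // One process-wide table, populated before main() by static constructors.
        static std::map<std::string, int> &registry() {
            static std::map<std::string, int> r;
            return r;
        }

        struct RegisterOption {
            RegisterOption(const char *name, int def) {
                // If the same name shows up twice -- say the application and a GL
                // driver each drag in their own copy of the library -- abort.
                if (!registry().emplace(name, def).second) {
                    std::fprintf(stderr, "option '%s' registered more than once\n", name);
                    std::abort();
                }
            }
        };

        // Runs at library load time, whether or not the host application asked for it.
        static RegisterOption opt_unroll("unroll-threshold", 150);

        int main() {
            std::printf("registered %zu option(s)\n", registry().size());
        }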

    • #22
      Originally posted by robclark View Post
      wtf have you been snorting?

      by unruly, I mean lack of stable ABI, use of globals and static constructors, etc.. not really the thing that you look for in a dependency.
      Also, the recommended model of linking to it is to import a copy of the 2,319,451 lines of code into your source tree. Congratulations: you now maintain a fork of a huge compiler, as well as a complete graphics driver stack!

      • #23
        I am wondering how much work it would be to hook up radeonsi to Glassy Mesa, skipping the LLVM IR to GLSL IR translation layer they are currently using to feed the i965 driver, and using LLVM IR directly.

        • #24
          Originally posted by daniels View Post
          Also, the recommended model of linking to it is to import a copy of the 2,319,451 lines of code into your source tree. Congratulations: you now maintain a fork of a huge compiler, as well as a complete graphics driver stack!
          And, well, even if you don't bring it into the mesa src tree (which would at least allow for easier patching if needed), the current recommended way to cope w/ LLVM seems to be to statically link against it.

          btw, I noticed the webkit folks even made a list of why LLVM is an unruly dependency: http://blog.llvm.org/2014/07/ftl-web...bkit-with-llvm .. the situation for a GL driver (i.e. mesa) seems like it would be even worse than for webkit, since GL is itself intended to be linked into applications.

          Anyways.. I do think it would be very useful for many projects to have something like LLVM that was actually sane to use as a library. But afaict that does not exist. So at this point the discussion is mostly about whether LLVM brings enough benefit (especially for GPUs, which tend toward the weirder end of the spectrum of compiler targets) to justify the pain for developers and end users.
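
          For illustration of the "linked into applications" point, here is a minimal hypothetical sketch (dlopen/dlsym are standard POSIX calls; "libGL.so.1" is just the usual soname): whatever the GL implementation links, statically embedded compiler included, ends up living in the application's own address space, next to the application's own libraries and possibly its own copy of LLVM.

              // Build with: g++ app.cpp -ldl
              #include <dlfcn.h>
              #include <cstdio>

              int main() {
                  // The application pulls the GL implementation -- plus anything the
                  // driver statically embeds -- into its own process.
                  void *gl = dlopen("libGL.so.1", RTLD_NOW | RTLD_GLOBAL);
                  if (!gl) {
                      std::fprintf(stderr, "dlopen: %s\n", dlerror());
                      return 1;
                  }
                  // Any globals / static constructors in that copy have already run by
                  // now, in the same address space as the app's own code.
                  void *fn = dlsym(gl, "glGetString");
                  std::printf("glGetString lives at %p in this process\n", fn);
                  dlclose(gl);
                  return 0;
              }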

          • #25
            Originally posted by robclark View Post
            ... So at this point the discussion is mostly about whether LLVM brings enough benefit (especially for GPUs, which tend toward the weirder end of the spectrum of compiler targets) to justify the pain for developers and end users.
            It is definitely going to get more complex in the future. FPUs were once a separate chip on the board, and now GPUs have made it into the CPUs, too (aka APUs). It is a matter of time until compilers can produce instructions for a CPU and a GPU seamlessly, regardless of whether it is for 3D graphics, video compression, cryptography or just for sorting numbers. It is already being done for MMX, SSE and AVX instructions, and the logical conclusion of AVX is basically to let a GPU do it. The hardware is getting ready for it; it is not quite there yet, nor will it be trivial, but LLVM could certainly lead the way for the compiler development. I just do not believe Intel would want to give up control over the software side and allow a university to tag its name onto their efforts. Anyhow, I can see lots of benefits if everyone would work together.
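
            As a concrete illustration of the MMX/SSE/AVX point, a plain loop like the hypothetical one below (the saxpy name is just illustrative) is typically auto-vectorized by gcc and clang at -O3, e.g. with -mavx2, without any source changes; extending that kind of automatic offload all the way to a GPU is the harder, still-open part.

                #include <cstddef>
                #include <cstdio>
                #include <vector>

                // gcc/clang at -O3 (e.g. with -mavx2) will typically turn this loop
                // into SSE/AVX vector instructions automatically.
                static void saxpy(float a, const float *x, float *y, std::size_t n) {
                    for (std::size_t i = 0; i < n; ++i)
                        y[i] = a * x[i] + y[i];
                }

                int main() {
                    std::vector<float> x(1024, 1.0f), y(1024, 2.0f);
                    saxpy(3.0f, x.data(), y.data(), x.size());
                    std::printf("y[0] = %f\n", y[0]);   // 5.0
                }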

            • #26
              Originally posted by sdack View Post
              It is definitely going to get more complex in the future. FPUs were once a separate chip on the board, and now GPUs have made it into the CPUs, too (aka APUs). It is a matter of time until compilers can produce instructions for a CPU and a GPU seamlessly, regardless of whether it is for 3D graphics, video compression, cryptography or just for sorting numbers. It is already being done for MMX, SSE and AVX instructions, and the logical conclusion of AVX is basically to let a GPU do it. The hardware is getting ready for it; it is not quite there yet, nor will it be trivial, but LLVM could certainly lead the way for the compiler development. I just do not believe Intel would want to give up control over the software side and allow a university to tag its name onto their efforts. Anyhow, I can see lots of benefits if everyone would work together.
              oh, certainly.. this is why LLVM is so frustrating. I think a lot of us would like to be able to use LLVM more easily.

              • #27
                Actually, standardized languages are already able to produce code targeting both CPU and GPU.

                See GPGPU in general, OpenCL, etc. That lowest-common-denominator rules apply is another matter altogether.
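
                A minimal OpenCL host + kernel sketch of that point (illustrative code only, standard OpenCL 1.2 API, error handling mostly omitted; the kernel name "scale" is made up): the same kernel source is compiled at run time for whatever device the runtime offers, so asking for CL_DEVICE_TYPE_CPU instead of CL_DEVICE_TYPE_GPU is the only change needed to run it on the CPU.

                    // Build with: g++ scale.cpp -lOpenCL
                    #define CL_TARGET_OPENCL_VERSION 120
                    #include <CL/cl.h>
                    #include <cstdio>
                    #include <vector>

                    // OpenCL C kernel: compiled at run time for the chosen device.
                    static const char *kSrc =
                        "__kernel void scale(__global float *v, float s) {\n"
                        "    v[get_global_id(0)] *= s;\n"
                        "}\n";

                    int main() {
                        cl_platform_id plat;
                        cl_device_id dev;
                        if (clGetPlatformIDs(1, &plat, nullptr) != CL_SUCCESS) return 1;
                        // CL_DEVICE_TYPE_GPU here; CL_DEVICE_TYPE_CPU works the same way
                        // with a CPU runtime installed.
                        if (clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr) != CL_SUCCESS) return 1;

                        cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);
                        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, nullptr);

                        std::vector<float> data(1024, 1.0f);
                        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                                    data.size() * sizeof(float), data.data(), nullptr);

                        cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
                        clBuildProgram(prog, 1, &dev, "", nullptr, nullptr);
                        cl_kernel k = clCreateKernel(prog, "scale", nullptr);

                        float s = 2.0f;
                        clSetKernelArg(k, 0, sizeof(buf), &buf);
                        clSetKernelArg(k, 1, sizeof(s), &s);

                        size_t global = data.size();
                        clEnqueueNDRangeKernel(q, k, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);
                        clEnqueueReadBuffer(q, buf, CL_TRUE, 0, data.size() * sizeof(float),
                                            data.data(), 0, nullptr, nullptr);

                        std::printf("data[0] = %f\n", data[0]);  // 2.0 if the kernel ran
                        return 0;
                    }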

                • #28
                  IMO LLVM suxx.

                  IMO LLVM just suxx. AMD devs spent over 2 years on their LLVM backend. And were there any user-visible results out of all this stuff? On R600g it does not beat the old in-place shader backend, and on RadeonSI it has been a well-known, long-standing source of troubles.

                  And if someone thinks LLVM issues are ironed out... okay, THIS IS A REALLY WRONG IDEA. You see, LLVM exposes blatant misbehavior on a fairly simple OpenCL kernel, flooding stderr to hell due to internal breakage and possibly being the reason behind a GPU crash. Needless to say, after looking at such a "success story" I'm not very enthusiastic about LLVM at all. Basically it has been broken for hell knows how long. In fact LLVM has never worked properly for AMD GPUs, especially when it comes to OpenCL. Each and every version exposes one or another kind of breakage. How can a piece of software be THAT bugged? O_O
                  Last edited by 0xBADCODE; 29 August 2014, 01:03 AM.

                  • #29
                    Originally posted by daniels View Post
                    Congratulations: you now maintain a fork of a huge compiler, as well as a complete graphics driver stack!
                    No one is talking about a fork. If you don't change anything, it's not a fork - it's a copy.

                    And even if you do maintain a set of local changes, as long as you keep trying to upstream them it's just a branch, which shouldn't be anything overwhelming.

                    *not that i'm saying that would be good for mesa. really, it would be good for the drivers that use llvm, which are radeonsi, vmware, and potentially more coming down the pipe (nouveau opencl?). the problem is that it's not particularly good for anyone else.

                    • #30
                      Originally posted by 0xBADCODE View Post
                      IMO LLVM just suxx. AMD devs spent over 2 years on their LLVM backend. And were there any user-visible results out of all this stuff? On R600g it does not beat the old in-place shader backend, and on RadeonSI it has been a well-known, long-standing source of troubles.

                      And if someone thinks LLVM issues are ironed out... okay, THIS IS A REALLY WRONG IDEA. You see, LLVM exposes blatant misbehavior on a fairly simple OpenCL kernel, flooding stderr to hell due to internal breakage and possibly being the reason behind a GPU crash. Needless to say, after looking at such a "success story" I'm not very enthusiastic about LLVM at all. Basically it has been broken for hell knows how long. In fact LLVM has never worked properly for AMD GPUs, especially when it comes to OpenCL. Each and every version exposes one or another kind of breakage. How can a piece of software be THAT bugged? O_O
                      ROFL, that is not an LLVM bug, you genius...

                      But keep on with your holy mission to spread FUD about LLVM.
