RadeonSI May Eventually Switch To NIR Completely

  • #21
    Originally posted by log0 View Post
    Why are they not translating SPIRV to LLVM IR directly? Weird...
    It's already been explained in this topic once, and in about every other IR-related topic.



    • #22
      Originally posted by log0 View Post
      Why are they not translating SPIRV to LLVM IR directly? Weird...
      Optimization pass. Better binary performance. Besides, the driver devs have claimed for years that translating between IRs is very close to free; it doesn't take much time at all. So it's better to take a very slight hit to compile times in order to get better runtime performance.

      At least that is more or less the gist of it.
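
      Very roughly, the trade-off can be pictured with a toy sketch (a made-up expression IR, nothing from Mesa's or LLVM's actual data structures): one extra pass over the IR costs a little compile time, but leaves less work in the code that actually runs.

      Code:
#include <stdio.h>

enum op { OP_CONST, OP_ADD, OP_MUL };

struct node {
    enum op op;
    int value;    /* meaningful only when op == OP_CONST   */
    int lhs, rhs; /* operand indices into the array, or -1 */
};

/* One extra compile-time pass: fold ADD/MUL of two constants into a
 * single constant, so the emitted code has less to do at runtime. */
static void fold_constants(struct node *ir, int count)
{
    for (int i = 0; i < count; i++) {
        struct node *n = &ir[i];
        if (n->op == OP_CONST || n->lhs < 0 || n->rhs < 0)
            continue;
        struct node *a = &ir[n->lhs], *b = &ir[n->rhs];
        if (a->op == OP_CONST && b->op == OP_CONST) {
            n->value = (n->op == OP_ADD) ? a->value + b->value
                                         : a->value * b->value;
            n->op = OP_CONST;
            n->lhs = n->rhs = -1;
        }
    }
}

int main(void)
{
    /* (2 + 3) * (2 + 3), written as four IR nodes */
    struct node ir[] = {
        { OP_CONST, 2, -1, -1 },
        { OP_CONST, 3, -1, -1 },
        { OP_ADD,   0,  0,  1 },
        { OP_MUL,   0,  2,  2 },
    };
    fold_constants(ir, 4);
    printf("root node folded to constant %d\n", ir[3].value); /* prints 25 */
    return 0;
}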



      • #23
        Originally posted by duby229 View Post

        Optimization pass. Better binary performance. Besides, the driver devs have claimed for years that translating between IRs is very close to free; it doesn't take much time at all. So it's better to take a very slight hit to compile times in order to get better runtime performance.

        At least that is more or less the gist of it.
        So LLVM's own optimization passes suck so much that it is better to let NIR optimize the code?



        • #24
          Originally posted by geearf View Post

          It's already been explained in this topic once, and in about every other IR-related topic.
          I don't see any explanations, only claims that it is somehow better than using LLVM IR directly.



          • #25
            Originally posted by log0 View Post

            So LLVM's own optimization passes suck so much that it is better to let NIR optimize the code?
            Not that it sucks, it's just a different format. NIR is structured differently, so it allows types of optimization that LLVM isn't suited to.



            • #26
              Originally posted by log0 View Post

              I don't see any explanations, only claims that it is somehow better than using LLVM IR directly.
              If you want, you can create an OpenGL or D3D state tracker that directly targets the hardware. You may gain 3-5% because there is no IR, but you cannot reduce the work that each layer must do; you would just do it all at once. And doing that has a big downside: there is no more unification, so if you fix something, it doesn't get fixed for every API or driver implementation. Mesa is faster than Windows in native apps because all APIs share a unified lower part and all drivers a unified upper part.
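
              A minimal sketch of that unification idea (all names here are invented for illustration, not Mesa's real code structure): several API front ends lower into one shared IR and one shared lower half, so a fix or a new optimization in the shared part helps every API at once.

              Code:
#include <stdio.h>

/* A common IR that every API front end lowers into. */
struct common_ir {
    const char *source_api;
    int instruction_count;
};

/* Shared lower half: one optimizer/backend for everything, so a fix or a
 * new optimization here benefits every API and every driver at once. */
static void optimize_and_emit(struct common_ir *ir)
{
    printf("optimizing %d instructions coming from %s\n",
           ir->instruction_count, ir->source_api);
}

/* Per-API upper halves: only the translation into the common IR differs. */
static void compile_from_glsl(void)
{
    struct common_ir ir = { "OpenGL/GLSL", 120 };
    optimize_and_emit(&ir);
}

static void compile_from_spirv(void)
{
    struct common_ir ir = { "Vulkan/SPIR-V", 95 };
    optimize_and_emit(&ir);
}

int main(void)
{
    compile_from_glsl();
    compile_from_spirv();
    return 0;
}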



              • #27
                Originally posted by log0 View Post

                So LLVM's own optimization passes suck so much that it is better to let NIR optimize the code?
                The explanation that has been given repeatedly is that LLVM IR is very low-level. With NIR, they have more information about the program that's being run, and can optimize things that wouldn't be possible in LLVM for that reason. For example, they can optimize across different shader stages, which is something LLVM IR has no concept of.
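
                For a rough picture of what a cross-stage optimization means, here is a toy sketch (hypothetical structures, not the actual NIR API): because both shader stages are visible in the same IR, a vertex-shader output that the fragment shader never reads can be dropped, along with the math that fed it. A per-stage, lower-level IR never sees both stages together, so it cannot make that call.

                Code:
#include <stdbool.h>
#include <stdio.h>

#define MAX_VARYINGS 8

/* A stripped-down stand-in for a shader stage's interface: which varying
 * slots the vertex shader writes and which the fragment shader reads. */
struct stage_io {
    bool writes[MAX_VARYINGS];
    bool reads[MAX_VARYINGS];
};

/* Cross-stage pass: a vertex-shader output the fragment shader never reads
 * is dead, so the store (and whatever computed it) can be removed. */
static void eliminate_dead_varyings(struct stage_io *vs, const struct stage_io *fs)
{
    for (int slot = 0; slot < MAX_VARYINGS; slot++) {
        if (vs->writes[slot] && !fs->reads[slot]) {
            vs->writes[slot] = false;
            printf("dropping unused varying slot %d from the vertex shader\n", slot);
        }
    }
}

int main(void)
{
    struct stage_io vs = { .writes = { [0] = true, [1] = true, [2] = true } };
    struct stage_io fs = { .reads  = { [0] = true, [2] = true } };

    eliminate_dead_varyings(&vs, &fs); /* drops slot 1 */
    return 0;
}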



                • #28
                  Originally posted by smitty3268 View Post

                  The explanation that has been given repeatedly is that LLVM IR is very low-level. With NIR, they have more information about the program that's being run, and can optimize things that wouldn't be possible in LLVM for that reason. For example, they can optimize across different shader stages, which is something LLVM IR has no concept of.
                  Does the AMD ISA have a concept of shader stages? If not, then such an optimization is not needed. You can just optimize the code as a whole, can't you?
                  I believe there is no other reason than: "Hey, we already have SPIRV to NIR code written, let's use it."
                  Last edited by difron; 27 August 2017, 06:15 AM.



                  • #29
                    Originally posted by difron View Post

                    Does the AMD ISA have a concept of shader stages? If not, then such an optimization is not needed. You can just optimize the code as a whole, can't you?
                    I believe there is no other reason than: "Hey, we already have SPIRV to NIR code written, let's use it."
                    No doubt the fact that the code is already written is part of the reason they want to use it; no sense in starting something new when it already exists. AMD GPUs are pipelined, so is that what you mean by "stages"? The reason NIR is able to do optimizations that LLVM can't is that it preserves more information about the application being run. It doesn't really have anything to do with the GPU hardware, just with the information and the format of that information.



                    • #30
                      Originally posted by duby229 View Post

                      No doubt the fact that the code is already written is part of the reason they want to use it; no sense in starting something new when it already exists. AMD GPUs are pipelined, so is that what you mean by "stages"? The reason NIR is able to do optimizations that LLVM can't is that it preserves more information about the application being run. It doesn't really have anything to do with the GPU hardware, just with the information and the format of that information.
                      The question is whether we need to preserve this information all the way down to the hardware level. If not, then the optimization can be done without it.

                      Edit: Well, to sum up my concern: in the real radeonsi driver, are there any optimizations on the TGSI form that would benefit from switching to NIR? Or are these optimization benefits purely hypothetical?
                      Last edited by difron; 27 August 2017, 07:27 AM.
