Trying Out RadeonSI NIR With Some OpenGL Linux Games On Mesa 18.1-dev

  • Trying Out RadeonSI NIR With Some OpenGL Linux Games On Mesa 18.1-dev

    Phoronix: Trying Out RadeonSI NIR With Some OpenGL Linux Games On Mesa 18.1-dev

    With the RadeonSI NIR back-end continuing to mature with more OpenGL coverage and now supporting GLSL 4.50, I decided to run some tests of Mesa 18.1-dev Git to see the impact when enabling NIR support...
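
For readers who want to reproduce this kind of test, a minimal sketch of how the NIR back-end is toggled per-process, assuming the `R600_DEBUG=nir` environment switch used by radeonsi in this Mesa era (the game binary name below is a hypothetical placeholder):

```shell
# Enable the radeonsi NIR path for a single run
# (switch name per Mesa 18.1-era radeonsi; binary name is illustrative).
R600_DEBUG=nir ./some_opengl_game

# Or export it for a whole benchmarking session:
export R600_DEBUG=nir
```
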


  • #2
    Guys, please explain why this NIR is needed.



    • #3
      Investing in NIR is wrong. Even MS, with Shader Model 6, invests in LLVM for both D3D11 and D3D12.



      • #4
        Originally posted by artivision View Post
        Investing in NIR is wrong. Even MS, with Shader Model 6, invests in LLVM for both D3D11 and D3D12.
        The NIR driver still uses LLVM.

        It's
        GLSL -> NIR -> LLVM IR -> native hardware binary.
        instead of
        GLSL -> TGSI -> LLVM IR -> native hardware binary.



        • #5
          Originally posted by smitty3268 View Post
          It's
          GLSL -> NIR -> LLVM IR -> native hardware binary.
          instead of
          GLSL -> TGSI -> LLVM IR -> native hardware binary.
          I think it might actually be...

          GLSL -> NIR -> LLVM IR -> native hardware binary
          instead of
          GLSL -> NIR -> TGSI -> LLVM IR -> native hardware binary

          IIRC the plan was for the GLSL compiler to use NIR internally no matter what the form of the final output.

          I'm not sure if GLSL IR (between GLSL and NIR) was ever removed; I think it was, but I'm not 100% sure.
          Last edited by bridgman; 30 January 2018, 09:52 PM.



          • #6
            Originally posted by bridgman View Post

            I think it might actually be...

            GLSL -> NIR -> LLVM IR -> native hardware binary
            instead of
            GLSL -> NIR -> TGSI -> LLVM IR -> native hardware binary

            The GLSL compiler uses NIR internally no matter what the form of the final output.
            Not exactly. I think that some code passes through step after step after step, while some code doesn't. For example:

            A portion does GLSL -> NIR -> LLVM_IR -> HW
            A portion does GLSL -> LLVM_IR -> HW at the same time.
            A portion does something more or less than that.

            Not all the code passes through all the steps. That is why multiple IRs have less overhead than many people think they do.

            Also, your claim that Crimson uses only one compiler for everything, and only AMD IL, is not really believable.
            Last edited by artivision; 30 January 2018, 10:04 PM.



            • #7
              AFAIK all the code for Mesa OpenGL goes through the full path. Multiple IRs have less overhead than many people think because converting from one to t'other is usually very cheap.
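
              The "converting from one IR to the other is cheap" point can be illustrated with a toy sketch. This is not Mesa code; the stage names just mirror the pipeline discussed in this thread, and each hop is a simple linear walk over the previous representation:

```python
# Toy illustration of a multi-IR shader pipeline (hypothetical, not Mesa code).
# Each translation is a cheap linear pass, so stacking IRs adds little overhead
# compared with the heavy optimisation work done inside each IR.

def glsl_to_nir(src: str) -> dict:
    # Pretend "parsing": one op per whitespace-separated token.
    return {"ir": "nir", "ops": src.split()}

def nir_to_llvm(nir: dict) -> dict:
    # A straight structural translation, no real work.
    return {"ir": "llvm", "ops": nir["ops"]}

def llvm_to_binary(llvm: dict) -> bytes:
    # Stand-in for codegen: one byte per op.
    return bytes(len(op) for op in llvm["ops"])

def compile_shader(src: str) -> bytes:
    return llvm_to_binary(nir_to_llvm(glsl_to_nir(src)))

print(compile_shader("mul add mov"))  # → b'\x03\x03\x03'
```
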

              Not sure how Crimson fits in; the topic was NIR, which is only used in Mesa GL. I don't think I mentioned AMDIL either, did I?



              • #8
                NIR is required for SPIR-V and OpenGL 4.6. It's not optional.



                • #9
                  Originally posted by marek View Post
                  NIR is required for SPIR-V and OpenGL 4.6. It's not optional.
                  That's definitely not true. It's just the easiest way forward and has the benefit of standardizing some code in common with Intel.

                  You could use a SPIR-V -> LLVM IR pass directly, for example, or write new code to translate SPIR-V into TGSI, even if that would require extending TGSI further.



                  • #10
                    Originally posted by smitty3268 View Post

                    That's definitely not true. It's just the easiest way forward and has the benefit of standardizing some code in common with Intel.

                    You could use a SPIR-V -> LLVM IR pass directly, for example, or write new code to translate SPIR-V into TGSI, even if that would require extending TGSI further.
                    So in other words, and as Marek said, NIR is not optional.

                    TGSI is a mess to work with; I don't think anybody in their right mind would attempt to write a SPIR-V -> TGSI pass, and there is already shareable SPIR-V -> NIR -> LLVM IR code, so it makes no sense to ignore that and do SPIR-V -> LLVM IR directly. There are other advantages to using NIR, such as cross-shader optimisations, which are not possible with LLVM. I also doubt LLVM would do a very good job of optimising raw shaders; it depends quite heavily on the GLSL IR / NIR optimisation passes to produce good output.

