NIR Lands In Mesa, New IR Started By High Schooler Intern At Intel


  • #11
    Originally posted by Prescience500 View Post
    Good to know that all the drivers will be able to benefit. I can't wait to see performance comparison benchmarks. I suppose time will tell if the unified graphics driver will allow Catalyst to benefit or if it'll just help mesa reach parity faster.
    I think you're misunderstanding something here. NIR lives in Mesa, the open-source userspace driver. Only the kernel driver will be shared between Mesa and Catalyst, and that has nothing to do with NIR.



    • #12
      Originally posted by cwabbott View Post
      Right now, the NIR path is actually a lot worse than the GLSL IR one since we haven't implemented SIMD16 support, as well as a bunch of other stupid things. That's why it's hidden behind an environment variable. I don't think it'll take long to catch up though, and with SSA it's a lot easier to implement more powerful optimizations that run faster. Already, we have copy propagation and dead code elimination passes that are a lot more advanced than anything GLSL IR could do, and that took me less than a day to implement. The other major thing is that with the more standard design it's a lot more straightforward to implement techniques you'll find in most recent compiler papers.
      Keep up the good work, mate. One question: does Intel intend to replace GLSL IR with NIR completely, so that the GLSL compiler compiles directly to NIR? I am a little nervous that one shader program for RadeonSI goes through:
      GLSL -> GLSL IR (+optimizations)-> NIR (+optimizations) -> TGSI -> LLVM (+optimizations) -> ISA.
      AMD invested heavily in LLVM but I guess this could end up like this:
      GLSL -> NIR (+optimizations) -> LLVM(+optimizations) -> ISA.
      For Intel and nouveau:
      GLSL -> NIR(+optimizations) -> ISA.

      Your comments appreciated.
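      To illustrate the SSA point quoted above, here is a toy sketch (an invented `(dest, op, args)` instruction format, nothing to do with Mesa's real passes): because every SSA value is defined exactly once, copy propagation reduces to a dictionary lookup and dead-code elimination to one backward sweep over a liveness set.

```python
# Toy SSA illustration (NOT Mesa code): each value has exactly one
# definition, so both passes below are single linear walks.

def copy_propagate(prog):
    """prog: list of (dest, op, args). Rewrites uses of 'mov' copies."""
    resolve = {}
    for dest, op, args in prog:
        if op == "mov":                          # a pure copy
            resolve[dest] = resolve.get(args[0], args[0])
    return [(dest, op, tuple(resolve.get(a, a) for a in args))
            for dest, op, args in prog]

def dead_code_eliminate(prog, live_outputs):
    """Keep only instructions whose result feeds a live output."""
    live = set(live_outputs)
    kept = []
    for dest, op, args in reversed(prog):        # one backward sweep suffices in SSA
        if dest in live:
            kept.append((dest, op, args))
            live.update(args)
    return list(reversed(kept))

prog = [
    ("a", "load", ("in0",)),
    ("b", "mov",  ("a",)),        # copy: propagated away, then dead
    ("c", "mul",  ("b", "b")),
    ("d", "add",  ("a", "a")),    # dead: 'd' never reaches an output
]
optimized = dead_code_eliminate(copy_propagate(prog), live_outputs=["c"])
```

      On this input the two passes leave only the load of `a` and the multiply producing `c`; without SSA's single-definition guarantee, both passes would need real dataflow analysis instead of plain lookups.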



      • #13
        Originally posted by Drago View Post
        Keep up the good work, mate. One question: does Intel intend to replace GLSL IR with NIR completely, so that the GLSL compiler compiles directly to NIR? I am a little nervous that one shader program for RadeonSI goes through:
        GLSL -> GLSL IR (+optimizations)-> NIR (+optimizations) -> TGSI -> LLVM (+optimizations) -> ISA.
        AMD invested heavily in LLVM but I guess this could end up like this:
        GLSL -> NIR (+optimizations) -> LLVM(+optimizations) -> ISA.
        For Intel and nouveau:
        GLSL -> NIR(+optimizations) -> ISA.

        Your comments appreciated.
        Initially, I thought we might do that eventually, but there are so many higher-level things that are done in GLSL IR, like builtin function handling, constant folding required by the frontend, switch statement lowering, etc., that would be quite a pain to re-do in NIR. So I think the plan is to do a lot more lowering in NIR, as well as linking and optimization, but to leave GLSL IR in place as a higher-level language. That being said, that's quite a ways out and very theoretical -- one step at a time. Don't be too worried about the proliferation of IRs handling the code, though -- the translation part is usually the least expensive when it comes to compiler internals. Yes, it's a mess, but it's something we have to deal with.
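        For readers unfamiliar with the "lowering" mentioned above, here is a toy sketch of switch-statement lowering on an invented mini-AST (not GLSL IR's actual representation): a switch is rewritten into the if/else chain that simpler backends can consume.

```python
# Toy "switch lowering" on an invented mini-AST (NOT GLSL IR):
# ("switch", var, [(case_value, body), ...], default_body) becomes a
# chain of ("if", ("eq", var, value), then_body, else_branch) nodes.

def lower_switch(node):
    if node[0] != "switch":
        return node                       # nothing to lower
    _, var, cases, default = node
    lowered = default
    for value, body in reversed(cases):   # build the chain inside-out
        lowered = ("if", ("eq", var, value), body, lowered)
    return lowered

ast = ("switch", "x", [(0, "A"), (1, "B")], "D")
lowered = lower_switch(ast)
```

        The result is `if x == 0: A elif x == 1: B else: D` in tree form -- the kind of transformation that would have to be reimplemented if NIR replaced GLSL IR outright.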



        • #14
          Originally posted by Drago View Post
          Keep up the good work, mate. One question: does Intel intend to replace GLSL IR with NIR completely, so that the GLSL compiler compiles directly to NIR? I am a little nervous that one shader program for RadeonSI goes through:
          GLSL -> GLSL IR (+optimizations)-> NIR (+optimizations) -> TGSI -> LLVM (+optimizations) -> ISA.
          AMD invested heavily in LLVM but I guess this could end up like this:
          GLSL -> NIR (+optimizations) -> LLVM(+optimizations) -> ISA.
          For Intel and nouveau:
          GLSL -> NIR(+optimizations) -> ISA.

          Your comments appreciated.
          Can anybody tell me a legit reason the AMD FOSS drivers have so many levels of IR while the Intel driver has, like, 2-3? Also, why use Mesa IR, TGSI, AND LLVM? Lastly, don't so many translations introduce the probability of more bugs (not to mention the time it takes to translate so many times, including optimizations)?

          I don't see why, in the future, it doesn't just go NIR -> TGSI (or LLVM) -> ISA. Or even skip the TGSI or LLVM parts, if GLSL is going to be force-compiled into NIR by Mesa (without giving the option to compile directly into TGSI or LLVM IRs).



          • #15
            Originally posted by Daktyl198 View Post
            Can anybody tell me a legit reason the AMD FOSS drivers have so many levels of IR while the Intel driver has like, 2-3?
            Backwards compatibility, so they don't have to rewrite the driver each time a new one comes out. They don't have the manpower to waste dev time on stuff like that.

            Also, Why use Mesa-IR, TGSI, AND llvm?
            Nobody is using Mesa IR anymore (except the old classic drivers which no one touches). TGSI = Gallium, and LLVM was something AMD wanted to invest in to see if it could give them good results while not spending their development time writing their own compiler.

            Lastly, don't so many translations introduce the probability of more bugs (not to mention the time it takes to translate so many times, including optimizations)?
            Translations take hardly any time at all. All of them together probably take about the same amount of time as a single optimization pass.

            As far as bugs, sure that is a concern - but then, scrapping that code and rewriting it to target a new IR is probably far, far, far more likely to introduce new bugs. New code always introduces problems, at least the old code is pretty well tested.
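            To put the cost claim above in concrete terms -- a hypothetical sketch, not any real driver's code -- an IR-to-IR translation is typically a single O(n) walk over the instruction list with a per-opcode mapping table, whereas an optimization pass may have to iterate to a fixed point.

```python
# Hypothetical toy example (not real Mesa/TGSI code): translating one
# made-up IR into another is one linear pass over the instructions.
OPCODE_MAP = {"fadd": "ADD", "fmul": "MUL", "mov": "MOV"}

def translate(src):
    """src: list of (dest, op, args) tuples in an invented source IR."""
    return [(dest, OPCODE_MAP[op], args) for dest, op, args in src]

lowered = translate([
    ("t0", "fadd", ("a", "b")),
    ("t1", "fmul", ("t0", "t0")),
])
```

            Each instruction is visited once and mapped directly, which is why a chain of such translations costs roughly as much as one optimization pass over the same code.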



            • #16
              Radeonsi won't use NIR. We generally don't want to do any optimizations in Mesa, because the LLVM backend can do them better *and* has to do them all anyway because of how ugly the generated LLVM IR is. My current plan is to disable all Mesa shader optimizations for radeonsi and see what happens. Ideally, we should have this:
              GLSL (no optimizations) -> TGSI (no optimizations) -> LLVM (this is where optimizations start)

              The only thing NIR would do for us is slow down the compilation. Our LLVM backend is good enough that it doesn't need any of it.



              • #17
                Now I am confused.
                My understanding of how the drivers work is like this:

                OpenGL (functions) -> TGSI -> ISA
                GLSL -> GLSL IR -> ISA

                As radeonsi uses LLVM to compile GLSL IR to ISA, it should be like this:
                GLSL -> GLSL IR -> LLVM -> ISA

                Am I missing something?



                • #18
                  Originally posted by marek View Post
                  Radeonsi won't use NIR. We generally don't want to do any optimizations in Mesa, because the LLVM backend can do them better *and* has to do them all anyway because of how ugly the generated LLVM IR is. My current plan is to disable all Mesa shader optimizations for radeonsi and see what happens. Ideally, we should have this:
                  GLSL (no optimizations) -> TGSI (no optimizations) -> LLVM (this is where optimizations start)

                  The only thing NIR would do for us is slow down the compilation. Our LLVM backend is good enough that it doesn't need any of it.
                  Marek, why is TGSI needed then? Would you say that TGSI and LLVM are interchangeable, and that if every other user were willing to spend the time and money to convert from TGSI to LLVM, and suffer the pain of integrating into the LLVM tree, then TGSI could be left to die? Currently the official users of TGSI are radeon, nouveau, freedreno, vc4, and vmware. Any idea why Intel resists LLVM and introduces another IR where they can do optimizations?



                  • #19
                    Right now radeonsi works like this:

                    GLSL (IR) -> TGSI -> LLVM -> native

                    TGSI is always there.



                    • #20
                      Originally posted by marek View Post
                      Right now radeonsi works like this:

                      GLSL (IR) -> TGSI -> LLVM -> native

                      TGSI is always there.
                      Yes, yes, I know, but what is the need for two intermediate formats? Is TGSI just needed for legacy/other GPU vendors, or does it have some other function that can't be covered by LLVM?
                      In other words, could it be:
                      GLSL (IR) -> LLVM -> native(ISA)

