NIR Lands In Mesa, New IR Started By High Schooler Intern At Intel
Originally posted by cwabbott:
Right now, the NIR path is actually a lot worse than the GLSL IR one, since we haven't implemented SIMD16 support, along with a bunch of other things. That's why it's hidden behind an environment variable. I don't think it'll take long to catch up, though, and with SSA it's a lot easier to implement more powerful optimizations that also run faster. We already have copy propagation and dead code elimination passes that are a lot more advanced than anything GLSL IR could do, and they took me less than a day to implement. The other major thing is that, with the more standard design, it's a lot more straightforward to implement the techniques you'll find in most recent compiler papers.
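cwabbott's point above — that SSA makes passes like copy propagation and dead code elimination quick to write — can be illustrated with a toy sketch. This is plain Python, not Mesa/NIR code, and the instruction representation is invented for illustration: because every SSA value is defined exactly once, copy propagation reduces to a single substitution map, and DCE to a single backward walk tracking which values are used.

```python
# Toy SSA sketch (NOT Mesa/NIR code): each instruction is
# (dest, op, operands); dest is None for side-effecting ops.
# In SSA every value has exactly one definition, so:
#  - copy propagation: one forward pass building a substitution map
#  - dead code elimination: one backward pass tracking used values

def copy_propagate(instructions):
    subst = {}
    out = []
    for dest, op, operands in instructions:
        operands = [subst.get(v, v) for v in operands]
        if op == "copy":                      # b = copy a  =>  rewrite b -> a everywhere
            subst[dest] = operands[0]
        else:
            out.append((dest, op, operands))
    return out

def dce(instructions):
    used, kept = set(), []
    for dest, op, operands in reversed(instructions):  # uses seen before defs
        if dest is None or dest in used:      # side effects are always kept
            kept.append((dest, op, operands))
            used.update(operands)
    kept.reverse()
    return kept

prog = [
    ("a", "load",  ["x"]),
    ("b", "copy",  ["a"]),        # removed by copy propagation
    ("c", "mul",   ["b", "b"]),   # becomes: mul a, a
    ("d", "add",   ["a", "a"]),   # dead: d is never used
    (None, "store", ["c"]),       # side effect: kept
]
optimized = dce(copy_propagate(prog))
```

In a non-SSA IR, the same passes need a reaching-definitions analysis, because a name may be redefined between its definition and a use; with single definitions, both passes stay linear and local, which is why they took so little time to write.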
-
Originally posted by Drago:
Keep up the good work, mate. One question: does Intel intend to replace GLSL IR with NIR completely, so that the GLSL compiler compiles directly to NIR? I am a little nervous that a shader program for RadeonSI goes through:
GLSL -> GLSL IR (+optimizations) -> NIR (+optimizations) -> TGSI -> LLVM (+optimizations) -> ISA
AMD invested heavily in LLVM, but I guess this could end up as:
GLSL -> NIR (+optimizations) -> LLVM (+optimizations) -> ISA
For Intel and nouveau:
GLSL -> NIR (+optimizations) -> ISA
Your comments appreciated.
-
Originally posted by Drago:
One question: does Intel intend to replace GLSL IR with NIR completely, so that the GLSL compiler compiles directly to NIR? [...]
I don't see why, in the future, it doesn't just go NIR -> TGSI (or LLVM) -> ISA. Or even skip the TGSI or LLVM step entirely, if Mesa is going to force-compile GLSL into NIR (without giving the option to compile directly to the TGSI or LLVM IRs).
-
Originally posted by Daktyl198:
Can anybody tell me a legit reason the AMD FOSS drivers have so many levels of IR while the Intel driver has, like, 2-3?
Also, why use Mesa IR, TGSI, *and* LLVM?
Lastly, doesn't that many translations raise the probability of bugs (not to mention the time it takes to translate so many times, including optimizations)?

As far as bugs go, sure, that's a concern. But scrapping that code and rewriting it to target a new IR is probably far, far more likely to introduce new bugs. New code always introduces problems; at least the old code is pretty well tested.
-
Radeonsi won't use NIR. We generally don't want to do any optimizations in Mesa, because the LLVM backend can do them better *and* has to do them all anyway, given how ugly the generated LLVM IR is. My current plan is to disable all Mesa shader optimizations for radeonsi and see what happens. Ideally, we should have this:
GLSL (no optimizations) -> TGSI (no optimizations) -> LLVM (this is where optimizations start)
The only thing NIR would do for us is slow down compilation. Our LLVM backend is good enough that it doesn't need any of it.
-
Now i am confused.
My understanding how the drivers work are like this:
OpenGL(functions) -> TSGI -> ISA
GLSL -> GLSL IR -> ISA
As the radeonsi uses LLVM for compiling GLSL IR to ISA so it should be like this:
GLSL -> GLSL IR -> LLVM -> ISA
Am I missing something?
Comment
-
Originally posted by marek:
Right now radeonsi works like this:
GLSL (IR) -> TGSI -> LLVM -> native
TGSI is always there.

In other words, could it be:
GLSL (IR) -> LLVM -> native (ISA)?