RadeonSI May Eventually Switch To NIR Completely
Originally posted by log0
Why are they not translating SPIRV to LLVM IR directly? Weird...
Originally posted by duby229
Optimization pass. Better binary performance. Besides, the driver devs have claimed for years that translating between IRs is very close to free; it doesn't take much time at all. So it's better to take a very slight hit to compile times in order to get better runtime.
At least that is more or less the gist of it.
Originally posted by log0
I don't see any explanations, only claims that it is somehow better than using LLVM IR directly.
Originally posted by log0
So LLVM's own optimization passes suck so much that it is better to let NIR optimize the code?
Originally posted by smitty3268
The explanation that has been given repeatedly is that LLVM IR is very low-level. With NIR, they have more information about the program that's being run, and can optimize things that wouldn't be possible in LLVM for that reason. For example, they can optimize across different shader stages, which is something LLVM IR has no concept of.
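To make the cross-stage point concrete, here is a rough C analogy (not Mesa, NIR, or real shader code; the struct and function names are made up for illustration): treat the two functions as two shader stages. A compiler that sees each "stage" in isolation must keep every output it writes, while one that can see both at once notices that fog_coord is never read and can delete the work that produces it.

```c
/* A rough C analogy for cross-stage shader optimization (not real Mesa/NIR
 * code). vertex_stage() and fragment_stage() stand in for two shader stages
 * that a per-stage compiler sees in isolation, while a compiler with a
 * whole-pipeline view sees both at once. */

#include <stdio.h>

struct varyings {
    float color;     /* read by the "fragment stage" below */
    float fog_coord; /* written but never read below -> dead across stages */
};

/* "Vertex stage": fills in all declared outputs. */
static void vertex_stage(struct varyings *out, float in_pos)
{
    out->color = in_pos * 0.5f;
    /* Seen in isolation, this store must be kept, because the compiler
     * cannot know whether the next stage reads fog_coord. With a view of
     * both stages, the store (and anything feeding it) can be removed. */
    out->fog_coord = in_pos * in_pos * 0.25f;
}

/* "Fragment stage": only ever reads color. */
static float fragment_stage(const struct varyings *in)
{
    return in->color + 1.0f;
}

int main(void)
{
    struct varyings v;
    vertex_stage(&v, 2.0f);
    printf("%f\n", fragment_stage(&v));
    return 0;
}
```

That whole-pipeline view is, roughly, what optimizing at the NIR level is said to enable before the shaders are handed to LLVM.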
Originally posted by difron
Does the AMD ISA have a concept of shader stages? If not, then such an optimization is not needed. You can just optimize the code as a whole, can't you?
I believe there is no other reason than: "Hey, we already have the SPIR-V to NIR code written, let's use it."
Last edited by difron; 27 August 2017, 06:15 AM.
Originally posted by duby229
No doubt the fact that the code is already written is part of the reason they want to use it. No sense in starting something new when it already exists. AMD GPUs are pipelined, so is that what you mean by "stages"? The reason NIR is able to do optimizations that LLVM can't is that it preserves more information about the application that is running. It doesn't really have anything to do with the GPU hardware, just the information and the format of that information.
Originally posted by difron
Edit: Well, to sum up my concern: in the real radeonsi driver, are there any optimizations done on the TGSI form that would benefit from switching to NIR? Or are these optimization benefits purely hypothetical?
Last edited by difron; 27 August 2017, 07:27 AM.