I think the worst option would be to have multiple IRs; that's just asking for one IR to be well developed (the graphics one, of course) while the other lags behind. I'm not too sure which single IR would be best. I like LLVM as a compiler, and I like the idea of one all-powerful solution used throughout the system, but the drivers have all been written for TGSI already, and as people have mentioned, LLVM IR may not be able to describe graphics operations efficiently.
I think it may just be best to extend TGSI. It was designed with the intent of making it easy for GPU vendors to port their drivers to interface with it, though that's obviously a pipe dream. The GLSL IR could be nice since it's designed to represent GLSL code well, but I'm not sure it represents the actual hardware capabilities well, or other functionality of the devices like compute or video decode.
There is a lot this IR will need to be used for: graphics, obviously, plus compute and video decode (which is basically using compute for the decode and graphics for the display).
Also, probably the best way to do remote desktops and accelerated virtual machine desktops is a Gallium-based driver model that passes the IR across directly.
My personal favorite option is to have everything compile to a TGSI-like IR designed with the possibility of being passed through a network layer in mind, basically an easily streamable IR capable of describing everything graphics-wise. The driver backends then convert that into native GPU code, and anything that can't be done on the GPU gets converted to LLVM IR to be run on the CPU.
I don't know if that last part is fully doable, of course, but it's already done in a small way with the i915g driver: since the hardware has no vertex shader unit, there is a break-out point in Gallium that lets vertex shaders be handled by a software path instead.
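Just to make the idea concrete, here is a rough sketch of what that GPU-or-CPU split could look like. Everything in it is made up for illustration (the struct names, the capability flags, the pick_target() helper); it is not the real Gallium or TGSI API, just the shape of the decision the backend would make, similar to how the draw module steps in for i915g vertex shaders.

    /* Hypothetical sketch only: none of these types or functions are the real
     * Gallium/TGSI API, they just illustrate the idea of a flat, streamable
     * shader IR plus a GPU-or-CPU dispatch decision in the backend. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    enum shader_stage { STAGE_VERTEX, STAGE_FRAGMENT, STAGE_COMPUTE };

    /* A flat token stream with no pointers in it, so it is trivial to copy
     * over a socket for the remote-desktop / VM case. */
    struct ir_shader {
        enum shader_stage stage;
        size_t            num_tokens;
        const uint32_t   *tokens;     /* opcode/operand words */
    };

    /* What the hardware backend says it can actually run. */
    struct backend_caps {
        bool has_hw_vertex_shaders;
        bool has_hw_fragment_shaders;
    };

    enum compile_target { TARGET_GPU, TARGET_CPU_FALLBACK };

    /* Route a shader: native GPU code if the hardware can do it, otherwise a
     * CPU path (which could lower the same IR to LLVM), the way the draw
     * module handles vertex shaders for i915g today. */
    static enum compile_target
    pick_target(const struct backend_caps *caps, const struct ir_shader *sh)
    {
        if (sh->stage == STAGE_VERTEX && !caps->has_hw_vertex_shaders)
            return TARGET_CPU_FALLBACK;
        if (sh->stage == STAGE_FRAGMENT && !caps->has_hw_fragment_shaders)
            return TARGET_CPU_FALLBACK;
        return TARGET_GPU;
    }

    int main(void)
    {
        static const uint32_t tokens[] = { 0x1, 0x2, 0x3 };  /* dummy IR words */
        struct ir_shader vs = { STAGE_VERTEX, 3, tokens };
        struct backend_caps i915g_like = { false, true };    /* no hardware VS */

        printf("vertex shader goes to: %s\n",
               pick_target(&i915g_like, &vs) == TARGET_GPU ? "GPU" : "CPU fallback");
        return 0;
    }

The nice side effect is that the same flat token stream the backend consumes is also the thing you would ship over the wire for the remote desktop and VM cases.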