Intel To Improve "Out Of The Box" Linux Gaming
Ian Romanick of Intel has laid out plans for improving the automatic configuration of game quality and performance settings under Linux, an effort to deliver a better out-of-the-box experience for drivers and graphics cards on the open-source operating system.
Last weekend at FOSDEM 2013 in Brussels, Ian Romanick expressed plans to enhance the "out of the box" / automatic configuration experience for games on Linux. He wrote in his talk's abstract, "Every game for desktop PCs has the ability for the user to tune quality and performance settings. However, for the out-of-the-box experience, these games also need to detect the hardware installed in the user's system to select the initial settings. On Windows and Mac, there are a number of system interfaces provided for this purpose, but on Linux it is surprisingly difficult. This talk will cover some current best practices used by shipping Linux games. It will also introduce some interfaces under development to improve the current state of affairs."
His ongoing work aims to provide better defaults when launching games on Linux for settings like shader quality, texture sizes, etc. He's been talking with several game developers -- including Valve -- and the current interfaces for determining core system attributes are a mess.
In his communication with various studios, the common ways game developers resort to for determining the system attributes relevant to them (such as which graphics card is in use) are using libpci (the Linux PCI library) and walking the vendor and device IDs of VGA adapters, checking the OpenGL strings (GL_RENDERER / GL_VENDOR / GL_VERSION), and relying upon a few OpenGL extensions that sometimes work (such as GL_ATI_meminfo).
These current methods largely fail: they are a mess to implement, they don't always work on multi-GPU systems (depending upon which GPU is active, integrated vs. discrete, GPU offloading, etc.), and the reported strings aren't always stable. Each vendor reports its information differently, an OpenGL context must be created before some of this information can even be obtained, and "probing PCI information using the external library just sucks."
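To make the fragility concrete, here is a minimal sketch (not Romanick's proposed interface, merely an illustration of the status quo described above) of the GL-string approach, assuming the game has already created an OpenGL context through its usual windowing path:

```c
/* Illustrative only: the kind of ad-hoc detection games use today. */
#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

/* GL_ATI_meminfo token; only meaningful when the extension is exposed. */
#ifndef GL_TEXTURE_FREE_MEMORY_ATI
#define GL_TEXTURE_FREE_MEMORY_ATI 0x87FC
#endif

void detect_gpu_for_defaults(void)
{
    /* Requires a current OpenGL context, which is itself part of the problem. */
    const GLubyte *vendor   = glGetString(GL_VENDOR);
    const GLubyte *renderer = glGetString(GL_RENDERER);
    const GLubyte *version  = glGetString(GL_VERSION);

    printf("Vendor:   %s\n", vendor);
    printf("Renderer: %s\n", renderer);
    printf("Version:  %s\n", version);

    /* Vendor-specific memory query: only works on drivers exposing
     * GL_ATI_meminfo, which is exactly the inconsistency being complained about. */
    const GLubyte *exts = glGetString(GL_EXTENSIONS);
    if (exts && strstr((const char *)exts, "GL_ATI_meminfo")) {
        GLint mem[4]; /* free KB, largest free block KB, aux free, largest aux block */
        glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, mem);
        printf("Free texture memory: ~%d MB\n", mem[0] / 1024);
    }
}
```

From the renderer string a game then typically pattern-matches against a hand-maintained table of known GPUs to pick its initial quality presets, which is precisely the sort of guesswork this work hopes to make unnecessary.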
Aside from game studios, this lack of standardized hardware / driver information reporting under Linux is something that's been a personal gripe of mine for years. Some of my complaints were expressed in the earlier Phoronix articles Benchmarking At XDC2012 and Verbose GPU, CPU Information Under Linux. Beyond identifying the active GPU and the OpenGL version in a standardized way, there is also no standardized means of finding out the GPU clock frequencies and other system vitals that are useful for performance monitoring, measurement, and debugging.
Within the Phoronix Test Suite and its Phodevi (Phoronix Device Interface) library, it has generally come down to implementing separate code-paths for each of the open and closed-source drivers -- or even multiple code-paths for a single driver when it breaks its debugfs interfaces or other reported data between kernel / driver releases. Sadly, there have been few improvements to date in terms of standardized reporting under Linux.
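As a rough illustration (not actual Phodevi code, which is written in PHP) of why per-driver code paths pile up, here is a sketch in C of reading the GPU clock state from driver-specific debugfs files; the paths shown have existed for the radeon and i915 drivers at various points but can change or disappear between kernel releases:

```c
/* Sketch of per-driver probing: every driver exposes clock data in a
 * different, unstable location, so fallbacks accumulate over time. */
#include <stdio.h>

static int read_first_line(const char *path, char *buf, size_t len)
{
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    int ok = (fgets(buf, (int)len, f) != NULL);
    fclose(f);
    return ok ? 0 : -1;
}

void report_gpu_clock(void)
{
    char line[256];

    /* Open-source Radeon KMS: power/clock info in a debugfs file. */
    if (read_first_line("/sys/kernel/debug/dri/0/radeon_pm_info",
                        line, sizeof(line)) == 0) {
        printf("radeon: %s", line);
        return;
    }
    /* Intel i915: a differently named debugfs file with a different format. */
    if (read_first_line("/sys/kernel/debug/dri/0/i915_cur_delayinfo",
                        line, sizeof(line)) == 0) {
        printf("i915: %s", line);
        return;
    }
    /* Proprietary drivers need yet other approaches, e.g. parsing tool
     * output rather than reading sysfs/debugfs at all. */
    printf("No known interface for this driver / kernel combination.\n");
}
```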
In terms of the information desired by game developers, Ian Romanick is hoping to have some solution devised within Mesa by "this time next year." He's still seeking feedback and suggestions from various game developers and other interested parties. This would basically be a yet-to-be-developed feature for Mesa 9.2 / Mesa 10.0. Hopefully whatever solution is settled upon will also be implemented by the proprietary AMD and NVIDIA drivers on Linux too...
Embedded below is Romanick's FOSDEM talk for those wanting more details. Also see the other Phoronix coverage of this annual European open-source event filled with lots of great technical talks.