I just hope the OSS drivers don't get stuck in a for(;;) loop trying to fix the infrastructure / architecture indefinitely. The extremely frequent hardware generation jumps have made it nearly impossible for the open source 3d stack to settle on an architecture and give driver developers time to work on the more difficult optimizations. The developers are constantly fighting a battle between four things demanding their time: supporting new hardware, optimizing, building the infrastructure of the next architecture, and porting old hardware to that new architecture. There aren't enough developers to cover all four adequately, so you have two options: either increase the number of man-hours dedicated to OSS driver development, or, lacking that, eliminate one of the tasks demanding their time. I propose the latter. Specifically, eliminate the time spent working on new architectures. Just draw a line in the sand at Gallium3d and stop rewriting everything all the time. Commit to a stable API and then optimize, so people can actually get some good use out of their hardware without resorting to fglrx or Windows.
I don't think Gallium3d will be the be-all and end-all of 3d architectures. Either its internal APIs will eventually change in such major ways that existing drivers will need to be practically rewritten, or some different 3d architecture by another company will crop up and take its place. This is a practical necessity: new GPUs bring new demands on the software stack, and those demands simply can't always be worked into the existing architecture without breaking existing code.
That's understandable, but the arrival of new, more demanding hardware shouldn't spell the end of optimization potential for old cards on the old architecture.
Personally I think r300g is an exception to this rule: moving from classic Mesa to Gallium is a completely different kind of architecture shift, and one that makes a lot of sense. But when Gallium starts bumping its internal API to v1.0, v2.0, v3.0 to support new hardware, optimization of r300g against Gallium 0.4 should continue, unless upgrading it to work with later Gallium versions turns out to be appreciably easy (which is definitely not something I think can be taken for granted).
Either an increase in salaried company manpower or a sea of new, complete, public documentation would ease concerns about there being enough time for all four of those tasks to be done well, but I'm pretty sure that both the manpower and the documentation are being stonewalled indefinitely by Linux's relative insignificance.
I just think it would do a lot of good to sit down and optimize at least one of the drivers until it is competitive with Catalyst and renders correctly 99% of the time. Those two goals go hand in hand; both require careful attention to the OpenGL implementation, with an eye towards desktop / consumer use, 3d gaming, and real-time 3d visualization apps. But these more difficult optimizations will never be invested in if the developers feel that the platform they're writing their code on is about to evaporate, which I think is why OSS graphics drivers have remained so poor performance-wise for many years.