I don't know how you do functional tests with partly unstable drivers. Take nouveau, for example: when I try the dx10+ cards they work fine. The dx9 cards have no problem with basic KDE effects, but as soon as I try a game (with kernel .39) the driver crashes. So what does your functional check look like in that case? Test something and see if there is a kernel oops? You will certainly find something similar with other drivers too; on some days r600g git was extremely unstable as well. The best way is to know up front that the driver works, so whitelisting is not that bad.

Of course, using an environment variable to disable the OpenGL checks is a bit complicated, at least if you don't know about it; usually no GUI tells you to use this variable to override those settings. KDE 4.4.5 at least had a menu option that disabled those checks. With bad drivers (ones that did not fully crash the system) you could at least press Alt+Shift+F12 to get from a black screen back to the 2D desktop. And if your system crashes right after login, you have to find the config file to disable the setting, which is not simple either. In a perfect (driver) world everything would just work and be as stable as advertised, but in reality that is not always the case.
So what does your functional check look like in that case? Test something and see if there is a kernel oops?
That was why I said "discussed with the driver devs to determine what should work". The challenge was dealing with driver code that was still under development, and that's why working out the functional tests with the driver devs was recommended.
Originally Posted by Kano
You will certainly find something similar with other drivers too; on some days r600g git was extremely unstable as well.
Not sure I understand. We're not talking about something that will make apps immune to day-to-day regressions, just something that will determine whether the app should try to use GL 2.x on that *released* version.
Originally Posted by Kano
In a perfect (driver) world everything would just work and be as stable as advertised, but in reality that is not always the case.
Especially when the functionality being used was still under development. The primary recommendation was not to automatically use the code but instead to make the GL 2.x code paths an easily selectable option.
It depends. Looking at Debian, I would not say that sid is the best branch. It is OK when you are experienced and can handle the things that inevitably happen, but it is a pain to support lots of other people who run into problems. I did that, and in the end it was no fun. Debian may also be a bit outdated due to the long freeze. I don't use Arch (or Gentoo), but I prefer the approach of selected backports. I would have no problem with a KDE 4.x backport repo; it just has to install without problems. Having a stable system to base your updates on is basically a good thing. Of course not every package will be the latest one possible, but is that really needed? I had to patch a handful of lines to compile nouveau with X server 1.7, and I most likely get the same speed as with 1.10, so what did you gain?
It isn't necessary for a rolling release distro to have bleeding edge versions of every piece of software; Slackware is the prime example of this. You are correct that frequently updated backports are another way to achieve the goal of getting important fixes to users on a timely basis.
No he hasn't, and he's said he won't do so in a stable KDE release because of the possibility of regressions. So he's waiting for KDE 4.7.
Ubuntu has fixed the issue by patching Mesa to add GEM back into the intel driver string. Apparently the Fedora guys want an actual fix, so it's not yet clear if they will create the patch themselves or if KDE will make one for 4.7 and Fedora will just backport it.
Ah. Well those are significant bits of information, and which obviously change my perspective somewhat.
The point is they test whether drivers work with GNOME, but it seems they don't do the same when it comes to KDE. The KWin developer at least tests some drivers on different platforms (not that many, though), and is thus able to make some blacklists.
And this is not true. If he cared only about nvidia there wouldn't be any whitelists or blacklists.
I'm not able to change a thing, and the devs are free to do what they want. I simply said what I think. If the open source drivers keep crashing in KDE, I'll simply switch.
I was using xf86-video-ati and KDE together for a year or two without incident. My main problem with the open source drivers is lack of game support (and I believe this is mostly due to upstream Mesa limitations, not the drivers themselves). That, combined with the difficulty of running the AMD/ATI proprietary driver on non-Ubuntu distros, makes an Nvidia GPU plus their proprietary driver the only viable alternative for me.
Since they had already put KWin on notice that it was doing it wrong, why should they have to stay up to date on all of the downstream code and follow up to make sure that KWin was really updated? You seem to think the Mesa devs have nothing better to do than follow up on every project that might use their libraries. Mesa served notice that KWin was doing it wrong; at that point KWin should have fixed its code OR asked for clarification. Neither happened.
People have a hard enough time keeping up with their own code without also having to track how downstream projects might be misusing it.
However, it was flaws in the Mesa drivers that forced KWin to do the "wrong" thing.
The drivers are supposed to report what features they support, and KWin tried to do the right thing by querying them. However, some drivers crashed when queried, and others would claim to support features and then crash when those features were actually used. To work around these Mesa flaws, KWin instead used the driver identification strings. The Mesa devs complained about this, but didn't offer KWin any workable alternative.