It'll be good, and maybe even 90% as good, but I don't foresee Radeon being 100% as good as Catalyst in all cases. Like I said above, the Catalyst devs have themselves said that Catalyst has tons of hand-tuning done for various cards and generations, things that would make the Mesa code nothing but a tangle of ifdefs and make the code unmaintainable in the open source fashion.
Originally Posted by edoantonioco
I think Bridgman originally said the target was 80% of Catalyst, but that was without a few optimizations (like the SB shader backend, for example) and some of the other 'more complicated' work that no one was actually expected to do but someone ended up doing anyway. So we may very well hit 90% or maybe even 98% of Catalyst performance. But EXACT same Catalyst performance, across all cards, in all use-cases...? I doubt it. It's just not feasible without hand-tuning everything.
Wow, the 6570 seems unexpectedly slow. The 5670 isn't tested, but I guess I can still be happy with my choice back then to take the 5670 instead of a 6xxx series card, since support in the free drivers was better at the time.
Though I thought the 6570 falls into the mainstream category, and IIRC some of the devs said those would be the easiest to handle. The small GPUs need lots of attention and care to reach fair performance, the mainstream cards are nice, but the high-end cards need attention again, since the CPU and other components may already be a bottleneck, so you can't use their full potential.
But besides the little cards and that one exception, all of them show very playable framerates, and that is a good thing.
The original estimate I mentioned was 60-70%, but that wasn't a *target* -- just an estimate from our architects of the performance we could expect from a "vanilla" implementation with a simple shader compiler/translator. It was more a reflection of the amount of developer effort expected to be available. As Ericg said, the SB backend and the use of LLVM have already invalidated some of those assumptions, so the performance limit is really a matter of developer time and keeping the code clean.
Originally Posted by Ericg
Unfortunately optimization is one of those "exponentially expensive" things, where each incremental improvement tends to be both smaller and more time-consuming than the last one.
The primary "advantage" Catalyst has is that significant chunks of the code are shared across most of the PC market (ie not just Linux) and therefore can justify more developer time, with the caveat that the resulting code usually needs to be kept closed because of requirements from the other OSes it supports.
If the game offers the option to disable MSAA in its own menus, that option will work. It's only driver-level forcing that is not complete.
Wasn't the reason gallium was so cool that it was very portable?
Originally Posted by bridgman
"Gallium drivers have no OS-specific code (OS-specific code goes into the "winsys/screen" modules) so they're portable to Linux, Windows and other operating systems."
"Gallium3D is designed to support multiple API's, multiple GPU's, and Multiple OS and windowing systems."
And there are sometimes threads here about gallium3d being used in proprietary drivers, most recently http://phoronix.com/forums/showthrea...ietary-drivers
So since mesa is MIT licensed, significant chunks of the open source driver could theoretically be used on most of the PC market (ie not just linux) too, right?
Yes, if we wanted to "start over" and discard most of a $20-50M investment (I don't know the exact numbers) in the current driver... but where is the justification for doing that? Going with gallium3d, for example, would just mean replacing our existing hardware layer (hwl) with a different one a bit further down the stack. Having the hwl further down the stack does make for a smaller and more maintainable hwl driver and allows relatively more of the upper-level code to be common, but it also tends to make performance tuning more difficult once you get out to the edges.
Originally Posted by ChrisXY
If we were writing new drivers from scratch (which AFAIK was the scenario VMware faced when writing gallium3d-based drivers for their emulated SVGA hardware) *and* were looking for a good compromise between maintainability and performance then using gallium3d would make a lot of sense. In the case of both AMD and NVidia, however, where workstation and gaming markets both require "every last scrap of performance" to win sales, going with something more specialized is usually the only option.
The key point though is that it's not "lack of use of open source code" that keeps the closed driver closed, it's the need to design it around non-Linux operating systems. We could make a closed-source driver using some open source code but it would still have to be closed source.
Last edited by bridgman; 09-24-2013 at 09:50 AM.
My point was that in all of the Phoronix benchmarks, 60fps@1920x1080 is attainable with the open source drivers. Arguing that 250fps isn't acceptable because the hardware can theoretically do 280fps is really irrelevant when your monitor can't display that and your game is vsynced to 60fps (yes, I know professional gamers prefer no vsync for lowest latency, but 99% of people don't, because it results in frame tearing). If you need more than 60fps@1920x1080, then either you game at a higher resolution than 1920x1080 (probably multi-screen, perhaps SLI), or you have a 120Hz monitor, or the Phoronix benchmarks are not challenging enough to represent your game+graphics settings (in which case you need to run your own benchmarks).
Originally Posted by Vim_User
If open source drivers get 85% of the fps of closed drivers, it only impacts your gaming if your card would otherwise be running at 60-70fps with closed drivers, because that means the open source driver is going to fall below 60fps whereas closed drivers wouldn't. But if you would get less than 60fps with the closed driver, then you need a better card anyway. At 80%+ performance, there's a very small number of game/card/resolution configurations where you're going to exceed the 60fps threshold with a closed driver, but not with the open one.
So you can't just consider average fps - the real metric is whether avg or min fps exceeds the hardware limitation of the monitor for the particular game+settings that you are using. The monitor will never display more than a fixed fps (usually 60). Excess frames are never even transferred to the monitor. You can't see them.
(Incidentally, I'm using 60fps as the upper limit because that is the max refresh rate of the majority of monitors. In reality, 30fps is acceptable for most people - most PS3 and Xbox 360 games are 30fps, including the $1 billion GTA5.)
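To illustrate the arithmetic behind that argument, here's a minimal sketch (the function name and structure are my own, not from the thread) computing the window of closed-driver framerates where only the closed driver stays above the monitor's refresh rate:

```python
def affected_range(ratio: float, refresh: float = 60.0) -> tuple[float, float]:
    """Return the closed-driver fps range [lo, hi) where the closed driver
    meets the monitor's refresh rate but the open driver does not.

    ratio: open-driver fps as a fraction of closed-driver fps (e.g. 0.85).
    """
    # open fps = closed fps * ratio; open fps < refresh  =>  closed fps < refresh / ratio
    return (refresh, refresh / ratio)

lo, hi = affected_range(0.85)  # the 85% figure from the post above
print(f"Only the closed driver hits 60fps when closed-driver fps is in [{lo:.0f}, {hi:.1f})")
```

With 85% relative performance this gives roughly the 60-70fps window mentioned above: below 60fps neither driver is fast enough, and above about 70.6fps both exceed the monitor's refresh rate anyway.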
Recently bought an XFX 6570.. it was crap..
Subj.. decided to replace my old 4xxx series AMD GPU with a newer one.. Here you go: Ubuntu didn't support this card out of the box, and the proprietary drivers weren't ready either.. testing icon, all sorts of overheating/artifacts despite a huge radiator + 3 additional system coolers. Switched back to the old 4xxx card..
Some black voodoo magic going on in the area of AMD/NVIDIA products + recent Linux kernels. I hope I can fix all the issues with the system, but I still can't say I would currently recommend AMD products + Linux as a stable system to anyone. I'm having all kinds of issues with AMD hardware, ranging from the newest AMD FM2 GPU+CPU hardware to older systems, on modern Linux software (actually FM2 has hardware issues on MS systems too).
Well, the real question is whether it's better for them to cut costs and share risks or to keep a competitive advantage. OpenStack participants think otherwise.
Originally Posted by Asariati
In the defence industry, yes: even in an industry where competitive advantage should matter much more, risk and cost sharing is really common (Russia is cooperating with France and Italy).
Businesses exist to earn money for the shareholders by any means necessary. Nothing else.
Well, the problem is that closed source drivers on Linux just suck. Try to install them on openSUSE Tumbleweed and you'll get the picture. Not to mention other problems.