OpenVG State Tracker For Gallium3D Tomorrow
Originally posted by bridgman
I guess it depends mostly on how much video memory you have relative to the sum of the buffer requirements for the different apps and state trackers. The memory manager can flip things between video and system memory, but you take a performance hit if that happens often. I mention that rather than the obvious GPU load because an overloaded GPU slows down predictably (two tasks run half as fast or better), but if you start thrashing video memory you can quickly see a much larger drop in performance.
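The budgeting problem bridgman describes can be sketched in a few lines. Everything below is invented for illustration: the VRAM size and the per-client buffer footprints are made-up numbers, not figures from his post.

```python
# Hypothetical sketch: if the sum of all clients' buffer requirements
# exceeds video memory, the memory manager must evict buffers to system
# RAM, and performance falls off a cliff (thrashing).

VRAM_MB = 256  # assumed card; not from the post

# Made-up per-client footprints in MB (textures, vertex buffers, etc.)
clients = {
    "game (Direct3D tracker)": 180,
    "compiz (OpenGL tracker)": 48,
    "physics (OpenCL tracker)": 64,
}

total = sum(clients.values())
overcommit = max(0, total - VRAM_MB)
print(f"requested {total} MB of {VRAM_MB} MB VRAM; overcommit = {overcommit} MB")
if overcommit:
    print("some buffers must migrate to system memory -> expect thrashing")
```

Each tracker alone might fit comfortably; it is the sum that pushes past the limit, which is why running several trackers together can cost more than each one measured by itself.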
Thanks for the answer!
Originally posted by Craig73
Thanks for clarifying (to you both). What's not clear to me is how much of this optimization is still available under Gallium3D without breaking the generalization too much; although I presume there would be other optimizations available (new state trackers, state tracker tweaking, refactoring/redesign of the Gallium generalizations, LLVM optimization improvements... likely tonnes to do yet).
...that being said, that is also too narrow a focus. After the 80% is achieved, I expect developer resources will find more low-hanging fruit in additional state trackers or other areas of the graphics stack... perhaps an XOrg release for Michael ;-)
For simpler workloads (where the GPU is not shader-limited) I think the open source drivers will get a lot closer to 100%. Strictly speaking you probably don't need Gallium for that but I expect it will help.
Originally posted by Pfanne
Does anyone actually know how well all these state trackers work alongside each other?
For example: I'm playing a game which uses the Direct3D state tracker (if that ever happens), plus Compiz, which uses the OpenGL state tracker, while the game's physics effects are accelerated with OpenCL...
Will all this work well together, or will you notice a performance hit that is higher than for each tracker by itself?
Last edited by bridgman; 01 May 2009, 05:33 PM.
Does anyone actually know how well all these state trackers work alongside each other?
For example: I'm playing a game which uses the Direct3D state tracker (if that ever happens), plus Compiz, which uses the OpenGL state tracker, while the game's physics effects are accelerated with OpenCL...
Will all this work well together, or will you notice a performance hit that is higher than for each tracker by itself?
Originally posted by MostAwesomeDude
Close. The upper tiers of performance require a lot of expensive generalized optimizations. If an application runs slowly, we profile it, look at what parts are slow, and optimize those parts in the driver. As a bonus, other applications get faster too. Sometimes this is stuff like adding support for new OGL extensions; sometimes it's things like redoing math routines in assembly. Whatever gets us more speed by eliminating bottlenecks, really.
...that being said, that is also too narrow a focus. After the 80% is achieved, I expect developer resources will find more low-hanging fruit in additional state trackers or other areas of the graphics stack... perhaps an XOrg release for Michael ;-)
Originally posted by drag
Close. The last 20-30% in performance requires a lot of _application-specific_ tweaks.
Originally posted by Craig73
Some comments by driver developers suggest that we should be able to create a high-performance driver for Linux/BSD/etc. hitting about 80% (can't remember the exact number) of the performance of proprietary drivers, generally much better than what is available today. Their reasoning was that the last 20% requires a LOT of hardware-specific tweaking and tuning, and that LLVM currently doesn't optimize superscalar architectures well enough.
Close. The last 20-30% in performance requires a lot of _application-specific_ tweaks.
That is, people buying video cards are generally looking for good performance in specific areas... some people want very good performance in Maya for 3D editing, or, for marketing reasons, the ATI or Nvidia folks want the best possible Quake4 performance.
Stuff like that.
Of course hardware tweaks are very important... but even if OSS drivers reach the same level of hardware-support sophistication as proprietary drivers (which isn't going to happen very soon), they still won't look good in benchmarks.
Linux developers tend to shy away from application-specific stuff... it's a layering violation to make low-level behavior specific to certain high-level applications, which means you end up with multiple code paths and thus are much more likely to run into bugs and have big maintenance issues. Plus, OSS folks just don't have the resources to go through applications one by one and hack support for specific apps into the drivers.
Maybe the OSS folks are hoping a long-term solution is to take advantage of JIT engines (like what LLVM can support) to make the drivers self-optimizing; that is, if you're running benchmarks or something like that, the second or third pass will be faster than the first. But that is some serious computer science voodoo, so I wouldn't expect it to be very effective any time in the next few years. If Linux gains acceptance as an OpenCL computing platform, that would probably help out a _lot_, since you'd have plenty of commercial interest in GPU optimizations. Of course, stability and bug fixing will come first, and that is going to take a while in itself.
Of course, if you're using your own 3D stuff or playing more indie or open source/free software games, there isn't much commercial interest from folks like Nvidia or ATI for their proprietary drivers, so for that sort of work the OSS drivers may actually end up being competitive.
Last edited by drag; 01 May 2009, 12:03 PM.
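The "second pass is faster" idea drag describes boils down to caching JIT output: compile a shader the first time it is seen, reuse the result afterwards. This toy sketch does not use LLVM, and every name in it is invented for illustration:

```python
# Toy illustration of a self-optimizing compile path: the expensive
# compile runs once per unique shader; repeat passes hit the cache.

compile_count = 0
_cache = {}

def compile_shader(source):
    """Pretend-expensive compile; a real driver would invoke a JIT here."""
    global compile_count
    compile_count += 1
    return f"native({source})"  # stand-in for generated machine code

def get_shader(source):
    # First pass: compile and cache. Later passes: cache hit, no recompile.
    if source not in _cache:
        _cache[source] = compile_shader(source)
    return _cache[source]

# Two frames drawing with the same two shaders: four lookups, two compiles.
frame_shaders = ["vs_main", "fs_main", "vs_main", "fs_main"]
binaries = [get_shader(s) for s in frame_shaders]
print(f"{len(frame_shaders)} lookups, {compile_count} compiles")
```

The hard computer-science part drag alludes to is not the caching itself but recompiling hot shaders with better optimizations based on observed behavior; the cache is only the scaffolding.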
Originally posted by Pfanne
well, that's nice
having all that accelerated would be useful ^^
Originally posted by mendieta
Guys, this is all cool, and it looks like the next generation of open source Linux graphics will probably beat the crap out of the proprietary world.
From my understanding (I am not a driver developer), Gallium3d will allow us to be more competitive with good proprietary drivers and make development and maintenance more efficient.
Gallium3D (as already described) is a video card driver architecture modeled on a generic representation of a modern 3D graphics card. The driver is built as distinct layers that separate the hardware, the OS, and the graphics API from each other. This means a developer primarily has to focus on writing the hardware-specific layer, a relatively small portion of a modern graphics driver.
This means a new graphics card can get a complete driver with relatively little effort, especially for a vendor without an existing Linux driver. It also means that adding a new API (say, OpenGL 3) to all existing drivers only has to be done once.
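The layering described above can be sketched as an interface between API-specific front ends and hardware-specific back ends. This is only an illustrative shape, not Gallium's real API; every class and method name below is invented:

```python
# Illustrative sketch of the split: state trackers translate an API to
# one generic interface, so each new hardware backend implements only
# that interface, and each new API reuses every existing backend.

class PipeDriver:
    """Generic hardware interface (hypothetical stand-in for Gallium's)."""
    def draw(self, vertices):
        raise NotImplementedError

class R300Backend(PipeDriver):
    """Hardware-specific layer: the only part written per GPU."""
    def draw(self, vertices):
        return f"r300: drew {len(vertices)} vertices"

class OpenGLTracker:
    """API-specific layer: written once, works with any backend."""
    def __init__(self, pipe):
        self.pipe = pipe
    def glDrawArrays(self, verts):
        return self.pipe.draw(verts)

class OpenVGTracker:
    """A second API added with zero new hardware code."""
    def __init__(self, pipe):
        self.pipe = pipe
    def vgDrawPath(self, verts):
        return self.pipe.draw(verts)

hw = R300Backend()
print(OpenGLTracker(hw).glDrawArrays([1, 2, 3]))
print(OpenVGTracker(hw).vgDrawPath([4, 5]))
```

With N hardware backends and M state trackers, this structure needs roughly N + M implementations instead of N x M monolithic drivers, which is the development-effort saving being claimed.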
Making something more generic and adding more layers does slow things down a little and/or removes some optimization options. That being said, the effort saved by sharing and generalizing all this code should mean stable, feature-complete drivers arrive faster for open systems (Linux/BSD/etc.). And using LLVM to optimize the instructions the driver emits should help accelerate rendering considerably, more than making up for the extra layers.
Now does this blow the pants off of proprietary solutions?
Some comments by driver developers suggest that we should be able to create a high-performance driver for Linux/BSD/etc. hitting about 80% (can't remember the exact number) of the performance of proprietary drivers, generally much better than what is available today. Their reasoning was that the last 20% requires a LOT of hardware-specific tweaking and tuning, and that LLVM currently doesn't optimize superscalar architectures well enough.
I'm not a driver developer, so I can't comment on the limits of the design choices... but it seems relatively obvious that trading away that initial 20% is more than worth it for better Linux/BSD drivers overall, greater development efficiency, and more developer time to tweak drivers or improve other areas of the graphics stack.
...and all the excitement around GPU computing will mean lots of focus on performance from a broader set of developers, so things like LLVM superscalar optimizations shouldn't be that far behind.
It remains to be seen whether open source or proprietary will be king of the FPS scores in the end... but it will be exciting to watch.
Exciting times...