Gallium3D / LLVMpipe With LLVM 2.8
Originally posted by V!NCENT View Post: Of course it does, but everything had better at least _work_. If users run into performance that is unacceptable to them, they know they need to upgrade their PC. Simply telling them to upgrade can enrage users: "But my dad says this computer is fast! It's a €2200 Sony laptop bought this very year!" (an ultra-flat Vaio netbook with an onboard Intel graphics chip, albeit one shipping with Windows 7). And all sorts of other situations that would make your toes curl in your shoes...
The radeon devs had a big discussion about this when figuring out how much of OpenGL 2 the r300 driver should support, and there is no good answer. Every solution has problems under certain use cases.
Comment
-
Originally posted by V!NCENT View Post: Of course it does, but everything had better at least _work_.
No, we shouldn't and, no, we *don't* do that. We either fall back to a simpler codepath or we show an error message and exit.
Transparent software fallbacks are useful only for vertex shader emulation (which can be done fast enough on a CPU). Everything else is better served by a meaningful error message that can be handled by your code (allocate a texture, get GL_OUT_OF_MEMORY, allocate a smaller texture), rather than a fallback that you have *no* control over (allocate a texture, fallback to 1fps software rendering with no recourse).
Fortunately, modern OpenGL drivers try to avoid fallbacks unless you explicitly instruct them otherwise. See, for instance, NVEmulate.
Comment
-
Why can't we architect an API that allows the driver to notify the application that a particular call is being done in software? This would provide the best end-user experience: the application first tries the "ideal" path; if the driver says that path is falling back to software, then the application can (at its option) attempt to do something else that might not fall back, or it can continue on with the software path, to the user's detriment.
It's just a special case of error handling then, and it can be dealt with the same way that non-fatal exceptions / errors are dealt with in whichever framework you're in. The 3d engine can do a simple scratch test of all the rendering functionality on startup to determine where the software fallback minefield is, then activate the required workarounds.
This would increase complexity for both the driver (driver developers would have to notify internal APIs of where the software fallbacks are) and the app (app developers would have to think of possible alternate paths for each potential fallback scenario), but the user would win in the end. And it would get rid of really broken things you see out in the field, like PlaneShift's hardware presets list that attempts to grep your OpenGL vendor string, figure out what hardware you have, and special-case the rendering paths based on what it thinks your driver can do. That kind of crap is unfortunately necessary in a world where the driver does not give you any useful information as an app developer. The problem with this strategy is that the information becomes outdated almost as soon as it is released. A new chip comes out. A new driver is released with improved features, or new bugs that require a different path to be used. The Mesa devs decide to change the vendor string. The app gets confused about whether you're using fglrx or the open source drivers. And on and on and on -- these scenarios crop up constantly in this hackish system.
What we need in the 3d space is something like what is mostly a solved problem in the audio space: negotiation. In gstreamer, you have caps negotiation between two elements to ensure that element A can be linked to element B if at all possible. With audio, all you need to figure out is what sample format the data has to be transmitted in. With 3d rendering, the decision points are more numerous and the variables are more complicated, but the process should be the same. In real-time 3d, though, you don't always need exactly the functionality you ask for. For instance, if some card doesn't support anisotropic filtering but it does support trilinear, it is not a fatal error to have to switch from anisotropic to trilinear. Your attentive users may notice the quality degradation, but I bet they'd rather have 45 fps with trilinear than 0.5 fps with software anisotropic. Apply the same reasoning for any other potential fallback scenario. Other domains (networking, databases) have a lot of error cases; it's only fair that real-time 3d should too.
Comment
-
What we need in the 3d space is something like what is mostly a solved problem in the audio space: negotiation.
This is one of those legacy design decisions that OpenGL still carries to this day and that make developers' lives harder. The sad part is that OpenGL *could* have been a much cleaner API if the original 2.0 (3dlabs) or 3.0 (Longs Peak) proposals had gone through. Sometimes backwards compatibility is a heavy burden.
(*) with the exception of proxy textures that are fundamentally broken anyway
Comment
-
(I hate the edit limit)
Which brings us to this:
Why can't we architect an API that allows the driver to notify the application that a particular call is being done in software? This would provide the best end-user experience: the application first tries the "ideal" path; if the driver says that path is falling back to software, then the application can (at their option) attempt to do something else that might not fall back, or it can continue on with the software path, to the user's detriment.
I know I am going to be trolled for saying this again, but OpenGL is not a particularly good API by 2010 standards. Back in 199x it was great compared to the competition: simple, fast, with wider hardware/platform support and ambitious extensions. The problem is that the competition has moved on since then, leaving OpenGL to struggle with its long legacy.
OpenGL 2.0 would have fixed that by killing the fixed-function pipeline in favor of shaders. The ARB deemed backwards support too useful and overturned the 3dlabs proposal.
OpenGL 3.0 would have dragged the API kicking and screaming into the modern world. Khronos again deemed backwards support too useful and overturned the original proposal. It is said that Nvidia was (one of) the strongest opponents of the Longs Peak overhaul.
What we are left with is a legacy-ridden API that plays a crucial role in our software ecosystem. We cannot replace it, we cannot fix it; we have to grit our teeth and endure.
As someone said for C++ before, it's as if there's a simpler, cleaner API trying to come out of the mess.
/Rant
Comment
-
Originally posted by NomadDemon View Post: for me, there can be 2 OpenGL standards
pure HW and mixed SW/HW
In that case, a pure SW driver might actually be faster than a HW/SW combo, especially for OpenGL 3.x and beyond.
Comment
-
implementation, I mean
truly? I don't care... I just want it to work fast, stable, and with no problems :< right now I can't even play FEAR or CS, because AMD crashes on CS 1.6 and Nvidia doesn't...
FEAR doesn't even start...
and many other things I just want to play/do :< 5 fps isn't a good result for a Radeon 4850, even in Crysis
Comment