Gallium3D / LLVMpipe With LLVM 2.8


  • #21
    Originally posted by bridgman View Post
    V!ncent, the issue here is that many applications include code paths for different levels of GL support, so in most cases exposing a lower level of GL support that is fully HW accelerated will give a better user experience than exposing the higher level of GL support but with a few software fallbacks.
    Of course it does, but everything had better at least _work_. If users run into performance they find unacceptable, they know they need to upgrade their PC. Simply telling them to upgrade can enrage users: "But my dad says this computer is fast! It's a €2200 Sony laptop bought this very year!" <an ultra-flat Vaio netbook that, albeit shipping with Windows 7, has an onboard Intel graphics chip> And all sorts of other situations that would make your toes curl...



    • #22
      Originally posted by V!NCENT View Post
      Of course it does, but everything had better at least _work_. If users run into performance they find unacceptable, they know they need to upgrade their PC. Simply telling them to upgrade can enrage users: "But my dad says this computer is fast! It's a €2200 Sony laptop bought this very year!" <an ultra-flat Vaio netbook that, albeit shipping with Windows 7, has an onboard Intel graphics chip> And all sorts of other situations that would make your toes curl...
      It works without the software fallback as well, if the program is smart enough to check and go through an alternative codepath. With the software fallback present there is no good way of detecting this.
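
      A minimal sketch of that "check first, pick another codepath" idea, assuming a current OpenGL context; the extension name here is just an example:

          #include <GL/gl.h>
          #include <string.h>

          /* Returns nonzero if the extension appears in the GL extension
           * string. A naive substring match -- fine for a sketch. */
          static int have_extension(const char *name)
          {
              const char *exts = (const char *)glGetString(GL_EXTENSIONS);
              return exts != NULL && strstr(exts, name) != NULL;
          }

          static void choose_codepath(void)
          {
              if (have_extension("GL_ARB_occlusion_query")) {
                  /* fast path: hardware occlusion queries */
              } else {
                  /* simpler technique the hardware can actually handle */
              }
          }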

      The radeon devs had a big discussion about this when figuring out how much of OpenGL 2 the r300 driver should support, and there is no good answer. Every solution has problems under certain use cases.



      • #23
        Originally posted by V!NCENT View Post
        Of course it does, but everything had better at least _work_.
        Think for a moment about what you are suggesting. Should we emulate OpenGL 4.1 on GPUs that only support 2.1? Expose geometry shaders even if the hardware lacks them? Should we emulate SSE3 on CPUs that only support MMX so that "everything at least _works_"?

        No, we shouldn't and, no, we *don't* do that. We either fall back to a simpler codepath or we show an error message and exit.

        Transparent software fallbacks are useful only for vertex shader emulation (which can be done fast enough on a CPU). Everything else is better served by a meaningful error message that can be handled by your code (allocate a texture, get GL_OUT_OF_MEMORY, allocate a smaller texture), rather than a fallback that you have *no* control over (allocate a texture, fallback to 1fps software rendering with no recourse).
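
        As a concrete sketch of that "allocate, check, retry smaller" pattern -- standard OpenGL calls, with illustrative sizes and a current context assumed:

            #include <GL/gl.h>
            #include <stdio.h>

            /* Allocate an RGBA8 texture, halving the size on failure,
             * instead of letting a silent fallback eat the error. */
            static GLuint alloc_texture(GLsizei size)
            {
                while (size >= 64) {
                    GLuint tex = 0;
                    glGenTextures(1, &tex);
                    glBindTexture(GL_TEXTURE_2D, tex);

                    while (glGetError() != GL_NO_ERROR)
                        ;  /* clear stale error state first */

                    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, size, size, 0,
                                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

                    if (glGetError() == GL_NO_ERROR)
                        return tex;                /* success */

                    glDeleteTextures(1, &tex);     /* GL_OUT_OF_MEMORY: retry */
                    size /= 2;
                }
                fprintf(stderr, "no texture size fit in memory\n");
                return 0;
            }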

        Fortunately, modern OpenGL drivers try to avoid fallbacks unless you explicitly instruct them otherwise. See, for instance, NVEmulate.



        • #24
          Why can't we architect an API that allows the driver to notify the application that a particular call is being done in software? This would provide the best end-user experience: the application first tries the "ideal" path; if the driver says that path is falling back to software, then the application can (at their option) attempt to do something else that might not fall back, or it can continue on with the software path, to the user's detriment.

          It's just a special case of error handling then, and it can be dealt with the same way that non-fatal exceptions / errors are dealt with in whichever framework you're in. The 3d engine can do a simple scratch test of all the rendering functionality on startup to determine where the software fallback minefield is, then activate the required workarounds.
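
          To make that concrete: the sketch below is purely hypothetical -- driverFeatureIsSoftware() exists in no driver today; it stands in for the proposed notification API and is stubbed out so the example compiles.

              #include <stdio.h>

              typedef enum {
                  FEAT_ANISO,
                  FEAT_GEOMETRY_SHADERS,
                  FEAT_FLOAT_TEXTURES,
                  FEAT_COUNT
              } feature_t;

              /* Stand-in for the imaginary driver query; a real one would
               * come from the driver. Here it pretends all is in hardware. */
              static int driverFeatureIsSoftware(feature_t f)
              {
                  (void)f;
                  return 0;
              }

              static int feature_in_hw[FEAT_COUNT];

              /* The startup "scratch test": probe every feature once and
               * record where the software-fallback minefield is. */
              static void probe_fallbacks(void)
              {
                  for (int f = 0; f < FEAT_COUNT; f++) {
                      feature_in_hw[f] = !driverFeatureIsSoftware((feature_t)f);
                      if (!feature_in_hw[f])
                          printf("feature %d falls back to software\n", f);
                  }
              }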

          This would increase complexity for both the driver (driver developers would have to notify internal APIs of where the software fallbacks are) and the app (app developers would have to think of possible alternate paths for each potential fallback scenario), but the user would win in the end.

          It would also get rid of really broken things you see out in the field, like PlaneShift's hardware presets list, which greps your OpenGL vendor string, guesses what hardware you have, and special-cases the rendering paths based on what it thinks your driver can do. That kind of crap is unfortunately necessary in a world where the driver gives the app developer no useful information, but it becomes outdated almost as soon as it is released. A new chip comes out. A new driver ships with improved features, or with new bugs that require a different path. The Mesa devs decide to change the vendor string. The app gets confused about whether you're running fglrx or the open source drivers. And on and on -- these scenarios crop up constantly in this hackish system.
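
          For illustration, the vendor-string sniffing just described looks roughly like this (glGetString() is real OpenGL; the particular strings show the guesswork involved):

              #include <GL/gl.h>
              #include <string.h>

              /* Every branch is a guess that the next driver release,
               * chip, or renamed string can invalidate. */
              static void pick_preset(void)
              {
                  const char *vendor   = (const char *)glGetString(GL_VENDOR);
                  const char *renderer = (const char *)glGetString(GL_RENDERER);

                  if (vendor && strstr(vendor, "ATI Technologies")) {
                      /* assume fglrx quirks */
                  } else if (vendor && strstr(vendor, "X.Org")) {
                      /* the open-source radeon stack reports differently */
                  } else if (renderer && strstr(renderer, "Mesa")) {
                      /* ...and Mesa may only show up in the renderer string */
                  }
              }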

          What we need in the 3d space is something like what is mostly a solved problem in the audio space: negotiation. In gstreamer, you have caps negotiation between two elements to ensure that element A can be linked to element B if at all possible. With audio, all you need to figure out is what sample format the data has to be transmitted in. With 3d rendering, the decision points are more numerous and the variables are more complicated, but the process should be the same. In real-time 3d, though, you don't always need exactly the functionality you ask for. For instance, if some card doesn't support anisotropic filtering but it does support trilinear, it is not a fatal error to have to switch from anisotropic to trilinear. Your attentive users may notice the quality degradation, but I bet they'd rather have 45 fps with trilinear than 0.5 fps with software anisotropic. Apply the same reasoning for any other potential fallback scenario. Other domains (networking, databases) have a lot of error cases; it's only fair that real-time 3d should too.
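
          For comparison, GStreamer's caps negotiation really does boil down to intersecting two capability sets -- real GStreamer API (1.x caps names), trimmed to the essentials:

              #include <gst/gst.h>

              int main(int argc, char **argv)
              {
                  gst_init(&argc, &argv);

                  /* What the upstream element can produce... */
                  GstCaps *src  = gst_caps_from_string("audio/x-raw, rate=(int){ 44100, 48000 }");
                  /* ...and what the downstream element accepts. */
                  GstCaps *sink = gst_caps_from_string("audio/x-raw, rate=(int)48000");

                  /* Negotiation: the intersection is what actually flows. */
                  GstCaps *common = gst_caps_intersect(src, sink);
                  gchar *desc = gst_caps_to_string(common);
                  g_print("negotiated: %s\n", desc);

                  g_free(desc);
                  gst_caps_unref(src);
                  gst_caps_unref(sink);
                  gst_caps_unref(common);
                  return 0;
              }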



          • #25
            What we need in the 3d space is something like what is mostly a solved problem in the audio space: negotiation.
            This has been suggested many times over at opengl.org, but the OpenGL ARB is completely against the idea. OpenGL is consciously designed with a "create a resource and check if it succeeded" mindset (i.e. you cannot ask the hardware "can I create this resource? yes/no" or "do you actually support this? yes/no" beforehand(*)). Moreover, OpenGL doesn't use callbacks (consciously, again), so the notification mechanism you suggest cannot work: by the time you create the problematic resource you've already fallen back to software rendering, with no way to intercept or fix that.
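
            Shader compilation is the canonical create-then-check example -- standard GL 2.0 calls, assuming a context and headers that expose them:

                #include <GL/gl.h>
                #include <stdio.h>

                /* You learn whether it worked only after the fact; there
                 * is no "will this compile?" query. */
                static GLuint build_shader(GLenum type, const char *src)
                {
                    GLuint shader = glCreateShader(type);
                    glShaderSource(shader, 1, &src, NULL);
                    glCompileShader(shader);

                    GLint ok = GL_FALSE;
                    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
                    if (!ok) {
                        char log[1024];
                        glGetShaderInfoLog(shader, sizeof log, NULL, log);
                        fprintf(stderr, "compile failed: %s\n", log);
                        glDeleteShader(shader);
                        return 0;
                    }
                    return shader;
                }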

            This is one of those legacy design decisions that OpenGL still carries to this day and that make developers' lives harder. The sad part is that OpenGL *could* have been a much cleaner API if the original 2.0 (3Dlabs) or 3.0 (Longs Peak) proposals had gone through. Sometimes backwards compatibility is a heavy burden.

            (*) with the exception of proxy textures that are fundamentally broken anyway
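
            For reference, the proxy-texture probe looks like this (standard OpenGL calls); the catch is that a successful probe still doesn't guarantee the real allocation stays in hardware:

                #include <GL/gl.h>

                /* Ask the driver whether a size x size RGBA8 texture is
                 * supported, without allocating anything. */
                static int probe_texture_size(GLsizei size)
                {
                    GLint width = 0;
                    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA8, size, size, 0,
                                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
                    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0,
                                             GL_TEXTURE_WIDTH, &width);
                    return width != 0;   /* 0 means "rejected" */
                }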



            • #26
              (I hate the edit limit)

              Which brings us to this:
              Why can't we architect an API that allows the driver to notify the application that a particular call is being done in software? This would provide the best end-user experience: the application first tries the "ideal" path; if the driver says that path is falling back to software, then the application can (at their option) attempt to do something else that might not fall back, or it can continue on with the software path, to the user's detriment.
              This is impossible given the current OpenGL design, as long as the driver supports transparent fallbacks. Get fallbacks out of the way, and well-written applications can work with the rest.

              I know I am going to be trolled for saying this again, but OpenGL is not a particularly good API by 2010 standards. Back in 199x it was great compared to the competition: simple, fast, with wider hardware/platform support and ambitious extensions. The problem is that the competition has moved on since then, leaving OpenGL to struggle with its long legacy.

              OpenGL 2.0 would have fixed that by killing the fixed-function pipeline in favor of shaders. The ARB deemed backwards compatibility too valuable and overturned the 3Dlabs proposal.

              OpenGL 3.0 would have dragged the API kicking and screaming into the modern world. Khronos again deemed backwards compatibility too valuable and overturned the original proposal. It is said that Nvidia was (one of) the strongest opponents of the Longs Peak overhaul.

              What we are left with is a legacy-ridden API that plays a crucial role in our software ecosystem. We cannot replace it, we cannot fix it; we have to grit our teeth and endure.

              As someone once said of C++, it's as if there's a simpler, cleaner API inside, trying to come out of the mess.

              /Rant



              • #27
                For me, there could be two OpenGL standards: pure HW and mixed SW/HW.



                • #28
                  Originally posted by NomadDemon View Post
                  For me, there could be two OpenGL standards: pure HW and mixed SW/HW.
                  The OpenGL standard doesn't dictate the implementation details. Maybe you meant to say drivers?

                  In that case, a pure SW driver might actually be faster than a HW/SW combo, especially for OpenGL 3.x and beyond.



                  • #29
                    Implementation, I mean.

                    Honestly? I don't care. I just want it to work fast, stably, and without problems :< Right now I can't even play FEAR or CS, because the AMD driver crashes in CS 1.6 while Nvidia's doesn't.
                    FEAR doesn't even start...
                    And there's plenty of other stuff I just want to play/do :< 5 fps isn't a good result for a Radeon 4850, even in Crysis.



                    • #30
                      Crysis needs at least Windows Vista, not sure about the others.

                      Which version of Windows are you trying to run them with?
