GPU Software Fallbacks: What's Better?


  • #21
    Vote for SW fallback

    I vote for it, with the requirement that a popup comes up that notifies you, or even lets you downgrade the OpenGL version. If you look at the BeagleBoard site, there was a project proposal to enable OpenGL via a DSP. For the Raspberry Pi it makes sense.



    • #22
      You won't be able to bring up a dialog while the application is running, and slamming text on top of the rendered image could be a very bad idea if it obscures whatever it is supposed to render. AMD does do this, though, with unsupported graphics cards.

      If it were turned into an option to have a software fallback, it would need to be handled by a control panel that you set things up in before running the application.



      • #23
        Originally posted by stan View Post
        Of course software fallbacks are the way to go! I can't believe Gallium3D doesn't have them! After all, you can always choose to turn the software fallback off if you want speed and visual mistakes, but if you don't have the software fallback then you're stuck with the visual mistakes or a black screen. So swrast gives users a choice, which is a good thing.
        Gallium was designed around the idea (taken from DX) that the GPU will be able to support, in hardware, all of the features required for the API being exposed. If the hardware does not have enough features for that API, you shouldn't expose it. It mainly targets DX10-class hardware, which is fine for PC hardware, but can be tricky for certain embedded parts. That said, most recent GL ES hardware supports enough features that it's generally not a problem.



        • #24
          Originally posted by BlackStar View Post
          Let me try to make this as clear as possible:

          Give us GPU feature levels.
          Give us GPU feature levels.
          Give us GPU feature levels.
          Give us GPU feature levels.
          Give us GPU feature levels.

          We have been clamoring for this since 2003. Khronos have one final chance to get this right in OpenGL 5.0 - but they will most likely !@#$ this up again, like they did with OpenGL 3.0 and 4.0.

          The writing is on the wall. Between Metal, Mantle and D3D12, OpenGL will have to adapt or die.
          From MSDN
          "A feature level does not imply performance, only functionality. Performance is dependent on hardware implementation."


          So that sounds exactly the same as implementing all of OpenGL 3 in software.



          • #25
            Originally posted by Zan Lynx View Post
            From MSDN
            "A feature level does not imply performance, only functionality. Performance is dependent on hardware implementation."


            So that sounds exactly the same as implementing all of OpenGL 3 in software.
            The problem is, when using OpenGL, you are writing hardware-accelerated graphics code. If you start software-rendering a hardware-access API, you are violating that expectation.

            I don't break out GLSL because I think it's fun. I break it out because the host CPU is insufficient for what I'm trying to do and I need the architecture of graphics hardware. There is a reason you are using OpenGL in the first place.



            • #26
              Originally posted by Zan Lynx View Post
              From MSDN
              "A feature level does not imply performance, only functionality. Performance is dependent on hardware implementation."


              So that sounds exactly the same as implementing all of OpenGL 3 in software.
              No. You couldn't be any more wrong even if you tried.

              A DX level is supported if and only if it can be accelerated in hardware. If a GPU exposes feature level 11_0, then I can use all 11_0 features without going through a software fallback. If the GPU supports most of 11_0, except for a single feature, then that GPU will not advertise feature level 11_0.

              If an IHV tries to pull the shenanigans discussed here, i.e. advertise feature level 11_0 but only support e.g. 10_1 in hardware, then Microsoft will simply reject their drivers during the certification process. It's all or nothing.

              This is a good thing both for developers and for users.
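
              For illustration, here is a minimal sketch of what that looks like from the application side, using the standard D3D11CreateDevice call (error handling trimmed; the helper name is just for illustration):

              #include <d3d11.h>

              // Ask the runtime for the best hardware feature level it will give us.
              D3D_FEATURE_LEVEL query_feature_level()
              {
                  const D3D_FEATURE_LEVEL wanted[] = {
                      D3D_FEATURE_LEVEL_11_0,
                      D3D_FEATURE_LEVEL_10_1,
                      D3D_FEATURE_LEVEL_10_0,
                  };

                  ID3D11Device*        device  = nullptr;
                  ID3D11DeviceContext* context = nullptr;
                  D3D_FEATURE_LEVEL    got     = static_cast<D3D_FEATURE_LEVEL>(0);

                  // D3D_DRIVER_TYPE_HARDWARE: no WARP/software device is created here.
                  HRESULT hr = D3D11CreateDevice(
                      nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      wanted, ARRAYSIZE(wanted), D3D11_SDK_VERSION,
                      &device, &got, &context);

                  if (FAILED(hr))
                      return static_cast<D3D_FEATURE_LEVEL>(0);  // no capable hardware device at all

                  // 'got' is the highest requested level the driver supports in hardware.
                  // If it reports 11_0, every 11_0 feature is usable with no hidden fallback;
                  // if only 10_1 comes back, the application takes its 10_1 path instead.
                  context->Release();
                  device->Release();
                  return got;
              }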



              • #27
                How are feature levels any better than GL versions?



                • #28
                  Originally posted by AJenbo View Post
                  How are feature levels any better than GL versions?
                  Originally posted by BlackStar View Post
                  No. You couldn't be any more wrong even if you tried.

                  A DX level is supported if and only if it can be accelerated in hardware. If a GPU exposes feature level 11_0, then I can use all 11_0 features without going through a software fallback. If the GPU supports most of 11_0, except for a single feature, then that GPU will not advertise feature level 11_0.

                  If an IHV tries to pull the shenanigans discussed here, i.e. advertise feature level 11_0 but only support e.g. 10_1 in hardware, then Microsoft will simply reject their drivers during the certification process. It's all or nothing.

                  This is a good thing both for developers and for users.
                  Read the above. With OpenGL you can claim "Support" if you manage to back the feature up with software rendering. Feature levels mandate that if you say you "support it" then you HAVE TO be able to do it ALL in-hardware. "Can't do it in hardware? Too bad for you, stop lying and screw off."
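
                  As an aside, about the only way a GL application can even guess that its "supported" features are being software-rendered is a heuristic like inspecting the renderer string for a known software rasterizer. A rough sketch (the helper name and the string list are illustrative, not exhaustive):

                  #include <GL/gl.h>
                  #include <cstring>

                  // Crude heuristic, run with a current GL context: the API itself never
                  // says whether a "supported" feature is actually hardware accelerated.
                  bool looks_like_software_gl()
                  {
                      const char* renderer =
                          reinterpret_cast<const char*>(glGetString(GL_RENDERER));
                      if (!renderer)
                          return true;  // no context or broken driver
                      return std::strstr(renderer, "llvmpipe")      // Mesa's LLVM rasterizer
                          || std::strstr(renderer, "softpipe")      // Mesa's reference rasterizer
                          || std::strstr(renderer, "GDI Generic");  // Windows' software GL 1.1
                  }
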
                  All opinions are my own, not those of my employer, if you know who they are.



                  • #29
                    Originally posted by Ericg View Post
                    Read the above. With OpenGL you can claim "Support" if you manage to back the feature up with software rendering. Feature levels mandate that if you say you "support it" then you HAVE TO be able to do it ALL in-hardware. "Can't do it in hardware? Too bad for you, stop lying and screw off."
                    So not really any better; in fact GL might give a better result in some cases.



                    • #30
                      Originally posted by AJenbo View Post
                      So not really any better; in fact GL might give a better result in some cases.
                      GL is better in most cases in theory because it supports extensions. If you don't get, say, an OpenGL 4.2 context for some feature you want, you can still query whether the extensions you need are available on their own.

                      In DirectX you either have complete version support or not. It makes the development more straightforward, but you can more finely optimize an OpenGL game across all hardware if you conditionally use optimizations that may or may not be available (like indirect draws).

                      In practice it takes a lot of extra work to do that, it means vendors get lazy about implementing whole versions because they can cherry-pick which extensions to implement, and the disparate implementations across vendors break all over the place. To DX's advantage, you can do broader engine branching: tessellation? DX11. Geometry shaders? DX10. Otherwise DX9. With GL it could be any version from 2.1 to 4.4 with any number of features missing.
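
                      For example, the per-extension branching described above looks roughly like this in a core profile context (a sketch assuming an extension loader such as GLEW has already set up the GL 3.0+ entry points; GL_ARB_multi_draw_indirect is just the example extension):

                      #include <GL/glew.h>
                      #include <cstring>

                      // Core-profile extension check: walk the extension list with glGetStringi
                      // and see whether the driver exposes the one we want to branch on.
                      bool has_extension(const char* name)
                      {
                          GLint count = 0;
                          glGetIntegerv(GL_NUM_EXTENSIONS, &count);
                          for (GLint i = 0; i < count; ++i) {
                              const char* ext = reinterpret_cast<const char*>(
                                  glGetStringi(GL_EXTENSIONS, static_cast<GLuint>(i)));
                              if (ext && std::strcmp(ext, name) == 0)
                                  return true;
                          }
                          return false;
                      }

                      // Take the fast path only if the extension is really there, regardless of
                      // which GL version the driver advertises:
                      //   if (has_extension("GL_ARB_multi_draw_indirect")) { /* indirect draws */ }
                      //   else                                             { /* fallback path  */ }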

