
Thread: GPU Software Fallbacks: What's Better?

  1. #21
    Join Date
    Jan 2011
    Posts
    20

    Default Vote for SW fallback

    I vote for it, with the requirement that a popup notifies you and perhaps even lets you downgrade the OpenGL version. If you look at the BeagleBoard site, there was a project proposal to enable OpenGL via a DSP. For the Raspberry Pi it makes sense.

  2. #22
    Join Date
    Sep 2011
    Posts
    688

    Default

    You won't be able to bring up a dialog while the application is running, and slamming text on top of the rendered image could be a very bad idea if it obscures whatever it is supposed to render. AMD does do this with unsupported graphics cards, though.

    If it were turned into an option to have a software fallback, it would need to be handled by a control panel where you set things up before running the application.

  3. #23
    Join Date
    Dec 2007
    Posts
    2,371

    Default

    Quote Originally Posted by stan View Post
    Of course software fallbacks are the way to go! I can't believe Gallium3D doesn't have them! After all, you can always choose to turn the software fallback off if you want speed and visual mistakes, but if you don't have the software fallback then you're stuck with the visual mistakes or black screens. So swrast gives users a choice, which is a good thing.
    Gallium was designed around the idea (from DX) that the GPU will be able to support, in hardware, all of the features required for the exposed API. If the hardware does not have enough features for that API, you shouldn't expose it. It mainly targets DX10-class hardware, which is fine for PCs but can be tricky for certain embedded parts. That said, most recent GL ES hardware supports enough features that it's generally not a problem.

  4. #24
    Join Date
    Jan 2012
    Posts
    62

    Default

    Quote Originally Posted by BlackStar View Post
    Let me try to make this as clear as possible:

    Give us GPU feature levels.
    Give us GPU feature levels.
    Give us GPU feature levels.
    Give us GPU feature levels.
    Give us GPU feature levels.

    We have been clamoring for this since 2003. Khronos have one final chance to get this right in OpenGL 5.0 - but they will most likely !@#$ this up again, like they did with OpenGL 3.0 and 4.0.

    The writing is on the wall. Between Metal, Mantle and D3D12, OpenGL will have to adapt or die.
    From MSDN
    "A feature level does not imply performance, only functionality. Performance is dependent on hardware implementation."


    So that sounds exactly like implementing all of OpenGL 3 in software.

  5. #25
    Join Date
    Dec 2012
    Posts
    534

    Default

    Quote Originally Posted by Zan Lynx View Post
    From MSDN
    "A feature level does not imply performance, only functionality. Performance is dependent on hardware implementation."


    So that sounds exactly like implementing all of OpenGL 3 in software.
    The problem is that when you use OpenGL, you are writing hardware-accelerated graphics code. If you start software-rendering behind a hardware-access API, you are violating that expectation.

    I don't break out GLSL because I think it's fun. I break it out because the host CPU is insufficient for what I'm trying to do and I need the architecture of graphics hardware. There is a reason you are using OpenGL in the first place.

  6. #26
    Join Date
    Oct 2007
    Location
    Under the bridge
    Posts
    2,144

    Default

    Quote Originally Posted by Zan Lynx View Post
    From MSDN
    "A feature level does not imply performance, only functionality. Performance is dependent on hardware implementation."


    So that sounds exactly like implementing all of OpenGL 3 in software.
    No. You couldn't be any more wrong even if you tried.

    A DX level is supported if and only if it can be accelerated in hardware. If a GPU exposes feature level 11_0, then I can use all 11_0 features without going through a software fallback. If the GPU supports most of 11_0, except for a single feature, then that GPU will not advertise feature level 11_0.

    If an IHV tries to pull the shenanigans discussed here, i.e. advertise feature level 11_0 but only support e.g. 10_1 in hardware, then Microsoft will simply reject their drivers during the certification process. It's all or nothing.

    This is a good thing both for developers and for users.
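
    For illustration, this is roughly what it looks like on the application side (a minimal sketch, error handling trimmed; the API names are the real D3D11 ones):

    Code:
    #include <d3d11.h>

    // Ask for the highest level first; the runtime returns the first
    // level in the list that the driver supports *entirely* in
    // hardware. There is no partial match and no software emulation.
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0,
        D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0,
    };

    ID3D11Device        *device  = nullptr;
    ID3D11DeviceContext *context = nullptr;
    D3D_FEATURE_LEVEL    granted;

    HRESULT hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE,   // hardware only, no WARP
        nullptr, 0,
        requested, sizeof(requested) / sizeof(requested[0]),
        D3D11_SDK_VERSION,
        &device, &granted, &context);

    // SUCCEEDED(hr) && granted == D3D_FEATURE_LEVEL_11_0 means every
    // 11_0 feature is guaranteed to run on the GPU.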

  7. #27
    Join Date
    Sep 2011
    Posts
    688

    Default

    How are feature levels any better than GL versions?

  8. #28
    Join Date
    Aug 2012
    Location
    Pennsylvania, United States
    Posts
    1,892

    Default

    Quote Originally Posted by AJenbo View Post
    How are feature levels any better than GL versions?
    Quote Originally Posted by BlackStar View Post
    No. You couldn't be any more wrong even if you tried.

    A DX level is supported if and only if it can be accelerated in hardware. If a GPU exposes feature level 11_0, then I can use all 11_0 features without going through a software fallback. If the GPU supports most of 11_0, except for a single feature, then that GPU will not advertise feature level 11_0.

    If an IHV tries to pull the shenanigans discussed here, i.e. advertise feature level 11_0 but only support e.g. 10_1 in hardware, then Microsoft will simply reject their drivers during the certification process. It's all or nothing.

    This is a good thing both for developers and for users.
    Read the above. With OpenGL you can claim "support" if you manage to back the feature up with software rendering. Feature levels mandate that if you say you support it, then you HAVE TO be able to do it ALL in hardware. "Can't do it in hardware? Too bad for you, stop lying and screw off."

  9. #29
    Join Date
    Sep 2011
    Posts
    688

    Default

    Quote Originally Posted by Ericg View Post
    Read the above. With OpenGL you can claim "support" if you manage to back the feature up with software rendering. Feature levels mandate that if you say you support it, then you HAVE TO be able to do it ALL in hardware. "Can't do it in hardware? Too bad for you, stop lying and screw off."
    So not really any better; in fact, GL might give a better result in some cases.

  10. #30
    Join Date
    Dec 2012
    Posts
    534

    Default

    Quote Originally Posted by AJenbo View Post
    So not really any better; in fact, GL might give a better result in some cases.
    GL is better in most cases in theory because it supports extensions. If you can't get, say, an OpenGL 4.2 context for some feature you want, you can still check whether the extensions you need are available on their own.

    In DirectX you either have complete version support or you don't. That makes development more straightforward, but you can optimize an OpenGL game more finely across all hardware if you conditionally use optimizations that may or may not be available (like indirect draws), as sketched below.
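
    For example, a minimal sketch of polling a single extension (this assumes a GL 3.0+ context with function pointers loaded by a loader such as GLAD; glGetStringi and GL_NUM_EXTENSIONS are core since 3.0):

    Code:
    #include <glad/glad.h>   // or any other GL function loader
    #include <cstring>

    static bool has_extension(const char *name)
    {
        GLint count = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &count);
        for (GLint i = 0; i < count; ++i) {
            const char *ext = reinterpret_cast<const char *>(
                glGetStringi(GL_EXTENSIONS, static_cast<GLuint>(i)));
            if (ext && std::strcmp(ext, name) == 0)
                return true;
        }
        return false;
    }

    // e.g. only take the indirect-draw path when the hardware has it:
    // bool use_indirect = has_extension("GL_ARB_multi_draw_indirect");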

    In practice, it takes a lot of extra work to do that; it means vendors are lazy about implementing whole versions because they can cherry-pick which extensions to implement, and the disparate implementations across vendors break all over the place. In DX's favor, you can do broader engine branching, as sketched below: tessellation? DX11. Geometry shaders? DX10. Otherwise 9. With GL it could be any version from 2.1 to 4.4 with any number of features missing.
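
    The DX side of that branching is a one-liner per tier, something like this (the init_* functions are hypothetical engine entry points; 'level' is the feature level granted at device creation):

    Code:
    #include <d3d11.h>

    void init_dx11_path();   // hypothetical engine entry points
    void init_dx10_path();
    void init_dx9_path();

    // D3D_FEATURE_LEVEL values are ordered, so >= comparisons work.
    void pick_renderer(D3D_FEATURE_LEVEL level)
    {
        if (level >= D3D_FEATURE_LEVEL_11_0)
            init_dx11_path();   // tessellation guaranteed in hardware
        else if (level >= D3D_FEATURE_LEVEL_10_0)
            init_dx10_path();   // geometry shaders guaranteed
        else
            init_dx9_path();    // DX9-class baseline
    }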
