OpenGL vs. Vulkan For Older NVIDIA Kepler GPUs (Early 2017)


  • #11
    Originally posted by indepe View Post
    OpenGL doesn't scale to multiple cores, so it's on the way out.
    Gaming work is hard to scale across multiple cores also, since there is so much synchronization required in game play. It is done, of course, but it's not without its challenges. You can get more bang for your buck by removing inefficiencies within the API layer and driver, which, to their credit, the new APIs do well, much like AZDO in OpenGL, etc.
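
    As a concrete illustration of that point, here is a minimal sketch of the AZDO-style persistent mapping technique mentioned above: allocate the buffer once with glBufferStorage and write into it directly each frame, instead of funneling every upload through glBufferData and the driver's bookkeeping. This assumes an OpenGL 4.4+ context is already current; the sizes and names are illustrative, not from any particular engine.

    // Sketch only: assumes a current OpenGL 4.4+ context (ARB_buffer_storage)
    // and the usual GL headers.
    const GLsizeiptr size  = 1 << 20;
    const GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                             GL_MAP_COHERENT_BIT;

    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferStorage(GL_ARRAY_BUFFER, size, nullptr, flags);        // allocate once
    void* ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags); // map once, keep it

    // Per frame: write vertex data straight into the mapped pointer.
    // No glBufferData, no driver-side reallocation or copy.
    // (A real renderer still needs glFenceSync/glClientWaitSync so the CPU
    // doesn't overwrite data the GPU is still reading; omitted for brevity.)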

    Comment


    • #12
      Originally posted by johnc View Post

      I told everyone from the get-go that this whole low-level API craze was nothing more than a big marketing snow-job.

      Within a few years developers will be screaming for abstract APIs again.
      If that happens, they will be asking for something like Metal: an easy-to-use API with great tools that is still low-level enough. Sadly, it's Apple-only.

      But even something like Vulkan can gain layers in the future that will make it easier to use; the possibilities are very wide. Performance-wise, we have probably not seen its full potential yet.

      Comment


      • #13
        Originally posted by johnc View Post

        Gaming work is hard to scale across multiple cores also, since there is so much synchronization required in game play. It is done, of course, but it's not without its challenges. You can get more bang for your buck by removing inefficiencies within the API layer and driver, which, to their credit, the new APIs do well, much like AZDO in OpenGL, etc.
        Pretty much this. Games simply don't scale because of all the synchronization that has to be done. Period. You can get minor gains within certain parts of a game, but on the whole, their structure is serial in nature.

        Never mind that it's been ages since developers were last exposed to things like manual memory management; it doesn't surprise me that the first batch of DX12/Vulkan games were bug-riddled messes at launch.

        Comment


        • #14
          Originally posted by gamerk2 View Post

          Pretty much this. Games simply don't scale because of all the synchronization that has to be done. Period. You can get minor gains within certain parts of a game, but on the whole, their structure is serial in nature.


          Games are more and more simulating the real world, and the real world is parallel in nature.

          Comment


          • #15
            Originally posted by indepe View Post

            Games are more and more simulating the real world, and the real world is parallel in nature.
            Oh snap. Right, let's drop OpenGL like a bad habit and do everything in Vulkan then. /s

            Comment


            • #16
              Originally posted by bug77 View Post

              Oh snap. Right, let's drop OpenGL like a bad habit and do everything in Vulkan then. /s
              Or start with Vulkan in parallel...

              Comment


              • #17
                Originally posted by indepe View Post
                Games are more and more simulating the real world, and the real world is parallel in nature.
                Gaming is heavily synchronized though. When you click a button to fire a gun there's audio, video, AI, and network code that all has to collaborate. That kind of synchronization isn't cheap in terms of efficiency (or bug potential).

                It's not really parallel in the sense that you can have ten people individually counting objects and then at the end sum the ten sums to get a final sum.

                Gaming is one of those workloads where it's hard to just throw more cores at the problem.
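
                To make the contrast concrete, here is a minimal C++ sketch of the "ten people counting" case: each worker sums a private chunk with no shared state, and the only synchronization is one cheap merge at the end. A game frame, where audio, AI, and networking must all agree on shared state before anything is presented, offers no such clean split.

                #include <iostream>
                #include <future>
                #include <numeric>
                #include <vector>

                // "Ten people counting objects": split the range into chunks,
                // sum each chunk on its own thread, then add the ten partial
                // sums at the end.
                int main() {
                    std::vector<int> objects(1'000'000, 1);
                    const int workers = 10;
                    const size_t chunk = objects.size() / workers;

                    std::vector<std::future<long long>> partials;
                    for (int i = 0; i < workers; ++i) {
                        auto first = objects.begin() + i * chunk;
                        auto last  = (i == workers - 1) ? objects.end() : first + chunk;
                        partials.push_back(std::async(std::launch::async, [first, last] {
                            return std::accumulate(first, last, 0LL);  // no sharing, no locks
                        }));
                    }

                    long long total = 0;
                    for (auto& p : partials) total += p.get();  // single cheap merge step
                    std::cout << total << "\n";
                }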

                Comment


                • #18
                  Originally posted by johnc View Post

                  Gaming is heavily synchronized though. When you click a button to fire a gun there's audio, video, AI, and network code that all has to collaborate. That kind of synchronization isn't cheap in terms of efficiency (or bug potential).

                  It's not really parallel in the sense that you can have ten people individually counting objects and then at the end sum the ten sums to get a final sum.

                  Gaming is one of those workloads where it's hard to just throw more cores at the problem.
                  Separating work across threads does require careful design and experience, and it will probably depend on the game which parts benefit from using multiple cores (which areas are CPU-bound). I don't see why games in general should be especially difficult, except that their size doesn't make them good training projects. Engineers will need to gain a good amount of experience with smaller projects and develop techniques that fit the specific needs of a given game (unless some of that can be done in the game engine by a third party). Once those techniques have been found, however, they become available much more easily.

                  Building higher-frequency CPUs seems to be getting more expensive, while even inexpensive CPUs now have more cores than are usually being used. Insofar as future games will want more CPU processing power, there doesn't seem to be much of an alternative, except trying to move more and more processing onto the GPU, and that requires even stricter parallelism. And given the effort put into overclocking CPUs and liquid cooling, for example, there clearly is an interest in more CPU processing power, so why not use those idle cores, which are available for free?
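
                  For what it's worth, here is a minimal fork/join sketch of "using the idle cores" within one frame. The subsystem functions are hypothetical stand-ins; the point is only the structure: subsystems that don't touch each other's state run as tasks and join at a single sync point before rendering.

                  #include <future>

                  // Hypothetical per-frame subsystem updates; each works on its own data.
                  void update_particles(double dt) { /* ... */ }
                  void update_pathfinding(double dt) { /* ... */ }
                  void mix_audio(double dt) { /* ... */ }

                  void game_frame(double dt) {
                      // Fork: independent subsystems run concurrently on idle cores.
                      auto particles = std::async(std::launch::async, update_particles, dt);
                      auto paths     = std::async(std::launch::async, update_pathfinding, dt);
                      auto audio     = std::async(std::launch::async, mix_audio, dt);

                      // Join: the one mandatory sync point before results are combined
                      // and the frame is submitted for rendering.
                      particles.get();
                      paths.get();
                      audio.get();

                      // render(...) would go here, consuming the now-consistent state.
                  }

                  int main() { game_frame(0.016); }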

                  Comment


                  • #19
                    You should not forget that many OpenGL engines use translation layers, similar to Wine. This translation (at least HLSL to GLSL) is often done on one core and is therefore limited by per-core speed. You rarely find engines with similar speed across multiple OSes.
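
                    Assuming the translation is a pure function of the shader source (an assumption on my part), it is embarrassingly parallel across shaders, and a sketch like this would lift the single-core limit; the translator below is a stub standing in for whatever the engine actually uses.

                    #include <future>
                    #include <string>
                    #include <vector>

                    // Stub: a real engine would call its HLSL->GLSL translator here.
                    std::string translate_hlsl_to_glsl(const std::string& hlsl) {
                        return "/* translated */ " + hlsl;
                    }

                    int main() {
                        std::vector<std::string> shaders = {"shader_a", "shader_b", "shader_c"};

                        // Each shader translates independently, so fan the work out
                        // across cores instead of serializing it on one thread.
                        std::vector<std::future<std::string>> jobs;
                        for (const auto& src : shaders)
                            jobs.push_back(std::async(std::launch::async,
                                                      translate_hlsl_to_glsl, src));

                        std::vector<std::string> glsl;
                        for (auto& j : jobs) glsl.push_back(j.get());
                    }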

                    Comment


                    • #20
                      Originally posted by johnc View Post

                      I told everyone from the get-go that this whole low-level API craze was nothing more than a big marketing snow-job.

                      Within a few years developers will be screaming for abstract APIs again.
                      No, it's not gonna happen. FPS isn't the only thing that matters. The fact that OpenGL drivers need different profiles for different games indicates that something is fundamentally wrong. Also, developers care more about ease of porting than raw FPS.

                      Comment
