Vulkan 1.2.162 Released With Ray-Tracing Support Promoted


  • #11
    Originally posted by Kemosabe View Post
    I know it's not hip and trendy but is there any chance that OpenGL will get an equivalent extension too at some point?
    Unlikely. Nobody is working on it, and nobody wants to. It's technically possible, but as I said, nobody is really interested. Use Vulkan in new titles instead.
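
    To make "use Vulkan instead" concrete, here is a minimal sketch, assuming a Vulkan SDK whose headers are at least 1.2.162 and a driver that ships the promoted extensions, of how an application can check whether a GPU exposes the finalized ray-tracing extensions. It is illustrative only and skips most error handling.

```cpp
// Minimal sketch: report whether each GPU exposes the ray-tracing extensions
// promoted to final in Vulkan 1.2.162. Assumes headers/loader >= 1.2.162.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <cstring>
#include <vector>

int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_2;

    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) {
        std::fprintf(stderr, "vkCreateInstance failed\n");
        return 1;
    }

    uint32_t gpuCount = 0;
    vkEnumeratePhysicalDevices(instance, &gpuCount, nullptr);
    std::vector<VkPhysicalDevice> gpus(gpuCount);
    vkEnumeratePhysicalDevices(instance, &gpuCount, gpus.data());

    for (VkPhysicalDevice gpu : gpus) {
        uint32_t extCount = 0;
        vkEnumerateDeviceExtensionProperties(gpu, nullptr, &extCount, nullptr);
        std::vector<VkExtensionProperties> exts(extCount);
        vkEnumerateDeviceExtensionProperties(gpu, nullptr, &extCount, exts.data());

        bool rtPipeline = false, accelStruct = false;
        for (const VkExtensionProperties& e : exts) {
            if (!std::strcmp(e.extensionName, VK_KHR_RAY_TRACING_PIPELINE_EXTENSION_NAME))
                rtPipeline = true;
            if (!std::strcmp(e.extensionName, VK_KHR_ACCELERATION_STRUCTURE_EXTENSION_NAME))
                accelStruct = true;
        }

        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpu, &props);
        std::printf("%s: ray_tracing_pipeline=%s, acceleration_structure=%s\n",
                    props.deviceName,
                    rtPipeline ? "yes" : "no",
                    accelStruct ? "yes" : "no");
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```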



    • #12
      Originally posted by cl333r View Post

      I know what Multi-GPU means, but not PCIe pooling. Multi-GPU is a core feature of Vulkan; you don't have to "enable" it or seek support from the driver vendor. It's up to the application writers to decide whether they want multi-GPU to work with their applications. Usually they don't enable it because it's extra work for the devs with little return on investment.
      AMD published interesting results about this in 2018. I don't know if this is just PR, a special case, or actually applicable to the majority of games. It's worth a look though: https://community.amd.com/t5/blogs/r...ge/ba-p/417273 It would be useful if someone independent could test the game with a multi-GPU setup, or mGPU as they call it (and using the same drivers). I expected more people to cover this topic, but it doesn't look like many care at the moment.

      You are correct about it being a huge investment. Looking back at when CPU games were single-threaded, we said similar things. I know it's not the same as back then, yet many of the same steps need to be followed: the OS/drivers, the low-level APIs, and finally the applications all need to support the hardware (Nvidia's MCM research paper mentions this too). Luckily we are much further ahead than we were with OpenGL <= 4.4 and D3D <= 11.

      (that is, multi-GPU is overrated, again unless you have a special case).
      I've been wondering about this over the past few years. IIRC both Nvidia and AMD are moving towards modular GPU cores (aka MCM). Not sure if we will see this for gaming or just compute. Nvidia published research about it in 2017, and now we are seeing leaks suggesting that Hopper GPUs (the successor to Ampere) are MCM-based. I would be surprised if games suddenly supported multi-GPU setups ... we are still seeing games like Cities Skylines, Civilization V, ARMA 3, Path of Exile, or any game based on older versions of the Unity engine, which have extremely bad CPU multi-threading support. These are complex problems, no doubt, and many developers do not have enough time or incentive to solve them. It usually takes one or two games that do it properly to inspire others to do the same. Sadly, if there's no demand for this, companies are less likely to invest. Look at Strange Brigade, for example: AMD tried it, but nobody is talking about it. Most of us tech-savvy people don't even know that someone has tried it.

      Anyway, these are just random thoughts. The MCM chips will probably be exposed as a single GPU and will make manufacturing cheaper rather than improving the gaming experience. Still, it would be funny and awesome if you could use modular, vendor-independent GPUs to efficiently render the same game.

      Here are some good and bad references:

      • A blog post discussing AMD's so-called "Chiplet" design for their server processors.
      • An article claiming NVIDIA's upcoming next-generation Hopper GPU has leaked out and represents a huge step up in performance over Turing and Ampere graphics cards.



      • #13
        Originally posted by Jabberwocky View Post
        I was never a game developer, but I did follow full OpenGL and Vulkan tutorials and build some simple apps. In Vulkan, using multiple GPUs should be significantly more efficient than in OpenGL, because everything is separated and explicitly controlled by the programmer (the queues, devices, command buffers...), but I don't know if that's enough to change the landscape after game engines get rewritten with Vulkan-grade APIs as first-class citizens (unlike now, where AFAIK game engines were just changed to support Vulkan).
        For example, I never tried it because I don't have two video cards supporting Vulkan (and I don't remember if both of them have to be of the same type, which is usually the case under OpenGL).
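
        To make the "explicitly controlled by the programmer" part concrete, here is a minimal sketch, assuming a Vulkan 1.1+ loader and driver and skipping most error handling, that enumerates the device groups a driver exposes for explicit multi-GPU. Cards that are not linked simply show up as single-member groups.

```cpp
// Minimal sketch: list the device groups a Vulkan 1.1+ driver exposes for
// explicit multi-GPU. Linked cards appear together in one group; everything
// else appears as a group containing a single GPU.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1;

    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS)
        return 1;

    uint32_t groupCount = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, nullptr);

    std::vector<VkPhysicalDeviceGroupProperties> groups(groupCount);
    for (VkPhysicalDeviceGroupProperties& g : groups)
        g.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups.data());

    for (uint32_t i = 0; i < groupCount; ++i) {
        std::printf("Device group %u: %u GPU(s)\n", i, groups[i].physicalDeviceCount);
        for (uint32_t j = 0; j < groups[i].physicalDeviceCount; ++j) {
            VkPhysicalDeviceProperties props;
            vkGetPhysicalDeviceProperties(groups[i].physicalDevices[j], &props);
            std::printf("  - %s\n", props.deviceName);
        }
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

        In practice, a multi-GPU group typically only shows up for linked setups (two identical cards with SLI/CrossFire enabled in the driver), which lines up with the "same type" restriction you remember from OpenGL; unlinked or mixed cards each come back as their own group and have to be driven as separate VkDevices.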



        • #14
          Originally posted by cl333r View Post
          I was never a game developer, but I did follow full OpenGL and Vulkan tutorials and build some simple apps. In Vulkan, using multiple GPUs should be significantly more efficient than in OpenGL, because everything is separated and explicitly controlled by the programmer (the queues, devices, command buffers...), but I don't know if that's enough to change the landscape
          AFAIK Vulkan only has Strange Brigade (see my previous post). D3D12 has many titles that are doing exactly this. For example https://hardforum.com/threads/dx12-m...aider.1967930/

          I am not a professional game dev either. During my school years I made a fully functional 3D game with physics and multiplayer support. I was the only dev who worked on it, but I received help with maths problems from my brother (trigonometry mostly) and a ton of assistance from the game dev community. This was around 2006; as you know, many things have changed since then.

          after game engines get rewritten with Vulkan-grade APIs as first-class citizens (unlike now, where AFAIK game engines were just changed to support Vulkan).
          For example, I never tried it because I don't have two video cards supporting Vulkan (and I don't remember if both of them have to be of the same type, which is usually the case under OpenGL).
          I'm glad you are aware of this. Most end-users think wrapping functions and designing from the ground up are the same thing, and it's really frustrating trying to explain the difference to the average gamer. You are also correct about the cards having to be identical. In some cases even different BIOS versions can cause problems.



          The Sniper Elite video is a good showcase of the jump from D3D11 to D3D12:
          • D3D11 (SLI): 77 FPS, with the highest single core at 58% and an average across all cores of 30%
          • D3D12 (mGPU): 113.8 FPS, with the highest single core at 47% and an average across all cores of 25%

          That's roughly 48% more FPS while using the same hardware, basically three generations' worth of improvement. Truly amazing!

          The FPS is much better, but looking at utilization is also important. Many games like ARMA 3 (see my previous post) start to lag under intense conditions (fighting tanks in a town, for example), yet when you look at your CPU and GPU utilization it's sometimes below 50%, so the work is just getting blocked somewhere. The mGPU benchmarks in Sniper Elite and others show 98-100% usage on both GPUs with improved FPS, and none of the CPU cores reach 100%. This is exactly what one would expect from running a 3D game on modern hardware and drivers.
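
          For the Vulkan side of this explicit mGPU pattern, here is a hedged sketch of alternate-frame rendering over a linked device group, where each frame's command buffer is steered to one GPU with a device mask. The submitFrameAFR helper and the recordFrame callback are hypothetical stand-ins for whatever an engine already has; this is an illustration of the device-group API, not anyone's shipping code.

```cpp
// Hedged sketch: alternate-frame rendering (AFR) with Vulkan device groups.
// Each frame is recorded and submitted with a device mask selecting a single
// GPU in a linked group. submitFrameAFR and recordFrame are hypothetical.
#include <vulkan/vulkan.h>

void submitFrameAFR(VkQueue queue, VkCommandBuffer cmd,
                    uint32_t frameIndex, uint32_t gpusInGroup,
                    void (*recordFrame)(VkCommandBuffer))
{
    const uint32_t gpuIndex   = frameIndex % gpusInGroup;  // round-robin between GPUs
    const uint32_t deviceMask = 1u << gpuIndex;            // one bit per physical GPU

    // Restrict the whole command buffer to the chosen GPU.
    VkDeviceGroupCommandBufferBeginInfo groupBegin{
        VK_STRUCTURE_TYPE_DEVICE_GROUP_COMMAND_BUFFER_BEGIN_INFO};
    groupBegin.deviceMask = deviceMask;

    VkCommandBufferBeginInfo begin{VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO};
    begin.pNext = &groupBegin;
    begin.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;

    vkBeginCommandBuffer(cmd, &begin);
    vkCmdSetDeviceMask(cmd, deviceMask);  // later commands run only on this GPU
    recordFrame(cmd);                     // engine-specific draws (hypothetical)
    vkEndCommandBuffer(cmd);

    // Tell the submit which GPU the command buffer targets.
    VkDeviceGroupSubmitInfo groupSubmit{VK_STRUCTURE_TYPE_DEVICE_GROUP_SUBMIT_INFO};
    groupSubmit.commandBufferCount        = 1;
    groupSubmit.pCommandBufferDeviceMasks = &deviceMask;

    VkSubmitInfo submit{VK_STRUCTURE_TYPE_SUBMIT_INFO};
    submit.pNext              = &groupSubmit;
    submit.commandBufferCount = 1;
    submit.pCommandBuffers    = &cmd;

    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
}
```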

          Here are some examples from other games: https://www.reddit.com/r/Amd/comment...5700_rx5700xt/

          Edit: I somehow missed good ol' Steve's post. Gamers Nexus tested AMD and Nvidia together back in 2016 (he called it "SLIFire"); there's some useful technical info in that post: https://www.gamersnexus.net/game-ben...icit-multi-gpu I still don't know why this isn't talked about more.
          Last edited by Jabberwocky; 25 November 2020, 07:27 AM.

