Khronos Clarifies That Vulkan Multi-GPU Isn't Limited To Windows 10


  • #11
    Originally posted by duby229 View Post

    Initially I thought very similar, but I've changed my mind after reading more about it. It seems like the way loads are spread across GPUs is very dependent on the workload. There is no possible way to make a generic method; it must be done at the game-engine level, because that is where the information about the load is.
    One way I could see multi-GPU working is for it to be handled completely driver-side, with the driver providing a "virtual" GPU for games to use. The virtual GPU would combine multiple GPUs and handle frame sharing, load balancing, and whatever else needs handling.

    I'm guessing there's more to it than that, though; it seems way too simple if that were all it took.
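To make the "virtual GPU" idea concrete, here is a minimal toy sketch of a driver presenting one device and blindly fanning submitted work out to the physical GPUs behind it. All names here are illustrative, not any real driver's interfaces; real drivers do nothing this simple, which is exactly the objection raised later in the thread.

```python
# Toy model of a driver-side "virtual GPU" (illustrative names only).
class PhysicalGPU:
    def __init__(self, name):
        self.name = name
        self.queue = []

    def submit(self, work):
        self.queue.append(work)

class VirtualGPU:
    """Presents one device; round-robins work across the real ones."""
    def __init__(self, gpus):
        self.gpus = gpus
        self._next = 0

    def submit(self, work):
        # The driver has no idea whether `work` depends on earlier work,
        # so blind round-robin can split dependent passes across devices
        # and force expensive synchronization between them.
        gpu = self.gpus[self._next % len(self.gpus)]
        gpu.submit(work)
        self._next += 1

vgpu = VirtualGPU([PhysicalGPU("gpu0"), PhysicalGPU("gpu1")])
for i in range(4):
    vgpu.submit(f"draw-{i}")

print([g.queue for g in vgpu.gpus])  # [['draw-0', 'draw-2'], ['draw-1', 'draw-3']]
```

The sketch works only because the toy "draw" calls are independent; the moment one pass reads the output of another, the driver would have to detect that and synchronize, which is the knowledge only the engine has.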

    Comment


    • #12
      Originally posted by Espionage724 View Post

      One way I could see multi-GPU working is for it to be handled completely driver-side, with the driver providing a "virtual" GPU for games to use. The virtual GPU would combine multiple GPUs and handle frame sharing, load balancing, and whatever else needs handling.

      I'm guessing there's more to it than that, though; it seems way too simple if that were all it took.
      I agree, it can be done, but nobody has done it yet. NVIDIA has gotten partway there with the Windows drivers, which can intercept PhysX and CSAA calls and send them to a different GPU for processing. It can't be that hard; the APIs are already designed to separate the developer from the details of how the hardware processes the work.
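The interception scheme described above amounts to routing work by category rather than balancing one rendering load. A rough sketch of that idea, with purely made-up names (this is not how NVIDIA's driver is actually structured):

```python
# Route work wholesale by *type*: whole categories (e.g. physics) go to a
# secondary GPU. No knowledge of the engine is needed for this kind of
# split, which is why it is much easier than true load balancing.
def route(call_type, primary="gpu0", secondary="gpu1"):
    offloaded = {"physx", "antialiasing"}  # hypothetical categories
    return secondary if call_type in offloaded else primary

print(route("draw"))   # gpu0
print(route("physx"))  # gpu1
```

Note that this gives no speedup at all unless the offloaded category happens to be a significant share of the total work.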

      Comment


      • #13
        Originally posted by Espionage724 View Post
        One way I could see multi-GPU working is for it to be handled completely driver-side, with the driver providing a "virtual" GPU for games to use. The virtual GPU would combine multiple GPUs and handle frame sharing, load balancing, and whatever else needs handling.
        The issue is that you are adding a middleman that has to spend time making decisions on how to split loads, and to split loads in a decent way it must know how the engine works in the first place.

        So this thing probably needs another GPU to run a machine-learning algorithm to tune itself to each game engine. Or it runs like total crap.


        EDIT: or, profiles, of course. Everyone loves profiling.
        Last edited by starshipeleven; 22 March 2017, 03:23 PM.

        Comment


        • #14
          Originally posted by Sidicas View Post
          I agree, it can be done, but nobody has done it yet. NVIDIA has gotten partway there with the Windows drivers, which can intercept PhysX and CSAA calls and send them to a different GPU for processing.
          Yo dawg, you're comparing a system that splits different function/API calls across different GPUs (which isn't exactly hard and requires no real intelligence in the program itself) with a system that load-balances rendering across more than one GPU without any fucking idea of how the game engine works (i.e., what generates the load that must be balanced) in the first place.

          Comment


          • #15
            Originally posted by Sidicas View Post

            I agree, it can be done, but nobody has done it yet. NVIDIA has gotten partway there with the Windows drivers, which can intercept PhysX and CSAA calls and send them to a different GPU for processing. It can't be that hard; the APIs are already designed to separate the developer from the details of how the hardware processes the work.
            That's not really true at all. GPU designs have become much simpler and more parallel over the last decade or so. There's a good reason for that, too: the game engine is -the- thing processing the data, not the driver. The engine already knows what the data looks like and what processes need to be performed; the driver would have to interpret, calculate, and figure all of that out....

            Comment


            • #16
              Originally posted by Espionage724 View Post

              One way I could see multi-GPU working is for it to be handled completely driver-side, with the driver providing a "virtual" GPU for games to use. The virtual GPU would combine multiple GPUs and handle frame sharing, load balancing, and whatever else needs handling.

              I'm guessing there's more to it than that, though; it seems way too simple if that were all it took.
              But that's not a good idea at all. In fact, what you are talking about is basically how GPUs worked for a long time. The reason it's not still that way is that, for the driver to be the main processing element, there needs to be a whole crapload of special-function hardware just to figure out what the hardware should be doing. On the other hand, if the game engine is the main processing element, it already knows what it should be doing, so hardware designers can eliminate a boatload of special-function hardware. That is more or less -the- reason GPUs are so much simpler and so much more parallel today than they used to be.

              Comment


              • #17
                Originally posted by pcxmac View Post
                I think I saw something on YouTube (could have been Linus Tech Tips) about WDDM/Win10 multi-GPU and the inference that it is Windows 10 exclusive. Microsoft won't want it getting down to Win7 owners, though; pretty sure they will find a way to keep that from happening.
                I also wonder the same thing: who is responsible for spreading the nonsense claims?
                I know Linus has said a lot of other nonsense about Vulkan.

                Originally posted by Sidicas View Post
                The developer still needs to enable multi-GPU for it to work. As long as this is a required action by the developer and not an automatic feature provided for all games, it is no better than SLI/CrossFire, and the games I have been playing recently on both Windows and Linux aren't enabled for SLI/CrossFire. Vulkan isn't providing a solution to that, which is disappointing.
                It might be disappointing to you, but only because (clueless) people have given you the wrong expectations. Direct3D 12 and Vulkan multi-GPU is not going to make any game scale with any symmetric or non-symmetric GPU configuration; that is simply not technically possible. There will be two real options, AFR or splitting queues across different GPUs, and both require developers to actively design for them. For AFR to scale well, the game engine has to be designed to allow two frames' worth of queues to be in flight at the same time, and to have limited dependencies and fences in the queues, which post-processing among other things relies on. Even in a couple of years, well-scaling games are still going to be as rare as they are today.
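The AFR scaling limit described above can be sketched with a toy cost model (made-up numbers, purely illustrative): frame N goes to GPU N mod 2, and the GPUs only overlap if frame N+1 does not have to wait on frame N. A cross-frame dependency, such as temporal post-processing reading the previous frame, serializes the GPUs again.

```python
# Toy AFR cost model: with independent frames, GPUs overlap and the
# effective time per frame approaches frame_cost / num_gpus. A
# cross-frame fence forces each frame to wait for the previous one,
# so nothing overlaps and scaling collapses to 1x.
def afr_frame_time(num_gpus, frame_cost, cross_frame_dependency):
    if cross_frame_dependency:
        return frame_cost
    return frame_cost / num_gpus

print(afr_frame_time(2, 16.0, False))  # 8.0  -> ideal 2x scaling
print(afr_frame_time(2, 16.0, True))   # 16.0 -> fence: no scaling at all
```

Real dependencies are partial rather than all-or-nothing, so actual AFR scaling lands somewhere between these two extremes, which is why engines must be designed for it.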

                Originally posted by Sidicas View Post

                I agree, it can be done, but nobody has done it yet. NVIDIA has gotten partway there with the Windows drivers, which can intercept PhysX and CSAA calls and send them to a different GPU for processing. It can't be that hard; the APIs are already designed to separate the developer from the details of how the hardware processes the work.
                Nonsense. CSAA is a mode of antialiasing, not a set of specific "calls".

                Comment


                • #18
                  Originally posted by efikkan View Post
                  I also wonder the same thing: who is responsible for spreading the nonsense claims?
                  Khronos, as it turns out. One of their slides mentioned multi-GPU and a Win10-only feature in the same breath, in a way that really made it look like the feature was required. At least one multi-GPU method can be used without it, but the slide didn't say that.

                  Comment
