NVIDIA Proposing New Linux API For Dynamic Mux Switching With Modern Dual-GPU Laptops

  • #11
    Originally posted by openminded View Post
    For me, it's too late. I've been waiting for something like this for too long. No more Nvidia GPUs in my laptops.
    I'm also wondering why it has taken more than a decade to resolve this issue.

    My previous Nvidia laptop had a GeForce GTX 560M (released May 30th, 2011). Dual GPU support was so bad that I refused to buy another one until this issue was resolved.



    • #12
      This is great for those few muxed gaming laptops, but I would like the experience to improve for muxless laptops (which, from my understanding, are the majority). Stuff like reverse PRIME for external monitor ports connected directly to the dGPU still doesn't work properly. Fewer NVIDIA-related crashes and hangs would be great too.



      • #13
        Originally posted by darkbasic View Post

        Yet Mesa developers said it was just a bad implementation, and that you could theoretically achieve a much smaller penalty without all the disadvantages of a MUX.
        This sounds like an exaggeration to me, because even the highly optimised solution on Windows (which is where NVIDIA puts its priority) has a non-trivial performance penalty (that 10-15% figure was measured on Windows, not Linux).

        I mean, you won't notice a difference doing desktop stuff, but it does show up when gaming, which is not uncommon considering dual-GPU laptops tend to be gaming ones.
        Last edited by mdedetrich; 09 November 2022, 05:20 PM.



        • #14
          Originally posted by mdedetrich View Post

          Indeed, high-end dual-GPU laptops tend to have a MUX because without one you get a performance penalty when running off one of the GPUs, due to having to route the output from the GPU that is NOT connected to the display through the GPU that IS connected to it. I think LTT did a video on this and it's roughly a ~10% performance penalty.
          Yeah. He touches on that at 12 minutes into The Dirty Way Manufacturers are Downgrading Your PC, though he didn't talk about the specific numbers for mux vs. muxless in that one. (The video as a whole is about x8 vs. x16 memory modules and how they can be an up-to-25% performance hit that isn't listed on the spec sheet before you buy that gaming laptop and may change after the reviewers have received their review units.)



          • #15
            Originally posted by ssokolow View Post

            Yeah. He touches on that at 12 minutes into The Dirty Way Manufacturers are Downgrading Your PC, though he didn't talk about the specific numbers for mux vs. muxless in that one. (The video as a whole is about x8 vs. x16 memory modules and how they can be an up-to-25% performance hit that isn't listed on the spec sheet before you buy that gaming laptop and may change after the reviewers have received their review units.)
            Yeah, there's another video where he goes into actual detail on the MUX, along with some gaming benchmarks.



            • #16
              How do they plan to put those gigantic bricks into a laptop, and where do they get batteries that can supply 600 watts?



              • #17
                Originally posted by mdedetrich View Post

                This sounds like an exaggeration to me, because even the highly optimised solution on Windows (which is where NVIDIA puts its priority) has a non-trivial performance penalty (that 10-15% figure was measured on Windows, not Linux).

                I mean, you won't notice a difference doing desktop stuff, but it does show up when gaming, which is not uncommon considering dual-GPU laptops tend to be gaming ones.
                AFAIK it's because of the lack of peer-to-peer DMA on PCIe:
                Phoronix: Radeon Gallium3D Picks Up A Nice Performance Optimization For iGPU/dGPU PRIME Setups
                https://www.phoronix.com/scan.php?page=news_item&px=RadeonSI-PRIME-ASync-SDMA-Copy

                With peer-to-peer DMA and PCIe 5 x16 the bottleneck should be negligible.
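                To put rough numbers on that claim, here is a back-of-the-envelope sketch; the 4K/144 Hz workload and the per-generation link figures are my own approximations, not numbers from this thread:

                ```c
                /* Estimate how much of a PCIe x16 link is consumed by copying
                 * every rendered frame across the bus, as a muxless setup must. */
                #include <stdio.h>

                int main(void)
                {
                    /* Assumption: 4K RGBA8 frames pushed at 144 Hz. */
                    const double frame_bytes = 3840.0 * 2160.0 * 4.0;     /* ~33 MB/frame */
                    const double traffic_gbs = frame_bytes * 144.0 / 1e9; /* ~4.8 GB/s    */

                    /* Approximate usable x16 bandwidth for PCIe 3.0/4.0/5.0 in GB/s. */
                    const double link_gbs[] = { 15.75, 31.5, 63.0 };

                    printf("frame copy traffic: %.1f GB/s\n", traffic_gbs);
                    for (int gen = 0; gen < 3; gen++)
                        printf("PCIe %d.0 x16 (%.2f GB/s): %.0f%% of the link\n",
                               gen + 3, link_gbs[gen], 100.0 * traffic_gbs / link_gbs[gen]);
                    return 0;
                }
                ```

                On those numbers the raw copy is under 10% of a PCIe 5 x16 link, which supports the point that today's muxless penalty comes mostly from the extra round trip through system RAM rather than from link bandwidth itself.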
                ## VGA ##
                AMD: X1950XTX, HD3870, HD5870
                Intel: GMA45, HD3000 (Core i5 2500K)



                • #18
                  Originally posted by darkbasic View Post

                  AFAIK it's because of the lack of peer-to-peer DMA on PCIe:
                  https://www.phoronix.com/forums/foru...86#post1284286
                  With peer-to-peer DMA and PCIe 5 x16 the bottleneck should be negligible.
                  EDIT: I see what you mean, it's direct memory access from one GPU to another. This would definitely help, but since we are talking about laptops with PCIe 5, we probably won't see this technology in consumer laptops for some time, so in the meantime this mux-switching API is still useful for laptops that already have a MUX.

                  It's also going to take up PCIe lanes, which could become a problem if they are already being used (which is more likely on laptops, since they have fewer lanes in the first place), and I am not sure whether you need something physical to set it up; i.e. DMA over PCIe 5 would probably work without problems on a standard desktop motherboard with two discrete GPUs, but on a laptop with an iGPU? Peer-to-peer DMA seems to be more catered to direct storage (i.e. putting data directly from an NVMe SSD into VRAM) and maybe compute; I haven't ever seen it discussed between an iGPU and another GPU.

                  An actual MUX is still the far superior solution.
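                  For reference, what muxless Linux setups do today instead of peer-to-peer DMA is hand buffers between the two GPUs via DRM PRIME (dma-buf). A minimal sketch of that handoff with libdrm; the /dev/dri node numbering is an assumption (which card is the iGPU vs. the dGPU varies per machine) and error handling is trimmed:

                  ```c
                  /* Export a buffer from one GPU and import it on the other via DRM
                   * PRIME (dma-buf), the mechanism PRIME/reverse PRIME rely on.
                   * Build: cc share.c $(pkg-config --cflags --libs libdrm) */
                  #include <fcntl.h>
                  #include <stdio.h>
                  #include <stdint.h>
                  #include <string.h>
                  #include <xf86drm.h>

                  int main(void)
                  {
                      /* Assumed numbering: card1 = render GPU, card0 = display GPU. */
                      int src = open("/dev/dri/card1", O_RDWR);
                      int dst = open("/dev/dri/card0", O_RDWR);
                      if (src < 0 || dst < 0) { perror("open"); return 1; }

                      /* Allocate a 1080p 32bpp "dumb" buffer on the source GPU. */
                      struct drm_mode_create_dumb create;
                      memset(&create, 0, sizeof(create));
                      create.width = 1920; create.height = 1080; create.bpp = 32;
                      if (drmIoctl(src, DRM_IOCTL_MODE_CREATE_DUMB, &create)) {
                          perror("create dumb buffer"); return 1;
                      }

                      /* Export the buffer as a dma-buf file descriptor... */
                      int prime_fd;
                      if (drmPrimeHandleToFD(src, create.handle, DRM_CLOEXEC, &prime_fd)) {
                          perror("export"); return 1;
                      }

                      /* ...and import it on the display GPU for scanout. Without
                       * peer-to-peer DMA the pixel data itself is staged through
                       * system RAM, which is where the muxless overhead comes from. */
                      uint32_t handle;
                      if (drmPrimeFDToHandle(dst, prime_fd, &handle)) {
                          perror("import"); return 1;
                      }
                      printf("shared: prime fd %d -> handle %u on the importer\n",
                             prime_fd, handle);
                      return 0;
                  }
                  ```

                  With peer-to-peer DMA the importing GPU could instead read the exporter's VRAM directly over the bus, skipping that system RAM staging copy.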
                  Last edited by mdedetrich; 10 November 2022, 10:19 AM.



                  • #19
                    Originally posted by mdedetrich View Post

                    EDIT: I see what you mean, it's direct memory access from one GPU to another. This would definitely help, but since we are talking about laptops with PCIe 5, we probably won't see this technology in consumer laptops for some time, so in the meantime this mux-switching API is still useful for laptops that already have a MUX.

                    It's also going to take up PCIe lanes, which could become a problem if they are already being used (which is more likely on laptops, since they have fewer lanes in the first place), and I am not sure whether you need something physical to set it up; i.e. DMA over PCIe 5 would probably work without problems on a standard desktop motherboard with two discrete GPUs, but on a laptop with an iGPU? Peer-to-peer DMA seems to be more catered to direct storage (i.e. putting data directly from an NVMe SSD into VRAM) and maybe compute; I haven't ever seen it discussed between an iGPU and another GPU.

                    An actual MUX is still the far superior solution.
                    PCIe 5 is not technically required; I mentioned it just because it has so much bandwidth that it would lessen any possible bottleneck once peer-to-peer DMA is in place.
                    ## VGA ##
                    AMD: X1950XTX, HD3870, HD5870
                    Intel: GMA45, HD3000 (Core i5 2500K)



                    • #20
                      Originally posted by darkbasic View Post

                      PCIe 5 is not technically required; I mentioned it just because it has so much bandwidth that it would lessen any possible bottleneck once peer-to-peer DMA is in place.
                      This I understand, but the other points I made still apply.

