NVIDIA Proposing New Linux API For Dynamic Mux Switching With Modern Dual-GPU Laptops


  • FireBurn
    replied
    PRIME works great, please don't f*ck that up for those of us who don't use NVIDIA's sh*tty binary driver.



  • darkbasic
    replied
    Originally posted by toughy View Post

    Disadvantages of a MUX? Would you care to elaborate?
    I'm sorry but I don't remember. A dev wrote something about it but I can't find it either.



  • cj.wijtmans
    replied
    I wonder how AMD will handle it when all Zen 4 chips have an iGPU. Will it route the APU-accelerated desktop output to the dGPU display driver? In that case the performance impact wouldn't matter much, but it could matter in terms of desktop responsiveness. Probably negligible though.



  • toughy
    replied
    I really hope this isn't another API that will end up proprietary to NVIDIA.



  • toughy
    replied
    Originally posted by darkbasic View Post

    ... achieve way less penalty without all the disadvantages of a MUX.
    Disadvantages of a MUX? Would you care to elaborate?



  • sarmad
    replied
    If the effort spent on making hybrid laptops work properly had instead been spent on enabling dGPUs to scale their power usage down to iGPU levels, we would have a proper solution by now: one that saves power, reduces complexity in both hardware and software, and frees up die space on the CPU for more cache or more cores, or simply less heat.



  • mdedetrich
    replied
    Originally posted by darkbasic View Post

    PCIe 5 is not technically required; I mentioned it just because it has so much bandwidth that it would lessen any possible bottleneck once Peer-to-Peer DMA is in place.
    I understand that, but the other points I made still apply.



  • darkbasic
    replied
    Originally posted by mdedetrich View Post

    EDIT: I see what you mean, it's direct memory access from one GPU to another. This would definitely help, but since we are talking about laptops with PCIe 5, we probably won't see this technology in consumer laptops for some time, so in the meantime this is still useful for laptops that already have a mux.

    It's also going to take up PCIe lanes, which could become a bottleneck if they are already being used (more likely on laptops, which have fewer lanes in the first place), and I'm not sure whether you need something physical to set it up. I.e. DMA over PCIe 5 would probably work without problems on a standard desktop motherboard with two discrete GPUs, but on a laptop with an iGPU? Peer-to-Peer DMA seems to be catered more to direct storage (i.e. putting data directly from an NVMe SSD into VRAM) and maybe compute; I haven't ever seen it talked about between an iGPU and another GPU.

    An actual Mux is still the far superior solution.
    PCIe 5 is not technically required; I mentioned it just because it has so much bandwidth that it would lessen any possible bottleneck once Peer-to-Peer DMA is in place.



  • mdedetrich
    replied
    Originally posted by darkbasic View Post

    AFAIK it's because of the lack of Peer to Peer DMA on PCIe:
    https://www.phoronix.com/forums/foru...86#post1284286
    With Peer-to-Peer DMA and PCIe 5 x16, the bottleneck should be negligible.
    EDIT: I see what you mean, it's direct memory access from one GPU to another. This would definitely help, but since we are talking about laptops with PCIe 5, we probably won't see this technology in consumer laptops for some time, so in the meantime this is still useful for laptops that already have a mux.

    It's also going to take up PCIe lanes, which could become a bottleneck if they are already being used (more likely on laptops, which have fewer lanes in the first place), and I'm not sure whether you need something physical to set it up. I.e. DMA over PCIe 5 would probably work without problems on a standard desktop motherboard with two discrete GPUs, but on a laptop with an iGPU? Peer-to-Peer DMA seems to be catered more to direct storage (i.e. putting data directly from an NVMe SSD into VRAM) and maybe compute; I haven't ever seen it talked about between an iGPU and another GPU.

    An actual Mux is still the far superior solution.
    Last edited by mdedetrich; 10 November 2022, 10:19 AM.



  • darkbasic
    replied
    Originally posted by mdedetrich View Post

    This sounds like an exaggeration to me, because even a highly optimised solution on Windows (which is where NVIDIA puts its priority) has a non-trivial performance penalty (that 10-15% performance penalty was on Windows, not Linux).

    I mean, you won't notice a difference doing desktop stuff, but it does show when gaming, which is not uncommon considering dual-GPU laptops tend to be gaming ones.
    AFAIK it's because of the lack of Peer to Peer DMA on PCIe:
    Phoronix: Radeon Gallium3D Picks Up A Nice Performance Optimization For iGPU/dGPU PRIME Setups
    https://www.phoronix.com/scan.php?page=news_item&px=RadeonSI-PRIME-ASync-SDMA-Copy

    With Peer-to-Peer DMA and PCIe 5 x16, the bottleneck should be negligible.
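    The bandwidth claim above can be sanity-checked with back-of-envelope arithmetic. The sketch below is an illustration, not a measurement from this thread: the PCIe figures come from the public interface spec, and the 4K/144 Hz framebuffer-copy workload is a hypothetical example of what a PRIME-style cross-GPU frame copy costs.

    ```python
    # Rough estimate: how much of a PCIe 5.0 x16 link would full-frame
    # PRIME copies actually consume? (Spec figures, hypothetical workload.)

    def pcie_bandwidth_gbps(gt_per_s: float, lanes: int) -> float:
        """Usable bandwidth in GB/s, accounting for the 128b/130b
        line encoding used by PCIe 3.0 and newer."""
        return gt_per_s * (128 / 130) / 8 * lanes

    def frame_copy_gbps(width: int, height: int, fps: int, bpp: int = 4) -> float:
        """Bandwidth needed to copy every rendered frame across the bus
        (bpp = bytes per pixel, 4 for a typical 32-bit framebuffer)."""
        return width * height * bpp * fps / 1e9

    pcie5_x16 = pcie_bandwidth_gbps(32.0, 16)      # ~63 GB/s
    copy_4k144 = frame_copy_gbps(3840, 2160, 144)  # ~4.8 GB/s

    print(f"PCIe 5.0 x16: {pcie5_x16:.1f} GB/s")
    print(f"4K@144Hz frame copies: {copy_4k144:.2f} GB/s "
          f"({copy_4k144 / pcie5_x16:.1%} of the bus)")
    ```

    By this estimate, even an unoptimised copy of every 4K frame at 144 Hz uses well under 10% of a PCIe 5.0 x16 link, which is consistent with the "negligible bottleneck" argument, though real-world overhead (latency, lane sharing with NVMe, laptops shipping fewer than 16 lanes to the dGPU) would eat into that margin.
    
    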

