
Thread: PRIME DRI2 Offloading Pulled Into X.Org Server

  1. #1
    Join Date
    Jan 2007
    Posts
    14,561

    Default PRIME DRI2 Offloading Pulled Into X.Org Server

    Phoronix: PRIME DRI2 Offloading Pulled Into X.Org Server

    The X.Org Server 1.13 release for this September just got a lot more exciting now that David Airlie's patch-set for providing initial PRIME DRI2 off-loading and other modern GPU functionality was merged this weekend...

    http://www.phoronix.com/vr.php?view=MTEzNjE

  2. #2
    Join Date
    Oct 2008
    Posts
    3,079

    Default I don't think that's quite right

    "patches he's been viciously working on"
    Voraciously?

  3. #3
    Join Date
    Dec 2011
    Posts
    145

    Default

    KUDOS! We all love you

    However,
    I'm not completely sure of the immediate implications: I know that this means one GPU can render while the other displays, and I know this works with DisplayLink multiseat devices (as seen in the video). Does this also mean that dual-GPU laptops (non-muxed) will work with X Server 1.13? Maybe someone could sum it up; I got a bit lost among all the articles.

    Thanks.

  4. #4

    Default

    So will this ongoing GPU work eventually help Nvidia Optimus users?

    Gaming News - www.gamingonlinux.com

  5. #5
    Join Date
    Nov 2008
    Location
    Madison, WI, USA
    Posts
    864

    Default

    Quote Originally Posted by Serafean View Post
    KUDOS! We all love you

    However,
    I'm not completely sure of the immediate implications: I know that this means one GPU can render while the other displays, and I know this works with DisplayLink multiseat devices (as seen in the video). Does this also mean that dual-GPU laptops (non-muxed) will work with X Server 1.13? Maybe someone could sum it up; I got a bit lost among all the articles.

    Thanks.
    I believe the answer to this question is: yes, dual-GPU laptops without a MUX should be usable with X Server 1.13... probably with a bit of config work.

    In this case, you'd be using a dedicated Nvidia/AMD/etc. GPU to render, and that GPU would then forward the resulting buffer to the built-in Intel/whatever IGP for composition and display. As long as the drivers for both video devices support sharing buffers in the right way, this can happen (I'm assuming that this probably requires GEM/TTM to work). I'm guessing that there's still user configuration required to determine which cards to use when... A demo video from a while back hinted at environment variable changes to determine which card does the rendering work, but eventually this could probably be automated in some manner.
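    For what it's worth, the buffer forwarding between the two GPUs is the dma-buf/PRIME sharing that libdrm already exposes to userspace. Here's a minimal sketch of just that step, not the code X.Org actually uses; the device paths, the resolution and the use of a dumb buffer are my own assumptions for illustration:

    Code:
    /* Sketch of the buffer-sharing step underneath PRIME offload, using
     * libdrm's dma-buf (PRIME) export/import calls. The device paths,
     * resolution and use of a dumb buffer are illustrative assumptions.
     * Build roughly with: gcc prime_share.c $(pkg-config --cflags --libs libdrm)
     */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <xf86drm.h>
    #include <drm_mode.h>

    int main(void)
    {
        /* Assumption: card1 is the discrete GPU that renders,
         * card0 is the IGP that scans out to the panel. */
        int render_fd  = open("/dev/dri/card1", O_RDWR);
        int display_fd = open("/dev/dri/card0", O_RDWR);
        if (render_fd < 0 || display_fd < 0) {
            perror("open /dev/dri/card*");
            return 1;
        }

        /* Create a dumb buffer on the render GPU just to have a GEM
         * handle to share; real rendering would target something similar. */
        struct drm_mode_create_dumb create;
        memset(&create, 0, sizeof(create));
        create.width  = 1920;
        create.height = 1080;
        create.bpp    = 32;
        if (drmIoctl(render_fd, DRM_IOCTL_MODE_CREATE_DUMB, &create)) {
            perror("DRM_IOCTL_MODE_CREATE_DUMB");
            return 1;
        }

        /* Export the GEM handle as a dma-buf file descriptor... */
        int prime_fd;
        if (drmPrimeHandleToFD(render_fd, create.handle, DRM_CLOEXEC, &prime_fd)) {
            perror("drmPrimeHandleToFD");
            return 1;
        }

        /* ...and import it on the display GPU, which gets its own handle
         * pointing at the same underlying memory. */
        uint32_t display_handle;
        if (drmPrimeFDToHandle(display_fd, prime_fd, &display_handle)) {
            perror("drmPrimeFDToHandle");
            return 1;
        }

        printf("shared buffer: render handle %u -> display handle %u\n",
               create.handle, display_handle);

        close(prime_fd);
        close(render_fd);
        close(display_fd);
        return 0;
    }

    The point is simply that the render GPU's buffer ends up addressable by the display GPU's driver, which is what lets the IGP composite or scan out frames the discrete GPU produced.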

    Unfortunately, or fortunately, I don't have any hardware of my own to test this on, although maybe it's possible to get this working using the HD4200 IGP in my 785G motherboard for display, with my Radeon 6850 acting as the renderer/offload target.

  6. #6
    Join Date
    Dec 2011
    Posts
    145

    Default

    I believe the answer to this question is: yes, dual-GPU laptops without a MUX should be usable with X Server 1.13... probably with a bit of config work.
    This is fantastic news!

    Unfortunately, or fortunately, I don't have any hardware of my own to test this on, although maybe it's possible to get this working using the HD4200 IGP in my 785G motherboard for display, with my Radeon 6850 acting as the renderer/offload target.
    As I understand it, this framework is very generic... meaning the offloading can be done in whichever direction you want, no? (IGP renders and the 6850 displays, or the other way around.)

    This is what I love about linux/opensource : it usually takes longer, but when something is implemented, it feels as though no corners were cut (meaning it works in a very generic way, leaving a lot of options)

    Now I guess we need to wait for DEs to implement the new RandR protocol to make it user-friendly...
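    Speaking of the new RandR bits: as I understand it, RandR 1.4 exposes each GPU as a "provider", and wiring one up as a render offload source for another is a single request. Here's a rough sketch of what that might look like from a client, assuming the provider API lands in libXrandr more or less as it is in the protocol; which provider index is the discrete GPU is purely my assumption:

    Code:
    /* Sketch: list RandR 1.4 providers and wire provider[1] up as a render
     * offload source for provider[0]. Which index is the dGPU and which is
     * the IGP is an assumption here; a real tool would check capabilities.
     * Build roughly with: gcc providers.c -lX11 -lXrandr
     */
    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/Xrandr.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy)
            return 1;
        Window root = DefaultRootWindow(dpy);

        XRRScreenResources *res = XRRGetScreenResources(dpy, root);
        XRRProviderResources *pr = XRRGetProviderResources(dpy, root);
        if (!res || !pr)
            return 1;

        for (int i = 0; i < pr->nproviders; i++) {
            XRRProviderInfo *info = XRRGetProviderInfo(dpy, res, pr->providers[i]);
            printf("provider %d: %s (capabilities 0x%x)\n",
                   i, info->name, info->capabilities);
            XRRFreeProviderInfo(info);
        }

        /* Make provider 1 (assumed: the discrete GPU) render on behalf of
         * provider 0 (assumed: the IGP driving the panel). */
        if (pr->nproviders >= 2)
            XRRSetProviderOffloadSink(dpy, pr->providers[1], pr->providers[0]);

        XRRFreeProviderResources(pr);
        XRRFreeScreenResources(res);
        XCloseDisplay(dpy);
        return 0;
    }

    A DE could presumably hide exactly this behind a settings panel, which is what I mean by waiting for them to implement it.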

    Another dumb question: I have seen SLI/Crossfire mentioned along with this work, but I don't really see how it could help there.

  7. #7
    Join Date
    Dec 2011
    Posts
    2,021

    Default What about Wayland?

    What about GPU switching, GPU offloading, multi-GPU, etc for Wayland?

  8. #8
    Join Date
    Nov 2008
    Posts
    77

    Default

    Quote Originally Posted by Serafean View Post
    Another dumb question: I have seen SLI/Crossfire mentioned along with this work, I don't really see how it could help there.
    Right now, not at all, but in the long run the buffer sharing part of this is one of several required steps to get SLI/Crossfire working.

    Quote Originally Posted by uid313 View Post
    What about GPU switching, GPU offloading, multi-GPU, etc for Wayland?
    All the kernel, libdrm and Mesa work is shared, and the rest is comparatively trivial to do in Wayland too once you've already got it working in X.Org. You can't copy-paste code from X.Org to Wayland, but most of the architectural design work can be reused.
    Last edited by Jonno; 07-09-2012 at 06:06 PM.

  9. #9
    Join Date
    Jan 2009
    Posts
    1,331

    Default

    Quote Originally Posted by phoronix View Post
    Phoronix: PRIME DRI2 Offloading Pulled Into X.Org Server

    The X.Org Server 1.13 release for this September just got a lot more exciting now that David Airlie's patch-set for providing initial PRIME DRI2 off-loading and other modern GPU functionality was merged this weekend...

    http://www.phoronix.com/vr.php?view=MTEzNjE
    Does anyone understand how they manage the GPU hotplug? My understanding of PCIe is that its architecture doesn't support those kinds of changes while running. Are all the devices actually activated on boot, but then unused ones put into the lowest power state until needed?

  10. #10
    Join Date
    Oct 2007
    Posts
    91

    Default

    Quote Originally Posted by liam View Post
    Does anyone understand how they manage the GPU hotplug? My understanding of PCIe is that its architecture doesn't support those kinds of changes while running. Are all the devices actually activated on boot, but then unused ones put into the lowest power state until needed?
    I don't know about PCIe hotplug support; however, I'm fairly sure the point of this hotplug support is to allow USB graphics devices to be hotplugged.
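    As far as I understand it, a hotplugged USB graphics adapter just shows up as a new DRM device via udev, and the server watches for those events. Here's a toy illustration of watching for them from userspace (this is my own sketch, not the server's actual hotplug code):

    Code:
    /* Sketch: watch udev for DRM devices appearing/disappearing, which is
     * roughly how userspace learns about a hotplugged USB graphics adapter.
     * Build roughly with: gcc drm_hotplug.c -ludev
     */
    #include <poll.h>
    #include <stdio.h>
    #include <libudev.h>

    int main(void)
    {
        struct udev *udev = udev_new();
        if (!udev)
            return 1;

        /* Listen to udev device events, filtered to the drm subsystem. */
        struct udev_monitor *mon = udev_monitor_new_from_netlink(udev, "udev");
        if (!mon)
            return 1;
        udev_monitor_filter_add_match_subsystem_devtype(mon, "drm", NULL);
        udev_monitor_enable_receiving(mon);
        int fd = udev_monitor_get_fd(mon);

        for (;;) {
            /* The monitor fd is non-blocking, so wait for it with poll(). */
            struct pollfd pfd = { .fd = fd, .events = POLLIN };
            if (poll(&pfd, 1, -1) <= 0)
                continue;

            struct udev_device *dev = udev_monitor_receive_device(mon);
            if (!dev)
                continue;
            const char *action  = udev_device_get_action(dev);  /* e.g. "add", "remove" */
            const char *devnode = udev_device_get_devnode(dev); /* e.g. /dev/dri/card1 */
            if (action && devnode)
                printf("drm device %s: %s\n", devnode, action);
            udev_device_unref(dev);
        }

        /* Not reached in this toy loop; cleanup shown for completeness. */
        udev_monitor_unref(mon);
        udev_unref(udev);
        return 0;
    }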
