PRIME DRI2 Offloading Pulled Into X.Org Server

  • liam
    replied
    Originally posted by Serafean View Post
    from http://electronicdesign.com/products...ra-of-ras35149 : From this I deduce that PCIe hotplugging is possible.
    However, for switchable graphics I think VGA_Switcheroo is enough. Right now I don't see a scenario where a GPU would be hotplugged.

    And that's what I get for referencing wikipedia.
    Thanks for the reference, and I stand corrected.



  • Serafean
    replied
    from http://electronicdesign.com/products...ra-of-ras35149 :
    PCI Express, since its inception, was designed to comprehend hot-plug functionality. As such, hot-plug registers are part of PCIe's capabilities, providing the operating system with a standard hot-plug hardware register interface accessible through configuration access on the PCIe bus. PCI Express also defines a standard usage model by defining the hot-plug capabilities required of hardware at a base architectural level. The native support for hot-plug control enables innovative server module form factors to be inserted or removed under power without requiring that the chassis be opened.

    In a PCIe-based server system, hot-plug slots may be sourced either from the chipset or the downstream ports of a switch. As PCIe is a point-to-point bus, switches are usually required for slot expansion and creation because of the limited number of ports of the root complex. These switches appear to the software as PCI-to-PCI bridges, and each port implementing a slot that is hot-plug capable will contain its own set of hot-plug registers in the bridge configuration space. These registers report the presence or absence of defined hot-plug mechanisms to software. They contain control for power as well as indicators on the slot, along with notification of card insertion/removal, latch open/close and of the attention button press. The software is notified by sending an interrupt upstream to the root complex. The notification option is implementation-dependent.
    From this I deduce that PCIe hotplugging is possible.
    However, for switchable graphics I think VGA_Switcheroo is enough. Right now I don't see a scenario where a GPU would be hotplugged.
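
    The hot-plug registers quoted above are discoverable by software. As a minimal sketch, here is how the low bits of a PCIe Slot Capabilities register could be decoded (bit positions per the PCIe base specification; the sample register value is made up for illustration):

    ```python
    # Names of the low bits of the PCIe Slot Capabilities register,
    # per the PCIe base specification.
    SLOT_CAP_BITS = {
        0: "Attention Button Present",
        1: "Power Controller Present",
        2: "MRL Sensor Present",
        3: "Attention Indicator Present",
        4: "Power Indicator Present",
        5: "Hot-Plug Surprise",
        6: "Hot-Plug Capable",
    }

    def decode_slot_caps(value):
        """Return the names of the capability bits set in `value`."""
        return [name for bit, name in SLOT_CAP_BITS.items() if value & (1 << bit)]

    # Example (made-up) register value: bits 0, 1, 3, 5 and 6 set.
    print(decode_slot_caps(0x6B))
    ```

    A slot reporting "Hot-Plug Capable" is exactly the case the quoted article describes: the OS can be notified of insertion/removal through these registers.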



  • liam
    replied
    Originally posted by curaga View Post
    At least some pcie must support hotplug, else how do you explain thunderbolt? (pci-e via a different cable, hotpluggable)
    My guess is that thunderbolt employs a dummy device. When you attach something, the dummy device just appears to move from a low power state to an active state.



  • curaga
    replied
    At least some pcie must support hotplug, else how do you explain thunderbolt? (pci-e via a different cable, hotpluggable)



  • sandain
    replied
    Originally posted by liam View Post
    Does anyone understand how they manage the GPU hotplug? My understanding of PCIe is that its architecture doesn't support those kinds of changes while running. Are all the devices actually activated on boot, but then unused ones put into the lowest power state until needed?
    I don't know about PCIe hotplug support. However, I'm fairly sure the point of this hotplug support is to allow USB graphics devices to be hotplugged.



  • liam
    replied
    Originally posted by phoronix View Post
    Phoronix: PRIME DRI2 Offloading Pulled Into X.Org Server

    The X.Org Server 1.13 release for this September just got a lot more exciting now that David Airlie's patch-set for providing initial PRIME DRI2 off-loading and other modern GPU functionality was merged this weekend...

    http://www.phoronix.com/vr.php?view=MTEzNjE
    Does anyone understand how they manage the GPU hotplug? My understanding of PCIe is that its architecture doesn't support those kinds of changes while running. Are all the devices actually activated on boot, but then unused ones put into the lowest power state until needed?



  • Jonno
    replied
    Originally posted by Serafean View Post
    Another dumb question: I have seen SLI/Crossfire mentioned along with this work, I don't really see how it could help there.
    Right now, not at all, but in the long run the buffer sharing part of this is one of several required steps to get SLI/Crossfire working.

    Originally posted by uid313 View Post
    What about GPU switching, GPU offloading, multi-GPU, etc for Wayland?
    All kernel, libdrm and mesa work is shared, and the rest is (comparably) trivial to do in Wayland too when you already got it working once (in xorg). You can't copy-paste code from xorg to wayland, but most architecture design work can be reused.
    Last edited by Jonno; 09 July 2012, 06:06 PM.



  • uid313
    replied
    What about Wayland?

    What about GPU switching, GPU offloading, multi-GPU, etc for Wayland?



  • Serafean
    replied
    Originally posted by Veerappan View Post
    I believe that the answer to this question is: Yes, dual-GPU laptops without a MUX should be usable with X server 1.13... probably with a bit of config work
    This is fantastic news!

    Unfortunately, or fortunately, I don't have any hardware of my own to test this on, although maybe it's possible to get this working using the HD4200 IGP in my 785G motherboard for display along with my Radeon 6850 acting as the renderer/offload target.
    As I understand it, this framework is very generic... This means that the offloading can be done in any way you want, no? (meaning IGP renders, 6850 displays; or the other way around).

    This is what I love about Linux/open source: it usually takes longer, but when something is implemented, it feels as though no corners were cut (meaning it works in a very generic way, leaving a lot of options).

    Now I guess we need to wait for DEs to implement the new RandR protocol to make it user-friendly...

    Another dumb question: I have seen SLI/Crossfire mentioned along with this work, I don't really see how it could help there.



  • Veerappan
    replied
    Originally posted by Serafean View Post
    KUDOS! We all love you

    However,
    I'm not completely sure of the immediate implications: I know that this means one GPU can render while the other displays. I know this works with DisplayLink multiseat devices (as seen in the video). Does this also mean that dual-GPU laptops (non-muxed) will work with X server 1.13? Maybe someone could sum it up, I got a bit lost among all the articles.

    Thanks.
    I believe that the answer to this question is: Yes, dual-GPU laptops without a MUX should be usable with X server 1.13... probably with a bit of config work

    In this case, you'd be using a dedicated Nvidia/AMD/etc GPU to render, and that GPU would forward the resulting buffer to the built-in Intel/whatever IGP for (presumably) composition and display. As long as the drivers for both video devices support sharing buffers in the correct manner, this can happen (I'm assuming this probably requires GEM/TTM to work). I'm guessing there's still user configuration required to determine which cards to use when... A demo video from a while back hinted at environment variable changes to determine which card does the rendering work, but eventually this could probably be automated in some manner.

    Unfortunately, or fortunately, I don't have any hardware of my own to test this on, although maybe it's possible to get this working using the HD4200 IGP in my 785G motherboard for display along with my Radeon 6850 acting as the renderer/offload target.
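
    The environment-variable selection mentioned above is a sketch of how offload selection could look, assuming the DRI_PRIME variable and the RandR provider commands from Airlie's work (provider numbers and names vary per machine; output is illustrative only):

    ```shell
    # List the render/display providers X knows about
    xrandr --listproviders

    # Tell X that the discrete GPU (provider 1) renders for the IGP (provider 0)
    xrandr --setprovideroffloadsink 1 0

    # Run an application on the offload GPU; glxinfo confirms which
    # renderer ended up in use
    DRI_PRIME=1 glxinfo | grep "OpenGL renderer"
    ```

    Without DRI_PRIME set, applications would keep rendering on the default (IGP) provider, which matches the "user configuration required" caveat above.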

