DRI2 Offload Slaves, Output Slaves For September


  • Gusar
    replied
    Originally posted by allquixotic View Post
    The LucidLogix Virtu MVP hardware on the motherboard can support cases like that.
    Cool.

    Though what I want to know is whether dma_buf/PRIME will support this without LucidLogix Virtu. (Not that I personally need it; my desktop has only one display, I'm just curious.)
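
    A hedged sketch of how one could check for that support without any Lucid hardware, assuming a stack that already carries the xserver 1.13 / RandR 1.4 provider work described in the article (the command comes from that work; output format and provider names vary by driver):

        # List the GPUs ("providers") the X server knows about and the
        # roles each can play (offload source/sink, output source/sink).
        $ xrandr --listproviders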


  • allquixotic
    replied
    Originally posted by Gusar View Post
    Hmm, is it possible to disable a discrete PCIe card at runtime? That'd be interesting.

    Otherwise, there's another scenario I was thinking of - in case the discrete card can only handle two displays, could one use the integrated GPU as an output slave for a third display?
    The LucidLogix Virtu MVP hardware on the motherboard can support cases like that. You basically forget about which cards have which monitors plugged into them; the operating system just makes all your displays come on seamlessly as part of one big desktop, and you can still decide, on a per-application basis, exactly which GPU you want doing the rendering (the integrated GPU, the discrete GPU, or both).

    I think for the "hybrid" rendering mode, where both the integrated and discrete GPUs are involved, it intelligently picks one card (probably the one that's directly connected to the monitor it's outputting on) to do the simpler operations, but it distributes shader work across the IGP and the discrete GPU. This is probably because the shader pipeline is the easiest resource to distribute across GPUs in terms of driver and hardware complexity.
    Last edited by allquixotic; 09 June 2012, 10:20 AM.


  • Gusar
    replied
    Originally posted by allquixotic View Post
    Will this work for systems that don't have "Optimus", like Ivy Bridge desktops that also happen to have a discrete GPU?
    Hmm, is it possible to disable a discrete PCIe card at runtime? That'd be interesting.

    Otherwise, there's another scenario I was thinking of - in case the discrete card can only handle two displays, could one use the integrated GPU as an output slave for a third display?
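
    A hedged sketch of what that output-slave case might look like once the RandR 1.4 provider support described in the article is in place. The provider and output names below are illustrative only (they depend on the drivers and hardware in use), and the commands assume an xrandr built against the new provider API:

        # Use the integrated GPU ("Intel") as an output slave for the
        # discrete card ("radeon"): the discrete GPU renders the desktop
        # and the result is copied over for the third display.
        $ xrandr --setprovideroutputsource Intel radeon

        # The slave GPU's connectors then show up as ordinary outputs
        # and can be arranged like any other display.
        $ xrandr --output HDMI-3 --auto --right-of DP-1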


  • allquixotic
    replied
    Will this work for systems that don't have "Optimus", like Ivy Bridge desktops that also happen to have a discrete GPU?

    Also, I guess this doesn't even scratch the surface of the LucidLogix Virtu MVP hardware... would that hardware provide any advantage if support for it were enabled? From what I understand, the only thing it would give us that we won't already have once this work is merged is the ability to have a single application rendered by two cards simultaneously. So if you have a weak integrated GPU and a weak discrete GPU, you could combine them and get more FPS out of your app than either GPU could manage alone. It sounds kind of nice, but in my case I just use my ridiculously fast HD7970 whenever I need oomph -- and the Intel 3770K when I don't.

    Oh, and the LucidLogix hardware would also allow different applications to use different hardware within the same "X server" (on Windows, the same desktop). So you could create four hardware-accelerated windows on your desktop and have the top-left one render only on the IGP, the top-right only on the discrete GPU, the bottom-left on both, etc. Can we do this without the LucidLogix hardware support? Or would we need that hardware?
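
    For the per-window mixing question: with the offload-slave support the article describes, something along these lines should be possible on a single X server with no Lucid hardware involved (a hedged sketch; glxgears is just a stand-in for any GL client, and the provider names are illustrative):

        # Make the discrete GPU available as a render offload source for
        # the integrated GPU that is driving the desktop.
        $ xrandr --setprovideroffloadsink nouveau Intel

        # This window renders on the integrated GPU (the default)...
        $ glxgears &

        # ...while this one renders on the discrete GPU, with the result
        # copied back to the integrated GPU for display.
        $ DRI_PRIME=1 glxgears &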


  • LLStarks
    replied
    Originally posted by airlied View Post
    The first implementation will just involve setting an env var, like DRI_PRIME=1; then I'm hopefully going to add a gnome-shell extension to make the same possible from a launcher of some sort.

    After that people can do what they like ;-)

    Dave.
    I'm sold. The user side of things is going to be far more awesome than I was expecting.

    I guess all that's left to do is have devs pester Nvidia to resubmit their EXPORT_SYMBOL patch. I need working VDPAU.
    Last edited by LLStarks; 08 June 2012, 09:38 PM.


  • Nepenthes
    replied
    Originally posted by airlied View Post
    The first implementation will just involve setting an env var, like DRI_PRIME=1; then I'm hopefully going to add a gnome-shell extension to make the same possible from a launcher of some sort.

    After that people can do what they like ;-)

    Dave.
    Thanks for the straight answer!


  • airlied
    replied
    Originally posted by snadrus View Post
    So you're using a low-power Intel card, and you (your software's devs) will have two options when you want greater rendering power:
    #2. Enable the 2nd GPU as a render slave and have both GPUs helping, knowing the program will eventually end and we'll want to go back to the lower-power GPU.
    #3. Use a much riskier GPU switch, lose the additional render power of the slower GPU (vs #2), then do the risky switch back when done.

    What's the benefit of #3 outside:
    - to un-hotplug (remove) the "integrated" card (unlikely)
    - power-savings while running intensive rendering software

    Why spend valuable XOrg dev time on this?
    The switching case is necessary for some older MUXed systems, and also:
    Lots of laptops have outputs connected to the nvidia GPU on docking stations; in order to use those outputs you really have to switch to running the nvidia card fully (in some situations this also involves copying the panel frontbuffer back to the intel for display). Anything else would just suck. Eventually it might be possible to avoid this situation with xinerama improvements, but it's non-trivial.

    Also, how valuable Xorg dev time gets spent is decided by the developers and their employers, and both my employer and I would quite like to have this project finished already!
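
    For reference, on MUXed machines the existing full-switch mechanism is vga_switcheroo; a hedged sketch of how it is driven (it requires a kernel with VGA switcheroo support, debugfs mounted, and no clients still holding the GPU being powered down):

        # Show which GPUs vga_switcheroo knows about and which is active.
        $ sudo cat /sys/kernel/debug/vgaswitcheroo/switch

        # Switch the MUX to the discrete GPU (DIS) or the integrated one
        # (IGD); DDIS/DIGD defer the switch until the X server restarts.
        $ echo DIS | sudo tee /sys/kernel/debug/vgaswitcheroo/switch

        # Power off whichever GPU is not currently driving the display.
        $ echo OFF | sudo tee /sys/kernel/debug/vgaswitcheroo/switch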


  • airlied
    replied
    Originally posted by Nepenthes View Post
    As I understand it, this is a great step forward, and getting these new features in the near future is very good news.
    But there is still no clear explanation of how these changes will affect the end-user experience.
    More precisely, about Optimus support with the nouveau driver: if it is possible (with GPU offloading in xserver 1.13), how do I ask my computer to "run this piece of software on the discrete GPU"?

    I currently use Bumblebee (and I took a close look at how it works with VirtualGL and the proprietary nvidia drivers), and I look forward to the day I get the same level of functionality in a non-hackish way, with a working installation of xserver, mesa and open source drivers.
    For now, even the Bumblebee team (I don't know if they have new information about this issue) doesn't seem to have any idea how the xserver changes will help the end user in the case of Optimus laptops.
    The first implementation will just involve setting an env var, like DRI_PRIME=1; then I'm hopefully going to add a gnome-shell extension to make the same possible from a launcher of some sort.

    After that people can do what they like ;-)

    Dave.
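
    Concretely, usage would presumably end up looking something like this once that first implementation lands (a hedged sketch built only on the env var named above; glxinfo and glxgears are just example clients):

        # Normal invocation: renders on the default (integrated) GPU.
        $ glxinfo | grep "OpenGL renderer"

        # Same client, offloaded to the second GPU via the env var.
        $ DRI_PRIME=1 glxinfo | grep "OpenGL renderer"
        $ DRI_PRIME=1 glxgears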


  • snadrus
    replied
    To my eye:

    So you're using a low-power Intel card, and you (your software's devs) will have two options when you want greater rendering power:
    #2. Enable the 2nd GPU as a render slave and have both GPUs helping, knowing the program will eventually end and we'll want to go back to the lower-power GPU.
    #3. Use a much riskier GPU switch, lose the additional render power of the slower GPU (vs #2), then do the risky switch back when done.

    What's the benefit of #3 outside:
    - to un-hotplug (remove) the "integrated" card (unlikely)
    - power-savings while running intensive rendering software

    Why spend valuable XOrg dev time on this?
    Last edited by snadrus; 08 June 2012, 12:06 PM. Reason: clarity


  • 89c51
    replied
    Originally posted by agd5f View Post
    It's specific to X. The underlying technologies that enable it (kms, dma_buf, prime) are not tied to a specific windowing system, but all windowing systems that want to implement this will need something like the X work that Dave is currently working on. It's highly windowing system specific.
    Thanks

    I remember that a long time ago there was a discussion on Wayland's mailing list (or someone mentioned talks with the GPU devs, I don't remember exactly) about supporting stuff like multi-monitor, multi-GPU, switching and other funky things, but I haven't heard anything since. Hence the question.
