DRI2 Offload Slaves, Output Slaves For September

  • #11
    Originally posted by snadrus View Post
    So you're using a low-power Intel card, and you (or rather your software's devs) will have 2 options when they want greater rendering:
    #2. Enable the 2nd GPU as a render slave and have both GPUs helping, knowing our program will end and we'll want to go back to the lower-power GPU.
    #3. Use a much riskier GPU switch, lose the additional render power of the slower GPU (vs. #2), then do another risky switch back when done.

    What's the benefit of #3 outside of:
    - to un-hotplug (remove) the "integrated" card (unlikely)
    - power-savings while running intensive rendering software

    Why spend valuable XOrg dev time on this?
    The switching case is necessary for some older MUXed systems, and also:

    Lots of laptops have outputs connected to the nvidia GPU on docking stations; in order to use those outputs you have to really switch to running the nvidia card fully (in some situations this also involves copying the panel frontbuffer back to the intel GPU for display). Anything else would just suck. Eventually it might be possible with Xinerama improvements to avoid this situation, but it's non-trivial.
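
    For a concrete picture of which GPU sits behind which outputs, listing the RandR providers helps. A minimal sketch, assuming xrandr grows a --listproviders query as part of this work (the output format shown in the comments is illustrative, not guaranteed):

        # Sketch: list RandR providers to see which GPUs are present and which
        # roles (Source Output, Sink Output, Sink Offload) each can play.
        # Assumes xrandr gains --listproviders with the provider work.
        import subprocess

        out = subprocess.check_output(["xrandr", "--listproviders"]).decode()
        for line in out.splitlines():
            # per-provider lines look roughly like:
            #   Provider 0: id: 0x46 cap: 0xb, Source Output, Sink Offload ... name:Intel
            if line.startswith("Provider "):
                print(line)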

    Also, how valuable Xorg dev time gets used is decided by the developers and their employers, and both my employer and I would quite like to have this project finished already!



    • #12
      Originally posted by airlied View Post
      the first implementation will just involve setting an env var, like DRI_PRIME=1, then I'm hopefully going to add a gnome-shell extension to make the same possible from a launcher of some sort.

      After that people can do what they like ;-)

      Dave.
      Thanks for the straight answer!
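
      Just to picture the user side: here is a minimal launcher sketch built only on the DRI_PRIME env var from Dave's post (the prime-run.py name is made up; a gnome-shell extension would presumably do the equivalent under the hood):

          #!/usr/bin/env python
          # prime-run.py (hypothetical name): run a program with DRI_PRIME=1 in its
          # environment so it renders on the offload GPU instead of the default one.
          import os
          import sys

          if len(sys.argv) < 2:
              sys.exit("usage: prime-run.py <command> [args...]")

          env = dict(os.environ)
          env["DRI_PRIME"] = "1"   # the env var from Dave's post
          os.execvpe(sys.argv[1], sys.argv[1:], env)

      So something like "python prime-run.py glxgears" would put just that one app on the second GPU while everything else stays on the low-power one.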



      • #13
        Originally posted by airlied View Post
        the first implementation will just involve setting an env var, like DRI_PRIME=1, then I'm hopefully going to add a gnome-shell extension to make the same possible from a launcher of some sort.

        After that people can do what they like ;-)

        Dave.
        I'm sold. The user side of things is going to be far more awesome than I was expecting.

        I guess all that's left to do is have devs pester Nvidia to resubmit their EXPORT_SYMBOL patch. I need working VDPAU.
        Last edited by LLStarks; 08 June 2012, 09:38 PM.



        • #14
          Will this work for systems that don't have "Optimus", like Ivy Bridge desktops that also happen to have a discrete GPU?

          Also, I guess this doesn't even scratch the surface of the LucidLogix Virtu MVP hardware... would supporting that hardware provide any advantage if it were enabled? From what I understand, the only thing it would add that we won't already have once this work is merged is the ability to have a single application rendered by two cards simultaneously. So if you have a weak integrated GPU and a weak discrete GPU, you could combine them and get more FPS out of your app than either GPU could manage alone. It sounds kind of nice, but in my case I just use my ridiculously fast HD7970 whenever I need oomph -- and the Intel 3770K's graphics when I don't.

          Oh, and the LucidLogix would also allow different applications to use different hardware within the same "X server" (on Windows it's on the same desktop). So you could create 4 hardware-accelerated windows on your desktop and have the top-left one only render on the IGP, the top-right only render on the GPU, the bottom-left render on both, etc. Can we do this without the LucidLogix hardware support? Or would we need that hardware?
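
          For the per-application half of that, the env var approach from earlier in the thread should already cover it without any Lucid hardware, since the GPU choice is made per process; splitting one application's rendering across both cards is the part that would need something extra. A rough sketch (glxgears is just a stand-in for any GL app):

              # Sketch: two GL clients on the same desktop, one on the default
              # (integrated) GPU and one offloaded via DRI_PRIME=1. Selection is
              # per process, not per window.
              import os
              import subprocess

              subprocess.Popen(["glxgears"])                           # default GPU
              subprocess.Popen(["glxgears"],
                               env=dict(os.environ, DRI_PRIME="1"))    # offload GPU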



          • #15
            Originally posted by allquixotic View Post
            Will this work for systems that don't have "Optimus", like Ivy Bridge desktops that also happen to have a discrete GPU?
            Hmm, is it possible to disable a discrete PCIe card at runtime? That'd be interesting.

            Otherwise, there's another scenario I was thinking of - in case the discrete card can only handle two displays, could one use the integrated GPU as an output slave for a third display?
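
            If that works, I'd guess the setup ends up being a one-liner once xrandr learns about providers. A sketch, assuming a --setprovideroutputsource command arrives with the RandR provider work ("Intel" and "radeon" stand in for whatever --listproviders reports on a given box):

                # Sketch: use the integrated GPU purely as an output slave while the
                # discrete card does the rendering. Assumes xrandr gains
                # --setprovideroutputsource with the provider work; provider names
                # below are examples.
                import subprocess

                # outputs wired to "Intel" get their images from "radeon"
                subprocess.check_call(
                    ["xrandr", "--setprovideroutputsource", "Intel", "radeon"])

            The integrated card's outputs would then presumably show up in plain xrandr alongside the discrete card's and could be enabled like any other output.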



            • #16
              Originally posted by Gusar View Post
              Hmm, is it possible to disable a discrete PCIe card at runtime? That'd be interesting.

              Otherwise, there's another scenario I was thinking of - in case the discrete card can only handle two displays, could one use the integrated GPU as an output slave for a third display?
              The LucidLogix Virtu MVP hardware on the motherboard can support cases like that. You basically forget about which cards have which monitors plugged into them; the operating system just makes all your displays come on seamlessly as part of one big desktop, and you can still decide, on a per-application basis, exactly which GPU does the rendering (the CPU's integrated graphics, the discrete GPU, or both).

              I think for the "hybrid" mode rendering, where both the integrated and the discrete GPU are involved, it intelligently picks one card (probably the one directly connected to the monitor it's outputting on) to do the simpler operations, but distributes shader work across the IGP and the discrete GPU. That's probably because the shader pipeline is the easiest resource to spread across GPUs in terms of driver and hardware complexity.
              Last edited by allquixotic; 09 June 2012, 10:20 AM.



              • #17
                Originally posted by allquixotic View Post
                The LucidLogix Virtu MVP hardware on the motherboard can support cases like that.
                Cool.

                Though what I want to know is whether dma_buf prime will support this without LucidLogix Virtu. (Not that I personally need this; my desktop has only one display. I'm just curious.)

