DRI2 Offload Slaves, Output Slaves For September


  • #1

    Phoronix: DRI2 Offload Slaves, Output Slaves For September

    Keith Packard has set out his plans for releasing X.Org Server 1.13 this September. There are also some interesting plans expressed by David Airlie for what this update could potentially provide for Linux desktop users...

    http://www.phoronix.com/vr.php?view=MTExNTg

  • #2
    This "second GPU" stuff sounds interesting.

    The main use-case for this "second GPU" is presumably laptop devices with dual graphics chips, but could it also help desktops with two graphics cards? I have a laptop and docking station, and the docking station has a PCIe slot containing a HD4650 card. I've been struggling for a while to get the HD4650 working in this configuration. (It certainly didn't help that the HD4650 needed 'BusID "12:0:0"' in its xorg.conf Device section instead of 'BusID "0c:0:0"', as per lspci output!). The HD4650 is currently connected to my TV, with the laptop's internal graphics chip bound to an external monitor.

    I had most success binding both of these screens together with Xinerama, but this was incompatible with Xrandr and so was hated by Gnome3. However, binding the HD4650 as a "DRI2 offload slave" instead sounds intriguing...

    • #3
      As I understand it, this is a great step forward, and getting these new features in the near future is very good news.
      But there is still no clear explanation of how these changes will affect the end-user experience.
      More precisely, regarding Optimus support with the nouveau driver: if it becomes possible (with GPU offloading in xserver 1.13), how do I ask my computer to "run this piece of software on the discrete GPU"?

      I currently use Bumblebee (and I took a close look at how it works with VirtualGL and the proprietary nvidia drivers), and I look forward to the day I get the same level of functionality in a non-hackish way, with a working installation of xserver and mesa and open source drivers.
      For now, even the Bumblebee team (I don't know whether they have newer information on this issue) doesn't seem to have any idea how the xserver changes will help the end user in the case of Optimus laptops.

      • #4
        Originally posted by chrisr View Post
        The main use-case for this "second GPU" is presumably laptop devices with dual graphics chips, but could it also help desktops with two graphics cards? I have a laptop and docking station, and the docking station has a PCIe slot containing a HD4650 card. I've been struggling for a while to get the HD4650 working in this configuration. (It certainly didn't help that the HD4650 needed 'BusID "12:0:0"' in its xorg.conf Device section instead of 'BusID "0c:0:0"', as per lspci output!). The HD4650 is currently connected to my TV, with the laptop's internal graphics chip bound to an external monitor.

        I had most success binding both of these screens together with Xinerama, but this was incompatible with Xrandr and so was hated by Gnome3. However, binding the HD4650 as a "DRI2 offload slave" instead sounds intriguing...
        Several things:

        1. These apply to multiple graphics cards regardless of whether it's a laptop or a desktop. Laptops are just the most common usage scenario.
        2. lspci uses hex; xorg.conf uses decimal (see the quick check below).
        3. Xinerama doesn't work with direct rendering or XRandR, so no 3D with Xinerama. GNOME 3 makes heavy use of 3D, so it won't work well with Xinerama. You should be able to use each card independently with its own X session, including 3D, if you disable Xinerama, but then you can't drag windows between heads.
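
        A quick check of point 2, using chrisr's bus address as the example (the 0x0c value below is just that example; substitute the bus number lspci reports for your own card):

          # lspci reports the bus/slot/function in hex:
          lspci | grep -i vga
          #   0c:00.0 VGA compatible controller: ...   (example output)

          # xorg.conf's BusID wants decimal, so hex bus 0c becomes 12
          # (bash's printf understands the 0x prefix):
          printf '%d:%d:%d\n' 0x0c 0x00 0x00      # prints 12:0:0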

        • #5
          Originally posted by Nepenthes View Post
          As I understand it, this is a great step forward, and getting these new features in the near future is very good news.
          But there is still no clear explanation of how these changes will affect the end-user experience.
          More precisely, regarding Optimus support with the nouveau driver: if it becomes possible (with GPU offloading in xserver 1.13), how do I ask my computer to "run this piece of software on the discrete GPU"?
          Yeah, that's kind of the point of this work. It decouples rendering and display in order to support things like Optimus.

          • #6
            noob question.

            this multi-GPU rendering and switching work by Dave is windowing-system independent, right?

            • #7
              Originally posted by 89c51 View Post
              noob question.

              this multi-GPU rendering and switching work by Dave is windowing-system independent, right?
              It's specific to X. The underlying technologies that enable it (kms, dma_buf, prime) are not tied to a specific windowing system, but all windowing systems that want to implement this will need something like the X work that Dave is currently working on. It's highly windowing system specific.

              • #8
                Originally posted by agd5f View Post
                It's specific to X. The underlying technologies that enable it (kms, dma_buf, prime) are not tied to a specific windowing system, but all windowing systems that want to implement this will need something like the X work that Dave is currently working on. It's highly windowing system specific.
                Thanks

                I remember that a long time ago there was a discussion on Wayland's mailing list (or someone mentioned talks with the GPU devs, I don't remember exactly) about supporting stuff like multi-monitor, multi-GPU, switching and other funky stuff, but I haven't heard anything since. Hence the question.

                • #9
                  To my eye

                  So you're using a low-power Intel card, and you (or your software's devs) have 2 options when you want greater rendering performance:
                  #2. Enable the 2nd GPU as a render slave and have both GPUs helping, knowing our program will end and we will want to go back to the lower-power GPU.
                  #3. Use a much riskier GPU switch, lose the additional render power of the slower GPU (vs #2), then do the risky switch back when done.

                  What's the benefit of #3 other than:
                  - being able to un-hotplug (remove) the "integrated" card (unlikely)
                  - power savings while running intensive rendering software

                  Why spend valuable XOrg dev time on this?
                  Last edited by snadrus; 06-08-2012, 12:06 PM. Reason: clarity

                  • #10
                    Originally posted by Nepenthes View Post
                    As I understand it, this is a great step forward, and getting these new features in the near future is very good news.
                    But there is still no clear explanation of how these changes will affect the end-user experience.
                    More precisely, regarding Optimus support with the nouveau driver: if it becomes possible (with GPU offloading in xserver 1.13), how do I ask my computer to "run this piece of software on the discrete GPU"?

                    I currently use Bumblebee (and I took a close look at how it works with VirtualGL and the proprietary nvidia drivers), and I look forward to the day I get the same level of functionality in a non-hackish way, with a working installation of xserver and mesa and open source drivers.
                    For now, even the Bumblebee team (I don't know whether they have newer information on this issue) doesn't seem to have any idea how the xserver changes will help the end user in the case of Optimus laptops.
                    The first implementation will just involve setting an env var, like DRI_PRIME=1; then I'm hopefully going to add a gnome-shell extension to make the same possible from a launcher of some sort.

                    After that people can do what they like ;-)

                    Dave.
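
                    To make that concrete, here is a rough sketch of what the env-var workflow might look like once this work and the matching xrandr/Mesa pieces land. The provider numbers and the application name below are placeholders for illustration; DRI_PRIME=1 is the variable Dave mentions, and the provider commands are the RandR 1.4 additions that go with this work:

                      # List RandR 1.4 providers (the GPU driving the display is usually 0,
                      # the discrete GPU 1 -- numbers depend on your machine)
                      xrandr --listproviders

                      # Mark the discrete GPU (provider 1) as a render offload source for
                      # the GPU driving the display (provider 0), if not set up automatically
                      xrandr --setprovideroffloadsink 1 0

                      # Run a single application on the discrete GPU; everything else stays
                      # on the integrated one
                      DRI_PRIME=1 glxinfo | grep "OpenGL renderer"
                      DRI_PRIME=1 ./mygame        # placeholder application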

                    • #11
                      Originally posted by snadrus View Post
                      So you're using a low-power Intel card, and you (or your software's devs) have 2 options when you want greater rendering performance:
                      #2. Enable the 2nd GPU as a render slave and have both GPUs helping, knowing our program will end and we will want to go back to the lower-power GPU.
                      #3. Use a much riskier GPU switch, lose the additional render power of the slower GPU (vs #2), then do the risky switch back when done.

                      What's the benefit of #3 other than:
                      - being able to un-hotplug (remove) the "integrated" card (unlikely)
                      - power savings while running intensive rendering software

                      Why spend valuable XOrg dev time on this?
                      The switching case is necessary for some older MUXed systems, and also:
                      lots of laptops have outputs connected to the nvidia on docking stations, and in order to use those outputs you have to really switch to running the nvidia card fully
                      (in some situations this also involves copying the panel frontbuffer back to the intel to display). Anything else would just suck. Eventually it might be possible with xinerama improvements to avoid this situation, but it's non-trivial.

                      Also, how valuable Xorg dev time gets used is decided by the developers and their employers, and both my employer and I would quite like to have this project finished already!
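
                      (For reference, and only as a sketch: on the MUXed machines mentioned above, the switch itself is today usually driven through the kernel's vga_switcheroo debugfs interface. The paths below assume that interface is enabled and debugfs is mounted; the writes need root.)

                        # Show the GPUs vga_switcheroo knows about and their power state
                        cat /sys/kernel/debug/vgaswitcheroo/switch

                        # Queue a switch to the discrete GPU (takes effect when X restarts)
                        echo DDIS > /sys/kernel/debug/vgaswitcheroo/switch
                        # Queue a switch back to the integrated GPU
                        echo DIGD > /sys/kernel/debug/vgaswitcheroo/switch
                        # Power off whichever GPU is currently unused
                        echo OFF > /sys/kernel/debug/vgaswitcheroo/switch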

                      • #12
                        Originally posted by airlied View Post
                        The first implementation will just involve setting an env var, like DRI_PRIME=1; then I'm hopefully going to add a gnome-shell extension to make the same possible from a launcher of some sort.

                        After that people can do what they like ;-)

                        Dave.
                        Thanks for the straight answer!

                        • #13
                          Originally posted by airlied View Post
                          The first implementation will just involve setting an env var, like DRI_PRIME=1; then I'm hopefully going to add a gnome-shell extension to make the same possible from a launcher of some sort.

                          After that people can do what they like ;-)

                          Dave.
                          I'm sold. The user side of things is going to be far more awesome than I was expecting.

                          I guess all that's left to do is have devs pester Nvidia to resubmit their EXPORT_SYMBOL patch. I need working VDPAU.
                          Last edited by LLStarks; 06-08-2012, 09:38 PM.

                          • #14
                            Will this work for systems that don't have "Optimus", like Ivy Bridge desktops that also happen to have a discrete GPU?

                            Also, I guess this doesn't even scratch the surface of the LucidLogix Virtu MVP hardware... would that hardware support provide any advantage if it were enabled? From what I understand, the only thing it would provide that we won't already have once this work is merged is that you could run a single application rendered by two cards simultaneously. So if you have a weak integrated GPU and a weak discrete GPU, you could combine them and get more FPS out of your app than either of the constituent GPUs could manage alone. It sounds kind of nice, but in my case I just use my ridiculously fast HD7970 whenever I need oomph -- and the Intel 3770K's graphics when I don't.

                            Oh, and the LucidLogix would also allow different applications to use different hardware within the same "X server" (on Windows it's on the same desktop). So you could create 4 hardware-accelerated windows on your desktop and have the top-left one only render on the IGP, the top-right only render on the discrete GPU, the bottom-left render on both, etc. Can we do this without the LucidLogix hardware support? Or would we need that hardware?

                            • #15
                              Originally posted by allquixotic View Post
                              Will this work for systems that don't have "Optimus", like Ivy Bridge desktops that also happen to have a discrete GPU?
                              Hmm, is it possible to disable a discrete PCIe card at runtime? That'd be interesting.

                              Otherwise, there's another scenario I was thinking of: if the discrete card can only handle two displays, could one use the integrated GPU as an output slave for a third display?
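
                              Assuming the integrated GPU can act as an output sink, that third-display case might look roughly like this with the RandR 1.4 provider commands this work adds (a sketch only; the provider names are placeholders and depend on the drivers in use):

                                # List providers; note which one advertises "Sink Output"
                                xrandr --listproviders

                                # Use the integrated GPU (placeholder name "Intel") as an output
                                # slave, with the discrete card (placeholder "radeon") doing the
                                # rendering
                                xrandr --setprovideroutputsource Intel radeon

                                # The integrated GPU's connector now shows up as a normal output
                                xrandr --auto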
