DRI2 Offload Slaves, Output Slaves For September

  • DRI2 Offload Slaves, Output Slaves For September

    Phoronix: DRI2 Offload Slaves, Output Slaves For September

    Keith Packard has set out his plans for releasing X.Org Server 1.13 this September. There are also some interesting plans expressed by David Airlie for what this update could potentially provide for Linux desktop users...

  • #2
    This "second GPU" stuff sounds interesting.

    The main use-case for this "second GPU" is presumably laptop devices with dual graphics chips, but could it also help desktops with two graphics cards? I have a laptop and docking station, and the docking station has a PCIe slot containing an HD4650 card. I've been struggling for a while to get the HD4650 working in this configuration. (It certainly didn't help that the HD4650 needed 'BusID "12:0:0"' in its xorg.conf Device section instead of 'BusID "0c:0:0"', as per the lspci output!) The HD4650 is currently connected to my TV, with the laptop's internal graphics chip bound to an external monitor.

    I had the most success binding both of these screens together with Xinerama, but that was incompatible with XRandR and so was hated by GNOME 3. However, binding the HD4650 as a "DRI2 offload slave" instead sounds intriguing...

    • #3
      As I understand it, this is a great step forward, and getting these new features in the near future is very good news.
      But there is still no clear explanation of how these changes will affect the end-user experience.
      More precisely, regarding Optimus support with the nouveau driver: if it is possible (with GPU offloading in xserver 1.13), how do I ask my computer: "run this piece of software on the discrete GPU"?

      I currently use Bumblebee (and I took a close look at how it works with VirtualGL and the proprietary nvidia drivers), and I look forward to the day I get the same level of functionality in a non-hackish way, with a working installation of the xserver and Mesa, and open-source drivers.
      For now, even the Bumblebee team (I don't know if they have new information about this issue) doesn't seem to have any idea how the xserver changes will help the end user in the case of Optimus laptops.

      • #4
        Originally posted by chrisr View Post
        The main use-case for this "second GPU" is presumably laptop devices with dual graphics chips, but could it also help desktops with two graphics cards? I have a laptop and docking station, and the docking station has a PCIe slot containing an HD4650 card. I've been struggling for a while to get the HD4650 working in this configuration. (It certainly didn't help that the HD4650 needed 'BusID "12:0:0"' in its xorg.conf Device section instead of 'BusID "0c:0:0"', as per the lspci output!) The HD4650 is currently connected to my TV, with the laptop's internal graphics chip bound to an external monitor.

        I had the most success binding both of these screens together with Xinerama, but that was incompatible with XRandR and so was hated by GNOME 3. However, binding the HD4650 as a "DRI2 offload slave" instead sounds intriguing...
        Several things:

        1. These apply to multiple graphics cards regardless of whether it's a laptop or a desktop. Laptops are just the most common usage scenario.
        2. lspci uses hex; xorg.conf uses decimal (see the example Device section below).
        3. Xinerama doesn't work with direct rendering or XRandR, so no 3D with Xinerama. GNOME 3 makes heavy use of 3D, so it won't work well with Xinerama. You should be able to use each card independently with its own X session, including 3D, if you disable Xinerama, but then you can't drag windows between heads.
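
        For reference, a hedged sketch of what that decimal BusID looks like in an xorg.conf Device section. This is not chrisr's actual config; the Identifier and Driver values are placeholders, and chrisr reported the bare "12:0:0" form working, while the commonly documented form carries a "PCI:" prefix:

        Section "Device"
            Identifier "Discrete-HD4650"   # placeholder name
            Driver     "radeon"            # open-source driver for an HD4650
            # lspci reports the card at 0c:00.0 (hexadecimal);
            # xorg.conf expects decimal, so bus 0x0c becomes 12.
            # chrisr reported the bare "12:0:0" form working as well.
            BusID      "PCI:12:0:0"
        EndSection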

        • #5
          Originally posted by Nepenthes View Post
          As I understand it, this is a great step forward, and getting these new features in the near future is very good news.
          But there is still no clear explanation of how these changes will affect the end-user experience.
          More precisely, regarding Optimus support with the nouveau driver: if it is possible (with GPU offloading in xserver 1.13), how do I ask my computer: "run this piece of software on the discrete GPU"?
          Yeah, that's kind of the point of this work. It decouples rendering and display in order to support things like Optimus.

          • #6
            noob question.

            This multi-GPU rendering and switching work by Dave is windowing-system independent, right?

            • #7
              Originally posted by 89c51 View Post
              noob question.

              This multi-GPU rendering and switching work by Dave is windowing-system independent, right?
              It's specific to X. The underlying technologies that enable it (KMS, dma_buf, PRIME) are not tied to a specific windowing system, but every windowing system that wants to implement this will need something like the X work that Dave is currently doing. It's highly windowing-system specific.
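
              To make the dma_buf/PRIME part concrete, here is a minimal userspace sketch of the underlying primitive: a buffer created on one DRM device is exported as a dma-buf file descriptor and imported on the other. This is only an illustration, not Dave's X server code; the device paths, the buffer size, and the use of a dumb buffer are assumptions, and it needs suitable permissions on the DRM nodes.

              /* Build (assumed): gcc prime_share.c -o prime_share $(pkg-config --cflags --libs libdrm) */
              #include <fcntl.h>
              #include <stdint.h>
              #include <stdio.h>
              #include <sys/ioctl.h>
              #include <xf86drm.h>   /* libdrm: drmPrimeHandleToFD(), drmPrimeFDToHandle() */

              int main(void)
              {
                  /* Assumed device paths: card1 = rendering GPU, card0 = displaying GPU. */
                  int render_fd  = open("/dev/dri/card1", O_RDWR);
                  int display_fd = open("/dev/dri/card0", O_RDWR);
                  if (render_fd < 0 || display_fd < 0) {
                      perror("open /dev/dri/card*");
                      return 1;
                  }

                  /* Create a dumb (unaccelerated) buffer on the render device,
                   * just so we have a GEM handle to share. */
                  struct drm_mode_create_dumb buf = { .width = 640, .height = 480, .bpp = 32 };
                  if (ioctl(render_fd, DRM_IOCTL_MODE_CREATE_DUMB, &buf)) {
                      perror("DRM_IOCTL_MODE_CREATE_DUMB");
                      return 1;
                  }

                  /* PRIME export: turn the GEM handle into a dma-buf file descriptor. */
                  int prime_fd;
                  if (drmPrimeHandleToFD(render_fd, buf.handle, DRM_CLOEXEC, &prime_fd)) {
                      perror("drmPrimeHandleToFD");
                      return 1;
                  }

                  /* PRIME import: the display device gets its own handle to the same memory. */
                  uint32_t display_handle;
                  if (drmPrimeFDToHandle(display_fd, prime_fd, &display_handle)) {
                      perror("drmPrimeFDToHandle");
                      return 1;
                  }

                  printf("render handle %u -> dma-buf fd %d -> display handle %u\n",
                         buf.handle, prime_fd, display_handle);
                  return 0;
              }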

              • #8
                Originally posted by agd5f View Post
                It's specific to X. The underlying technologies that enable it (KMS, dma_buf, PRIME) are not tied to a specific windowing system, but every windowing system that wants to implement this will need something like the X work that Dave is currently doing. It's highly windowing-system specific.
                Thanks

                I remember that a long time ago there was a discussion on Wayland's mailing list (or someone mentioned talks with the GPU devs, I don't remember exactly) about supporting stuff like multi-monitor, multi-GPU, switching and other funky stuff, but I haven't heard anything since. Hence the question.

                • #9
                  To my eye:

                  So you're using a low-power Intel card, and you (or your software's devs) have two options when you want greater rendering power:
                  #2. Enable the second GPU as a render slave and have both GPUs helping, knowing that the program will end and we will want to go back to the lower-power GPU.
                  #3. Use a much riskier full GPU switch, lose the additional render power of the slower GPU (versus #2), then make another risky switch back when done.

                  What's the benefit of #3, apart from:
                  - being able to un-hotplug (remove) the "integrated" card (unlikely)
                  - power savings while running intensive rendering software?

                  Why spend valuable X.Org dev time on this?
                  Last edited by snadrus; 08 June 2012, 12:06 PM. Reason: clarity

                  • #10
                    Originally posted by Nepenthes View Post
                    As I understand it, this is a great step forward, and getting these new features in the near future is very good news.
                    But there is still no clear explanation of how these changes will affect the end-user experience.
                    More precisely, regarding Optimus support with the nouveau driver: if it is possible (with GPU offloading in xserver 1.13), how do I ask my computer: "run this piece of software on the discrete GPU"?

                    I currently use Bumblebee (and I took a close look at how it works with VirtualGL and the proprietary nvidia drivers), and I look forward to the day I get the same level of functionality in a non-hackish way, with a working installation of the xserver and Mesa, and open-source drivers.
                    For now, even the Bumblebee team (I don't know if they have new information about this issue) doesn't seem to have any idea how the xserver changes will help the end user in the case of Optimus laptops.
                    The first implementation will just involve setting an env var, like DRI_PRIME=1; then I'm hopefully going to add a gnome-shell extension to make the same thing possible from a launcher of some sort.

                    After that people can do what they like ;-)

                    Dave.
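
                    To give an idea of what that looks like in practice, here is a minimal usage sketch of the env-var approach Dave describes (glxinfo and glxgears are just example clients here, and the exact variable name and behaviour will be whatever ships in the final code):

                    # default: the integrated GPU renders
                    glxinfo | grep "OpenGL renderer"
                    # with the offload slave configured, ask for the discrete GPU instead
                    DRI_PRIME=1 glxinfo | grep "OpenGL renderer"
                    DRI_PRIME=1 glxgears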
