PRIME DRI2 Offloading Pulled Into X.Org Server


  • PRIME DRI2 Offloading Pulled Into X.Org Server

    Phoronix: PRIME DRI2 Offloading Pulled Into X.Org Server

    The X.Org Server 1.13 release for this September just got a lot more exciting now that David Airlie's patch-set for providing initial PRIME DRI2 off-loading and other modern GPU functionality was merged this weekend...


  • #2
    I don't think that's quite right:

    "patches he's been viciously working on"

    Voracious?



    • #3
      KUDOS! We all love you

      However,
      I'm not completely sure of the immediate implications: I know that this means one GPU can render while the other displays, and I know this works with DisplayLink multiseat devices (as seen in the video). Does this also mean that dual-GPU laptops (non-muxed) will work with X server 1.13? Maybe someone could sum it up; I got a bit lost among all the articles.

      Thanks.



      • #4
        So will this ongoing GPU work eventually help Nvidia Optimus users?







        • #5
          Originally posted by Serafean View Post
          KUDOS! We all love you

          However,
          I'm not completely sure of the immediate implications: I know that this means one GPU can render while the other displays, and I know this works with DisplayLink multiseat devices (as seen in the video). Does this also mean that dual-GPU laptops (non-muxed) will work with X server 1.13? Maybe someone could sum it up; I got a bit lost among all the articles.

          Thanks.
          I believe that the answer to this question is: Yes, dual-GPU laptops without a MUX should be usable with X server 1.13... probably with a bit of config work.

          In this case, you'd be using a dedicated Nvidia/AMD/etc GPU to render, and that GPU would forward the resulting buffer to the built-in Intel/whatever IGP for composition and display. As long as the drivers for both video devices support sharing buffers in the correct manner, this can happen (I'm assuming this requires GEM/TTM to work). I'm guessing that there's still user configuration required to determine which card to use when... A demo video from a while back hinted at environment variable changes to determine which card does the rendering work, but eventually this could probably be automated in some manner.

          Unfortunately, or fortunately, I don't have any hardware of my own to test this on, although maybe it's possible to get this working using the HD4200 IGP in my 785G motherboard for display along with my Radeon 6850 acting as the renderer/offload target.
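
          For anyone curious what the buffer sharing underneath looks like: libdrm exposes the PRIME/DMA-BUF calls that let one DRM device export a buffer and another import it. A minimal sketch follows; the device paths, the hard-coded GEM handle and the "card1 renders, card0 displays" split are my own assumptions for illustration, and error handling is trimmed.

          /* Minimal PRIME buffer-sharing sketch using libdrm.
           * Assumption: /dev/dri/card1 is the render GPU and /dev/dri/card0 is
           * the display IGP; the GEM handle is hard-coded purely to show the
           * flow (it would normally come from the driver's buffer allocation).
           * Build with: gcc prime.c $(pkg-config --cflags --libs libdrm) */
          #include <fcntl.h>
          #include <stdint.h>
          #include <stdio.h>
          #include <xf86drm.h>

          int main(void)
          {
              int render_fd  = open("/dev/dri/card1", O_RDWR);  /* discrete GPU renders */
              int display_fd = open("/dev/dri/card0", O_RDWR);  /* IGP scans the result out */
              if (render_fd < 0 || display_fd < 0) {
                  perror("open");
                  return 1;
              }

              uint32_t render_handle = 1;  /* placeholder GEM handle (assumption) */

              /* Export the rendered buffer from the render device as a DMA-BUF fd... */
              int prime_fd;
              if (drmPrimeHandleToFD(render_fd, render_handle, DRM_CLOEXEC, &prime_fd) != 0) {
                  perror("drmPrimeHandleToFD");
                  return 1;
              }

              /* ...and import it on the display device, which can then scan it out. */
              uint32_t display_handle;
              if (drmPrimeFDToHandle(display_fd, prime_fd, &display_handle) != 0) {
                  perror("drmPrimeFDToHandle");
                  return 1;
              }

              printf("shared buffer: render handle %u -> display handle %u\n",
                     render_handle, display_handle);
              return 0;
          }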



          • #6
            I believe that the answer to this question is: Yes, dual-GPU laptops without a MUX should be usable with X server 1.13... probably with a bit of config work.
            This is fantastic news!

            Unfortunately, or fortunately, I don't have any hardware of my own to test this on, although maybe it's possible to get this working using the HD4200 IGP in my 785G motherboard for display along with my Radeon 6850 acting as the renderer/offload target.
            As I understand it, this framework is very generic... This means that the offloading can be done in any way you want, no? (Meaning the IGP renders and the 6850 displays, or the other way around.)

            This is what I love about Linux/open source: it usually takes longer, but when something is implemented, it feels as though no corners were cut (meaning it works in a very generic way, leaving a lot of options).

            Now I guess we need to wait for DEs to implement the new RandR protocol to make it user-friendly...
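
            For reference, the provider side of that new RandR (1.4) protocol is already queryable from a client. Here's a rough sketch with libXrandr, assuming a libXrandr new enough to expose the provider calls; the "provider 1 as source, provider 0 as sink" pairing in the comment is purely illustrative.

            /* List RandR 1.4 providers and their offload capabilities.
             * Build with: gcc providers.c -lX11 -lXrandr
             * Sketch only: a real DE would match providers by capability
             * instead of using fixed indices. */
            #include <stdio.h>
            #include <X11/Xlib.h>
            #include <X11/extensions/Xrandr.h>

            int main(void)
            {
                Display *dpy = XOpenDisplay(NULL);
                if (!dpy)
                    return 1;

                Window root = DefaultRootWindow(dpy);
                XRRScreenResources *res = XRRGetScreenResources(dpy, root);
                XRRProviderResources *pr = XRRGetProviderResources(dpy, root);

                for (int i = 0; i < pr->nproviders; i++) {
                    XRRProviderInfo *info = XRRGetProviderInfo(dpy, res, pr->providers[i]);
                    printf("provider %d: %s (offload source: %s, offload sink: %s)\n", i,
                           info->name,
                           (info->capabilities & RR_Capability_SourceOffload) ? "yes" : "no",
                           (info->capabilities & RR_Capability_SinkOffload) ? "yes" : "no");
                    XRRFreeProviderInfo(info);
                }

                /* A DE (or a config tool) would then wire a source to a sink, e.g.:
                 *   XRRSetProviderOffloadSink(dpy, pr->providers[1], pr->providers[0]);
                 * which asks the server to let provider 1 render for provider 0. */

                XRRFreeProviderResources(pr);
                XRRFreeScreenResources(res);
                XCloseDisplay(dpy);
                return 0;
            }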

            Another dumb question: I have seen SLI/Crossfire mentioned along with this work, but I don't really see how it could help there.



            • #7
              What about Wayland?

              What about GPU switching, GPU offloading, multi-GPU, etc for Wayland?



              • #8
                Originally posted by Serafean View Post
                Another dumb question: I have seen SLI/Crossfire mentioned along with this work, I don't really see how it could help there.
                Right now, not at all, but in the long run the buffer sharing part of this is one of several required steps to get SLI/Crossfire working.

                Originally posted by uid313 View Post
                What about GPU switching, GPU offloading, multi-GPU, etc for Wayland?
                All kernel, libdrm and Mesa work is shared, and the rest is (comparatively) trivial to do in Wayland too once you've already got it working in X.Org. You can't copy-paste code from X.Org to Wayland, but most of the architectural design work can be reused.



                • #9
                  Originally posted by phoronix View Post
                  Phoronix: PRIME DRI2 Offloading Pulled Into X.Org Server

                  The X.Org Server 1.13 release for this September just got a lot more exciting now that David Airlie's patch-set for providing initial PRIME DRI2 off-loading and other modern GPU functionality was merged this weekend...

                  http://www.phoronix.com/vr.php?view=MTEzNjE
                  Does anyone understand how they manage the GPU hotplug? My understanding of PCIe is that its architecture doesn't support those kinds of changes while running. Are all the devices actually activated at boot, but then unused ones put into the lowest power state until needed?



                  • #10
                    Originally posted by liam View Post
                    Does anyone understand how they manage the GPU hotplug? My understanding of PCIe is that its architecture doesn't support those kinds of changes while running. Are all the devices actually activated at boot, but then unused ones put into the lowest power state until needed?
                    I don't know about PCIe hotplug support. However, I'm fairly sure the point of this hotplug support is to allow USB graphics devices to be hotplugged.
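
                    For what it's worth, that kind of runtime device arrival is visible to userspace through udev, which is also how the X server notices new DRM nodes. Below is a small sketch with libudev that just prints DRM add/remove events; it is illustrative only, and a USB DisplayLink adapter being plugged in would show up as a new /dev/dri/card* node.

                    /* Watch for DRM device hotplug events via libudev.
                     * Build with: gcc drm-hotplug.c -ludev
                     * Sketch only: prints the action and device node for each event. */
                    #include <stdio.h>
                    #include <poll.h>
                    #include <libudev.h>

                    int main(void)
                    {
                        struct udev *udev = udev_new();
                        struct udev_monitor *mon = udev_monitor_new_from_netlink(udev, "udev");

                        /* Only care about the "drm" subsystem (/dev/dri/card*). */
                        udev_monitor_filter_add_match_subsystem_devtype(mon, "drm", NULL);
                        udev_monitor_enable_receiving(mon);

                        struct pollfd pfd = { .fd = udev_monitor_get_fd(mon), .events = POLLIN };
                        printf("waiting for DRM hotplug events...\n");

                        while (poll(&pfd, 1, -1) > 0) {
                            struct udev_device *dev = udev_monitor_receive_device(mon);
                            if (!dev)
                                continue;
                            printf("%s: %s\n", udev_device_get_action(dev),
                                   udev_device_get_devnode(dev) ? udev_device_get_devnode(dev)
                                                                : "(no devnode)");
                            udev_device_unref(dev);
                        }
                        return 0;
                    }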

