Experimental Code Published For Virtual CRTCs


  • Experimental Code Published For Virtual CRTCs

    Phoronix: Experimental Code Published For Virtual CRTCs

    If you're interested in multi-GPU rendering, capabilities for DisplayLink-like devices, or NVIDIA Optimus / MUX-less hybrid graphics switching, here's some news worth reading about virtual CRTCs...

    http://www.phoronix.com/vr.php?view=MTAxMDk

  • #2
    I don't think I understand the point of this: why render something on one GPU but send the video signal to another? That would still require both GPUs to be active at the same time, while the second one is doing almost no work at all.


    • #3
      I can think of several cases where this would be desired:

      1. When using DisplayLink devices: I could add more monitors to my computer using DisplayLink devices (USB display adapters without any kind of 3D acceleration), and the main GPU would handle 3D for all monitors, sending the rendered image to the USB device(s).

      2. For laptops with multiple GPUs: some laptops have a weak onboard GPU that is used normally to save power, and a more powerful GPU for when advanced 3D is needed. At the moment this doesn't work on Linux; this setup would allow the powerful GPU, when needed, to render the 3D and send the output to the standard one.
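
      The copy step at the heart of both cases can be sketched abstractly: one GPU renders a frame, and the driver then copies that frame into the scanout buffer of the device that actually owns the display. A toy Python model of that flow (all class names here are hypothetical illustrations, not a real DRM or DisplayLink API):

```python
# Toy model of virtual-CRTC output offload: one GPU renders,
# another device merely scans out the copied frame.
# All classes here are hypothetical sketches, not a real driver API.

class RenderGPU:
    """Stands in for the GPU that has a 3D engine."""
    def render_frame(self, width, height):
        # Produce a dummy frame as a flat list of pixel values.
        return [(x ^ y) & 0xFF for y in range(height) for x in range(width)]

class DisplayOnlyDevice:
    """Stands in for a DisplayLink-style device: scanout only, no 3D."""
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.scanout = [0] * (width * height)

    def present(self, frame):
        # In real hardware this copy would be a per-vblank DMA/USB transfer.
        self.scanout[:] = frame

gpu = RenderGPU()
usb_display = DisplayOnlyDevice(64, 32)
frame = gpu.render_frame(usb_display.width, usb_display.height)
usb_display.present(frame)
assert usb_display.scanout == frame
```

      The cost of this scheme is exactly the objection raised above: the rendering GPU does all the work and the display-side device only receives copies, which is why it mainly makes sense when the display device has no 3D engine of its own.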


      • #4
        The point is to use the GPU to render the output for all monitors and send it to the other devices that don't have 3D acceleration (like DisplayLink USB devices).


        • #5
          Originally posted by faustop
          The point is to use the GPU to render the output for all monitors and send it to the other devices that don't have 3D acceleration (like DisplayLink USB devices).
          Hmm... in that case, I believe this could actually be the solution to GPU passthrough in virtual machines. Here's how it would work:

          You would need at least two GPUs: one for your main display, the other handed to the VM via PCI passthrough. Currently, in VMs like VirtualBox, the GPU can be recognized but can't be used for an active display. By letting that passed-through GPU do the rendering, the virtual GPU could display the rendered output.

          If for some reason that didn't work, there's another idea: maybe it's possible to send the render data through the VM to the host, so the host GPU can render and send the output back to the virtual GPU.

          If anyone can confirm that this is possible, I'd be very excited.


          • #6
            Originally posted by schmidtbag
            I don't think I understand the point of this: why render something on one GPU but send the video signal to another? That would still require both GPUs to be active at the same time, while the second one is doing almost no work at all.
            Several reasons:
            1. To provide 3D acceleration for devices without 3D engines (e.g., USB video devices). The GPU with the 3D engine could render the desktop for both itself and the non-accelerated device, allowing you to play games, use desktop effects, etc. even on non-accelerated displays.
            2. To support multiple GPUs in laptops for maximum performance. This is really just a variant of 1. On MUX-less laptops you have an integrated GPU attached to the displays and a discrete GPU that is not attached to any displays. If you want maximum 3D performance, you render using the discrete GPU and display the image on the integrated one; if you want maximum power savings, you render with the integrated GPU and turn off the discrete one.

            The lack of a MUX saves money and complexity: you don't have to switch drivers depending on which GPU is driving the displays, and the OEM saves a few cents by not needing the actual MUX hardware. On some hybrid laptops you even have different sets of displays attached to different GPUs (e.g., the laptop panel attached to the integrated GPU; the HDMI and DP ports attached to the discrete GPU), on the assumption that you'd want maximum performance when plugged into an external monitor and maximum power savings when using the laptop panel.
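
            The MUX-less trade-off described above reduces to a simple policy: the panel's owner never changes, only the renderer does. A minimal sketch of that decision, with hypothetical GPU names rather than any real driver interface:

```python
# Hypothetical sketch of the MUX-less render/display split described above:
# the integrated GPU always drives the panel; only the renderer changes.

def pick_gpus(need_performance):
    """Return (render_gpu, display_gpu) for a MUX-less hybrid laptop."""
    display_gpu = "integrated"      # the panel is hard-wired here; there is no MUX
    if need_performance:
        render_gpu = "discrete"     # render fast, copy frames over for scanout
    else:
        render_gpu = "integrated"   # render locally, power down the discrete GPU
    return render_gpu, display_gpu

assert pick_gpus(True) == ("discrete", "integrated")
assert pick_gpus(False) == ("integrated", "integrated")
```

            Because the display GPU is fixed, switching modes never requires a driver handoff for the panel, which is exactly the "saves money and complexity" point above.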


            • #7
              It'd also be useful to create X servers with virtual CRTCs and expose them via a VNC or RDP server. Great for headless machines. Getting them to start X and create a proper framebuffer without any monitors attached is somewhat painful.

              Now if someone added multiseat support to run multiple X servers on a single GPU, you could have multiple accelerated X servers on a single GPU running at the same time. Some of these could host virtual machines. That might be pretty useful.


              • #8
                Does this open up the possibility of elegantly enabling multi-GPU scaling (i.e., SLI-style rendering) in FOSS drivers?


                • #9
                  What I'd like to know is whether this could let me use my dual graphics cards for multiple displays in KDE the same way I can in Windows, including dragging windows between them, with 3D acceleration on at least one monitor.


                  • #10
                    Originally posted by PreferLinux
                    What I'd like to know is whether this could let me use my dual graphics cards for multiple displays in KDE the same way I can in Windows, including dragging windows between them, with 3D acceleration on at least one monitor.
                    You already can; it's called Xinerama. Last I checked there were some problems when combining Xinerama with compositing, but if you turn compositing off, you'll have 3D support on both monitors, at the expense of some performance issues for windows spanning multiple displays.

                    Of course, you could use virtual CRTCs to render everything on one GPU, then forward the framebuffer to the secondary GPU for display. But then what's the point of the second GPU? Are you running out of connectors on your primary GPU?
