Intel Driver Gains Virtual/Remote Output Support


  • ickle
    replied
    Originally posted by oszillo View Post
    Integrating the support for virtual outputs into the Intel drivers is an enormous step forward in making multi-GPU setups usable and maintainable. Previously I played around with the "virtual CRTC" patch, which worked to some degree. At least until one decided to upgrade some component... So I'm really happy about having this feature readily available in the Intel driver!
    As Gentoo already has an updated ebuild for the new version, I gave it a try today following the steps outlined above. Setting it up works perfectly, including activating the screen connected to the NVIDIA GPU. The external display turns on and shows the background image of my desktop. Moving the mouse to the second screen also works, but unfortunately that's it. Whatever window I drag over to the external screen is not shown. Neither do any popup menus of my desktop environment work on the second display. No idea what's going on there.
    Oh, that's weird. You get the background image and mouse, but nothing more. Can you please open a bug on bugs.fd.o so that I do not lose track of this while I think about what the problem is likely to be.



  • oszillo
    replied
    Originally posted by ickle View Post
    By the point where I was comfortable writing this update, the process for using the outputs on the nvidia card was:
    0. Install the latest Intel drivers
    1. apt-get install bumblebee-nvidia
    2. Modify the bumblebee configuration to enable outputs on the discrete GPU
    [...]
    3. Run intel-virtual-output [which automatically detects bumblebee and requests an Xserver for the Nvidia GPU]
    Integrating the support for virtual outputs into the Intel drivers is an enormous step forward in making multi-GPU setups usable and maintainable. Previously I played around with the "virtual CRTC" patch, which worked to some degree. At least until one decided to upgrade some component... So I'm really happy about having this feature readily available in the Intel driver!
    As Gentoo already has an updated ebuild for the new version, I gave it a try today following the steps outlined above. Setting it up works perfectly, including activating the screen connected to the NVIDIA GPU. The external display turns on and shows the background image of my desktop. Moving the mouse to the second screen also works, but unfortunately that's it. Whatever window I drag over to the external screen is not shown. Neither do any popup menus of my desktop environment work on the second display. No idea what's going on there.



  • ickle
    replied
    Originally posted by Nepenthes View Post
    So, where do we start if we want to try this? Assuming it lands soon in xorg-edgers, what tools do we need to get it running (bbswitch to wake up the discrete GPU, some tool to configure the virtualhead, to make sure the application OpenGL rendering is done by the nvidia card, then kill the virtualhead and bbswitch again to switch off the discrete GPU)?
    By the point where I was comfortable writing this update, the process for using the outputs on the nvidia card was:
    0. Install the latest Intel drivers
    1. apt-get install bumblebee-nvidia
    2. Modify the bumblebee configuration to enable outputs on the discrete GPU

    Code:
    --- xorg.conf.nvidia.orig	2013-09-02 23:06:04.628948519 +0000
    +++ xorg.conf.nvidia	2013-09-02 21:16:42.617008324 +0000
    @@ -29,6 +29,6 @@
         Option "ProbeAllGpus" "false"
     
         Option "NoLogo" "true"
    -    Option "UseEDID" "false"
    -    Option "UseDisplayDevice" "none"
    +    #Option "UseEDID" "false"
    +    #Option "UseDisplayDevice" "none"
     EndSection
    3. Run intel-virtual-output [which automatically detects bumblebee and requests an Xserver for the Nvidia GPU]
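    The steps above can be sketched as a single script. This is a minimal sketch, not part of the original post: the config path /etc/bumblebee/xorg.conf.nvidia is an assumption (it varies by distribution), and the script defaults to a dry run that only prints the commands it would execute.

```shell
#!/bin/sh
# Hedged sketch of the bumblebee + intel-virtual-output setup steps.
# ASSUMPTIONS: Debian/Ubuntu packaging, bumblebee config at
# /etc/bumblebee/xorg.conf.nvidia. Defaults to a dry run.
set -e
: "${DRY_RUN:=1}"   # set DRY_RUN=0 to actually execute the commands

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "+ $*"     # print the command instead of running it
    else
        "$@"
    fi
}

# 1. install the nvidia-flavoured bumblebee packaging
run sudo apt-get install bumblebee-nvidia

# 2. comment out the options that disable outputs on the discrete GPU
#    (the same edit as the diff above, applied with sed)
run sudo sed -i \
    -e 's|^\( *Option "UseEDID"\)|#\1|' \
    -e 's|^\( *Option "UseDisplayDevice"\)|#\1|' \
    /etc/bumblebee/xorg.conf.nvidia

# 3. start the proxy; it detects bumblebee, requests an X server for
#    the nvidia GPU, and exposes its outputs as virtual heads
run intel-virtual-output
```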

    Could this be mixed with the new nvidia RandR 1.4 support (can we get the Nvidia card to render directly to the virtualhead, with the new zero-copy option?)
    There is nothing stopping you from trying... In theory, PRIME should be able to negotiate zero-copy support just as well through the kernel (and hopefully control the synchronisation better, and so be easier to use, perform better, and integrate better into power management, etc.).



  • Nepenthes
    replied
    So, where do we start if we want to try this? Assuming it lands soon in xorg-edgers, what tools do we need to get it running (bbswitch to wake up the discrete GPU, some tool to configure the virtualhead, to make sure the application OpenGL rendering is done by the nvidia card, then kill the virtualhead and bbswitch again to switch off the discrete GPU)?

    Could this be mixed with the new nvidia RandR 1.4 support (can we get the nvidia card to render directly to the virtualhead, with the new zero-copy option?)
    Last edited by Nepenthes; 02 September 2013, 11:17 AM.



  • ickle
    replied
    Originally posted by aelo View Post
    But I have a question about this new feature: does this allow me to use the DisplayPort connection on my notebook?
    Because it is hardwired to the nvidia chip and up until now there hasn't been a way to use it on Linux (afaik) with the Intel driver. I interpreted the news to mean this could finally be possible :-) but I'm now a little bit confused because you said that it doesn't add features which did not exist before.
    What the commits to the Intel DDX do is simplify the Bumblebee approach of using the binary nvidia driver to create a second X server to control the external GPU and displays, but present that as an extension to the first X server (using -intel). (With a standard setup this should be as easy as startx & intel-virtual-output, which presumes that the X server finds both GPUs and assigns :0.0 to -intel and :0.1 to -nvidia.)

    The alternative approach is to use -nouveau and PRIME.
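    The nouveau/PRIME alternative is typically wired up through the RandR 1.4 provider commands. A hedged sketch follows; the provider names "nouveau" and "Intel" are placeholders (the real names come from `xrandr --listproviders` on the actual machine), so the function only prints the candidate commands rather than running them, since they need a live X session with both providers present.

```shell
#!/bin/sh
# Sketch of the RandR 1.4 provider commands for the nouveau + PRIME
# path. Provider names are ASSUMED placeholders; query them first.
prime_provider_cmds() {
    # list the providers the running X server knows about
    echo "xrandr --listproviders"
    # render on nouveau, display via the Intel GPU (render offload)
    echo "xrandr --setprovideroffloadsink nouveau Intel"
    # drive outputs wired to the nvidia GPU from the Intel desktop
    echo "xrandr --setprovideroutputsource nouveau Intel"
}
prime_provider_cmds
```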



  • aelo
    replied
    Hi,

    Thanks for your great work on the Intel graphics driver! My Thinkpad T430 works very well with the Intel open-source driver!

    But I have a question about this new feature: does this allow me to use the DisplayPort connection on my notebook?
    Because it is hardwired to the nvidia chip and up until now there hasn't been a way to use it on Linux (afaik) with the Intel driver. I interpreted the news to mean this could finally be possible :-) but I'm now a little bit confused because you said that it doesn't add features which did not exist before.

    Thanks in advance!
    Kind regards
    Michael



  • ickle
    replied
    Originally posted by dh04000 View Post
    So it allows Bumblebee-Project-like projects to work natively with the Intel driver if one doesn't want to use nvidia's PRIME or AMD's solution? Am I getting that right?
    Yes. It brings the existing code that people are currently using into the upstream driver. I expect that it will be replaced by real integration between the drivers, but since it added very little maintenance burden and looks to be a useful tool, it seemed acceptable upstream.



  • dh04000
    replied
    Originally posted by ickle View Post
    None at all. It doesn't claim to do anything new or in a better way. Somebody asked me why their Bumblebee integration code didn't work and showed me the hack they were using. Since, as it turned out, I had very similar code in the driver for another purpose, I made that piece of code replace their hack.

    Yes, this is only for those people who choose not to use PRIME. The kernel-level integration with PRIME is the right approach from the performance, power and usability standpoint. And the VirtualHeads can be made driver-independent by building that functionality into the Xserver (along with the external transport process) and using providers - i.e. accelerated Xvnc.
    So it allows Bumblebee-Project-like projects to work natively with the Intel driver if one doesn't want to use nvidia's PRIME or AMD's solution? Am I getting that right?



  • ickle
    replied
    Originally posted by Nepenthes View Post
    I'm not sure I'm actually getting what this commit is about.
    I've been familiar with Bumblebee, VirtualGL and Primus for quite a while, and now I'm using nvidia-prime, so I understand the difference between what Bumblebee does and what PRIME does (the zero-copy performance advantage...).
    But here I don't get it... What does it do exactly? Can it be set up as a transport option for Bumblebee (with no other, or just a little, piece of code needed)? Is there any performance gain compared to Primus? Does it allow switching off the nvidia card when you're not using it?
    None at all. It doesn't claim to do anything new or in a better way. Somebody asked me why their Bumblebee integration code didn't work and showed me the hack they were using. Since, as it turned out, I had very similar code in the driver for another purpose, I made that piece of code replace their hack.

    Yes, this is only for those people who choose not to use PRIME. The kernel-level integration with PRIME is the right approach from the performance, power and usability standpoint. And the VirtualHeads can be made driver-independent by building that functionality into the Xserver (along with the external transport process) and using providers - i.e. accelerated Xvnc.



  • Nepenthes
    replied
    I'm not sure I'm actually getting what this commit is about.
    I've been familiar with Bumblebee, VirtualGL and Primus for quite a while, and now I'm using nvidia-prime, so I understand the difference between what Bumblebee does and what PRIME does (the zero-copy performance advantage...).
    But here I don't get it... What does it do exactly? Can it be set up as a transport option for Bumblebee (with no other, or just a little, piece of code needed)? Is there any performance gain compared to Primus? Does it allow switching off the nvidia card when you're not using it?

