Intel Driver Gains Virtual/Remote Output Support

  • Intel Driver Gains Virtual/Remote Output Support

    Phoronix: Intel Driver Gains Virtual/Remote Output Support

    The Intel X.Org driver has gained virtual output support to extend the local desktop with remote outputs. Simply put, this can help NVIDIA Optimus/Bumblebee users on Linux...


  • #2
    Wait... I thought the Intel GPU could already accept video from another local GPU, which was what PRIME was for. How does this differ?

    • #3
      I'm not sure I actually get what this commit is about.
      I've been familiar with Bumblebee, VirtualGL and Primus for quite a while, and now I'm using nvidia-prime, so I understand the difference between what Bumblebee does and what PRIME does (the zero-copy performance advantage...).
      But here I don't get it... What does it do exactly? Can it be set up as a transport option for Bumblebee (with no, or only a little, extra code needed)? Is there any performance gain compared to Primus? Does it allow switching off the nvidia card when you're not using it?

      • #4
        Originally posted by Nepenthes View Post
        I'm not sure I actually get what this commit is about.
        I've been familiar with Bumblebee, VirtualGL and Primus for quite a while, and now I'm using nvidia-prime, so I understand the difference between what Bumblebee does and what PRIME does (the zero-copy performance advantage...).
        But here I don't get it... What does it do exactly? Can it be set up as a transport option for Bumblebee (with no, or only a little, extra code needed)? Is there any performance gain compared to Primus? Does it allow switching off the nvidia card when you're not using it?
        None at all. It doesn't claim to do anything new or in a better way. Somebody asked me why their Bumblebee integration code didn't work and showed me the hack they were using. Since, as it turned out, I had very similar code in the driver for another purpose, I made that piece of code replace their hack.

        Yes, this is only for those people who choose not to use PRIME. The kernel level integration with PRIME is the right approach from the performance, power and usability standpoint. And the VirtualHeads can be made driver independent by building that functionality into the Xserver (along with the external transport process) and using providers - i.e. accelerated Xvnc.

        • #5
          Originally posted by ickle View Post
          None at all. It doesn't claim to do anything new or in a better way. Somebody asked me why their Bumblebee integration code didn't work and showed me the hack they were using. Since, as it turned out, I had very similar code in the driver for another purpose, I made that piece of code replace their hack.

          Yes, this is only for those people who choose not to use PRIME. The kernel level integration with PRIME is the right approach from the performance, power and usability standpoint. And the VirtualHeads can be made driver independent by building that functionality into the Xserver (along with the external transport process) and using providers - i.e. accelerated Xvnc.
          So it allows TheBumblebeeProject-like projects to work natively with the intel driver if one doesn't want to use nvidia's PRIME or amd's solution? Am I getting that right?

          • #6
            Originally posted by dh04000 View Post
            So it allows TheBumblebeeProject-like projects to work natively with the intel driver if one doesn't want to use nvidia's PRIME or amd's solution? Am I getting that right?
            Yes. It incorporates the existing code that people are currently using upstream. I expect that it will be replaced by real integration between the drivers, but since it added very little maintenance burden, and looks to be a useful tool, it looked acceptable to upstream.

            • #7
              Hi,

              Thanks for your great work on the Intel graphics driver! My Thinkpad T430 works very well with the intel open source driver!

              But I have a question about this new feature: does this allow me to use the DisplayPort connector on my notebook?
              It is hardwired to the nvidia chip, and up until now there hasn't been a way to use it on Linux (afaik) with the intel driver. I interpreted the news as meaning this could finally be possible :-) but I'm now a little bit confused because you said that it doesn't add features which didn't exist before.

              Thanks in advance!
              Kind regards
              Michael

              • #8
                Originally posted by aelo View Post
                But I have a question about this new feature: does this allow me to use the DisplayPort connector on my notebook?
                It is hardwired to the nvidia chip, and up until now there hasn't been a way to use it on Linux (afaik) with the intel driver. I interpreted the news as meaning this could finally be possible :-) but I'm now a little bit confused because you said that it doesn't add features which didn't exist before.
                What the commits to the Intel DDX do is simplify the Bumblebee approach of using the binary nvidia driver to create a second X server to control the external GPU and displays, but present that as an extension to the first X server (using -intel). (With a standard setup this should be as easy as startx & intel-virtual-output, which presumes that the X server finds both GPUs and assigns :0.0 to -intel and :0.1 to -nvidia.)

                The alternative approach is to use -nouveau and PRIME.
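                The description above can be sketched as a terminal session. This is only an illustration of the workflow ickle describes, not taken from the thread; output names like VIRTUAL1 and LVDS1 are examples, so check xrandr on your own machine:

```shell
# Sketch of the setup described above, assuming the X server has already
# created screen :0.0 on -intel and :0.1 on the nvidia driver.
startx &                # primary X server on the Intel GPU
intel-virtual-output    # bridge :0.1's outputs into :0.0 as virtual heads
# The discrete GPU's outputs should now appear on :0.0 as VIRTUAL1,
# VIRTUAL2, ... and can be enabled like any other RandR output:
xrandr --output VIRTUAL1 --auto --right-of LVDS1
```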

                • #9
                  So, where do we start if we want to try this? Assuming it lands soon in xorg-edgers, what tools do we need to get it running (bbswitch to wake up the discrete GPU, some tool to configure the virtual head, something to make sure the application's OpenGL rendering is done by the nvidia card, then kill the virtual head and bbswitch again to switch off the discrete GPU)?

                  Could this be mixed with the new nvidia RandR 1.4 support (can we get the nvidia card to render directly to the virtual head, with the new zero-copy option)?
                  Last edited by Nepenthes; 02 September 2013, 11:17 AM.

                  • #10
                    Originally posted by Nepenthes View Post
                    So, where do we start if we want to try this? Assuming it lands soon in xorg-edgers, what tools do we need to get it running (bbswitch to wake up the discrete GPU, some tool to configure the virtual head, something to make sure the application's OpenGL rendering is done by the nvidia card, then kill the virtual head and bbswitch again to switch off the discrete GPU)?
                    By the point where I was comfortable writing this update, the process for using the outputs on the nvidia card was:
                    0. Install the latest Intel drivers
                    1. apt-get install bumblebee-nvidia
                    2. Modify the bumblebee configuration to enable outputs on the discrete GPU

                    Code:
                    --- xorg.conf.nvidia.orig	2013-09-02 23:06:04.628948519 +0000
                    +++ xorg.conf.nvidia	2013-09-02 21:16:42.617008324 +0000
                    @@ -29,6 +29,6 @@
                         Option "ProbeAllGpus" "false"
                     
                         Option "NoLogo" "true"
                    -    Option "UseEDID" "false"
                    -    Option "UseDisplayDevice" "none"
                    +    #Option "UseEDID" "false"
                    +    #Option "UseDisplayDevice" "none"
                     EndSection
                    3. Run intel-virtual-output [which automatically detects bumblebee and requests an X server for the Nvidia GPU]
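                    Put together, the steps might look like this in a shell. The config path /etc/bumblebee/xorg.conf.nvidia follows the Debian/Ubuntu bumblebee packaging (an assumption, not from the thread), and the sed one-liner just comments out the two options from the diff above; here it is demonstrated on a local copy of the file:

```shell
# Hypothetical end-to-end version of steps 1-3. On a real system you would
# first run `sudo apt-get install bumblebee-nvidia` (step 1) and then edit
# /etc/bumblebee/xorg.conf.nvidia; the edit is shown here on a local copy.
printf '%s\n' \
    '    Option "ProbeAllGpus" "false"' \
    '' \
    '    Option "NoLogo" "true"' \
    '    Option "UseEDID" "false"' \
    '    Option "UseDisplayDevice" "none"' > xorg.conf.nvidia
# Step 2: comment out UseEDID and UseDisplayDevice, as in the diff above.
sed -E -i 's/^( *Option "Use(EDID|DisplayDevice)")/#\1/' xorg.conf.nvidia
grep -c '^#' xorg.conf.nvidia    # -> 2 (both options now commented out)
# Step 3 would then be: intel-virtual-output
```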

                    Could this be mixed with the new nvidia RandR 1.4 support (can we get the Nvidia card to render directly to the virtual head, with the new zero-copy option)?
                    There is nothing stopping you from trying... In theory, PRIME should be able to negotiate zero-copy support just as well through the kernel (and hopefully control the synchronisation better, and so be easier to use, perform better, and integrate better into power management, etc.).
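                    For comparison, the kernel-level PRIME route recommended earlier in the thread is driven by the RandR 1.4 provider commands. This is only a sketch assuming nouveau rather than the binary nvidia driver; the provider names "Intel" and "nouveau" are typical but vary by driver version, so check --listproviders first:

```shell
# Inspect the GPU providers known to the running X server (RandR 1.4+).
xrandr --listproviders
# Render offload: let nouveau render for the Intel-driven screen, then
# launch an application on the discrete GPU with DRI_PRIME=1.
xrandr --setprovideroffloadsink nouveau Intel
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"
# Or drive the outputs wired to the discrete GPU from the Intel desktop:
xrandr --setprovideroutputsource nouveau Intel
```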
