Intel Driver Gains Virtual/Remote Output Support
Originally posted by ickle: By the point where I was comfortable writing this update, the process for using the outputs on the nvidia card was:
0. Install the latest Intel drivers
1. apt-get install bumblebee-nvidia
2. Modify the bumblebee configuration to enable outputs on the discrete GPU
[...]
3. Run intel-virtual-output [which automatically detects bumblebee and requests an Xserver for the Nvidia GPU]
As Gentoo already has an updated ebuild for the new version, I gave it a try today following the steps outlined above. Setting it up worked perfectly, including activating the screen connected to the NVIDIA GPU: the external display turns on and shows my desktop background, and moving the mouse to the second screen also works. Unfortunately that's it. Whatever window I drag over to the external screen is not shown, and no popup menus of my desktop environment appear on the second display either. No idea what's going on there.
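For anyone else hitting this, a quick sanity check once intel-virtual-output is running is to make sure the virtual head is actually enabled and positioned in RandR (VIRTUAL1 and eDP1 below are only example output names; yours will differ):
Code:
# with intel-virtual-output running, the discrete GPU's connectors
# show up on the Intel X screen as VIRTUAL outputs
xrandr | grep -i virtual

# enable the virtual head and place it next to the panel
# (replace VIRTUAL1/eDP1 with whatever names your setup reports)
xrandr --output VIRTUAL1 --auto --right-of eDP1
If the output shows as connected and active there but windows still refuse to appear, the problem is more likely in the desktop environment or compositor than in the transport itself.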
-
Originally posted by Nepenthes: So, where do we start if we want to try this? Assuming it lands soon in xorg-edgers, what tools do we need to get it running (bbswitch to wake up the discrete GPU, some tool to configure the virtualhead and to make sure the application's OpenGL rendering is done by the nvidia card, then kill the virtualhead and bbswitch again to switch off the discrete GPU)?
0. Install the latest Intel drivers
1. apt-get install bumblebee-nvidia
2. Modify the bumblebee configuration to enable outputs on the discrete GPU
Code:
--- xorg.conf.nvidia.orig	2013-09-02 23:06:04.628948519 +0000
+++ xorg.conf.nvidia	2013-09-02 21:16:42.617008324 +0000
@@ -29,6 +29,6 @@
     Option "ProbeAllGpus" "false"
     Option "NoLogo" "true"
-    Option "UseEDID" "false"
-    Option "UseDisplayDevice" "none"
+    #Option "UseEDID" "false"
+    #Option "UseDisplayDevice" "none"
 EndSection
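Put together, the above boils down to roughly this shell sketch (the config path /etc/bumblebee/xorg.conf.nvidia is bumblebee's usual location on Debian/Ubuntu-style installs; package names and paths may differ on other distributions):
Code:
# 0. latest Intel DDX (from your distro or xorg-edgers)
sudo apt-get install xserver-xorg-video-intel

# 1. bumblebee with the proprietary driver
sudo apt-get install bumblebee-nvidia

# 2. comment out the UseEDID/UseDisplayDevice options as in the diff above
sudo editor /etc/bumblebee/xorg.conf.nvidia

# 3. start the bridge; it detects bumblebee, requests an X server on the
#    nvidia GPU, and exposes its connectors as VIRTUAL outputs
intel-virtual-output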
Could this be mixed with the new nvidia RandR 1.4 support (can we get the Nvidia card to render directly to the virtualhead, with the new zero copy option)?
-
So, where do we start if we want to try this? Assuming it lands soon in xorg-edgers, what tools do we need to get it running (bbswitch to wake up the discrete GPU, some tool to configure the virtualhead and to make sure the application's OpenGL rendering is done by the nvidia card, then kill the virtualhead and bbswitch again to switch off the discrete GPU)?
Could this be mixed with the new nvidia RandR 1.4 support (can we get the nvidia card to render directly to the virtualhead, with the new zero copy option)?
Last edited by Nepenthes; 02 September 2013, 11:17 AM.
-
Originally posted by aelo: But I have a question about this new feature: does this allow me to use the DisplayPort connection in my notebook?
Because it is hardwired to the nvidia chip and up until now there hasn't been a way to use it on Linux (afaik) with the intel driver. I interpreted the news as meaning this could finally be possible :-) but I'm now a little bit confused because you said that it doesn't add features which did not exist before.
The alternative approach is to use -nouveau and PRIME.
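For completeness, that route looks roughly like this on a RandR 1.4 capable Xserver (the provider names "nouveau" and "Intel" and the output name DP-1 are examples from a typical setup and will vary):
Code:
# list the render/output providers the X server knows about
xrandr --listproviders

# let the Intel GPU keep rendering while nouveau drives its own connectors
# (use the provider names reported by --listproviders)
xrandr --setprovideroutputsource nouveau Intel

# the discrete GPU's outputs now appear and can be enabled as usual
xrandr --output DP-1 --auto --right-of eDP1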
-
Hi,
Thanks for your great job on the Intel graphics driver! My Thinkpad T430 works very well with the intel open source driver!
But I have a question about this new feature: does this allow me to use the DisplayPort connection in my notebook?
Because it is hardwired to the nvidia chip and up until now there hasn't been a way to use it on Linux (afaik) with the intel driver. I interpreted the news as meaning this could finally be possible :-) but I'm now a little bit confused because you said that it doesn't add features which did not exist before.
Thanks in advance!
Kind regards
Michael
-
Originally posted by dh04000: So it allows TheBumblebeeProject-like projects to work natively with the intel driver if one doesn't want to use nvidia's prime or amd's solution? Am I getting that right?
-
Originally posted by ickle: None at all. It doesn't claim to do anything new or in a better way. Somebody asked me why their Bumblebee integration code didn't work and showed me the hack they were using. Since, as it turned out, I had very similar code in the driver for another purpose, I made that piece of code replace their hack.
-
Originally posted by Nepenthes: I'm not sure I'm actually getting what this commit is about.
I've been familiar with Bumblebee, VirtualGL and Primus for quite a while, and now I'm using nvidia-prime, so I understand the difference between what Bumblebee does and what PRIME does (the zero copy performance advantage...).
But here I don't get it... What does it do exactly? Can it be set up as a transport option for Bumblebee (with no other, or just a little, piece of code needed)? Is there any performance gain compared to Primus? Does it allow switching off the nvidia card when you're not using it?
Yes, this is only for those people who choose not to use PRIME. The kernel level integration with PRIME is the right approach from the performance, power and usability standpoint. And the VirtualHeads can be made driver independent by building that functionality into the Xserver (along with the external transport process) and using providers - i.e. accelerated Xvnc.
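For the record, the PRIME route referred to above looks roughly like this from the user's side, assuming open drivers on both GPUs (the renderer strings naturally depend on the hardware):
Code:
# render a single application on the discrete GPU, display via the integrated one
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"
DRI_PRIME=1 glxgears

# everything else keeps rendering on the integrated GPU, so the discrete
# GPU can power down again once nothing is using it
glxinfo | grep "OpenGL renderer"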
-
I'm not sure I'm actually getting what this commit is about.
I've been familiar with Bumblebee, VirtualGL and Primus for quite a while, and now I'm using nvidia-prime, so I understand the difference between what Bumblebee does and what PRIME does (the zero copy performance advantage...).
But here I don't get it... What does it do exactly? Can it be set up as a transport option for Bumblebee (with no other, or just a little, piece of code needed)? Is there any performance gain compared to Primus? Does it allow switching off the nvidia card when you're not using it?