NVIDIA 361.16 Beta Driver Now Includes Long-Awaited GLVND
-
Originally posted by bug77 View Post
And the relation of what you just said to Wayland is?
Originally posted by pinguinpc View Post
Anyone know how long this took to release? I am pretty sure it was presented at least 2 years ago, with radio silence (at least for those not involved) ever since then.
-
Originally posted by GreatEmerald View Post
The main difference is that libglvnd allows both drivers to coexist. That means you don't have to blacklist and overwrite one to use the other. It's much cleaner, and you can switch between the two on the fly (in fact both can run at the same time on different screens if you want).
The features Nvidia's approach allows are a real step up in flexibility and options compared to the Debian approach.
Last edited by plonoma; 06 January 2016, 06:19 PM.
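With a libglvnd-enabled stack, that on-the-fly switching is just an environment variable: the GLX layer reads __GLX_VENDOR_LIBRARY_NAME to pick the vendor library per process. A sketch, assuming both vendor ICDs are installed and the driver stack is glvnd-aware:

```shell
# Run one client against the NVIDIA implementation...
__GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL vendor"

# ...while another client in the same session uses Mesa,
# with no blacklisting or library overwriting needed.
__GLX_VENDOR_LIBRARY_NAME=mesa glxgears
```

The actual output depends on the installed drivers and display, so this is only illustrative.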
-
Originally posted by atomsymbol View Post
GLVND is definitely a step in the right direction.
It will most likely allow rendering OpenGL content (games, ...) on an NVIDIA GPU and displaying the rendered image on a screen connected to an AMD APU or an Intel CPU.
But I would like to see a system offering similar functionality without an extra library such as libglvnd.
Linux could use some improvement in how it deals with APIs and their hardware implementations. For software using APIs such as OpenGL, which may have multiple backends, the best thing to do is to implement inversion of control for resolving dependencies: a dependency-management layer that connects applications using an API with the hardware and software implementations that provide it.
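As a minimal sketch of that inversion-of-control idea (everything here is hypothetical and invented for illustration, not an existing Linux facility): a resolver owns a registry mapping an API name to its available backends, and applications ask the resolver for an implementation instead of linking a vendor library directly.

```shell
#!/usr/bin/env bash
# Hypothetical registry: API backend name -> concrete library path.
# The paths are made up for this sketch.
declare -A opengl_backends=(
  [nvidia]="/usr/lib/glvnd/libGLX_nvidia.so"
  [mesa]="/usr/lib/glvnd/libGLX_mesa.so"
)

# Applications request "an OpenGL implementation" by name; the resolver
# decides which concrete library satisfies the request.
resolve_backend() {
  local impl="${opengl_backends[$1]:-}"
  if [[ -z "$impl" ]]; then
    echo "no backend registered for '$1'" >&2
    return 1
  fi
  echo "$impl"
}

resolve_backend mesa   # prints /usr/lib/glvnd/libGLX_mesa.so
```

The point of the inversion is that the application never names a vendor library; only the resolver does, so backends can be added or swapped without touching the application.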
-
Originally posted by plonoma View Post
Sounds like Nvidia's approach is much better than the Debian approach.
Those features Nvidia's approach allows are really a step up in flexibility and options compared to the Debian approach.
Debian's approach lets you keep several versions of gcc, python, java and whatnot installed and easily (in theory, at least) switch between them.
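The Debian mechanism being described is update-alternatives, which manages symlinks under /etc/alternatives. A usage sketch (which alternatives exist depends on what is installed on the system):

```shell
# Show which implementations are registered for the "java" name
# and which one the /usr/bin/java symlink currently points to.
update-alternatives --display java

# Interactively choose which installed implementation becomes the default.
sudo update-alternatives --config java
```

Unlike the glvnd approach, this switches a system-wide symlink, so only one implementation is active at a time.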
-
Did they do something to nvidia-smi?
Code:
$ nvidia-smi -a

==============NVSMI LOG==============

Timestamp                           : Sat Jan 9 22:07:55 2016
Driver Version                      : 361.16

Attached GPUs                       : 1
GPU 0000:01:00.0
    Product Name                    : GeForce GTX 750 Ti
    Product Brand                   : GeForce
    Display Mode                    : Enabled
    Display Active                  : Enabled
    Persistence Mode                : Disabled
    Accounting Mode                 : Disabled
    Accounting Mode Buffer Size     : 1920
    Driver Model
        Current                     : N/A
        Pending                     : N/A
    Serial Number                   : N/A
    GPU UUID                        : GPU-166d5f95-13dc-1d24-87e1-cb59a50b3832
    Minor Number                    : 0
    VBIOS Version                   : 82.07.25.00.13
    MultiGPU Board                  : No
    Board ID                        : 0x100
    GPU Part Number                 : N/A
    Inforom Version
        Image Version               : N/A
        OEM Object                  : N/A
        ECC Object                  : N/A
        Power Management Object     : N/A
    GPU Operation Mode
        Current                     : N/A
        Pending                     : N/A
    PCI
        Bus                         : 0x01
        Device                      : 0x00
        Domain                      : 0x0000
        Device Id                   : 0x138010DE
        Bus Id                      : 0000:01:00.0
        Sub System Id               : 0x84A61043
        GPU Link Info
            PCIe Generation
                Max                 : 3
                Current             : 1
            Link Width
                Max                 : 16x
                Current             : 16x
        Bridge Chip
            Type                    : N/A
            Firmware                : N/A
        Replays since reset         : 0
        Tx Throughput               : 12000 KB/s
        Rx Throughput               : 166000 KB/s
    Fan Speed                       : 22 %
    Performance State               : P8
    Clocks Throttle Reasons
        Idle                        : Active
        Applications Clocks Setting : Not Active
        SW Power Cap                : Not Active
        HW Slowdown                 : Not Active
        Unknown                     : Not Active
    FB Memory Usage
        Total                       : 2047 MiB
        Used                        : 227 MiB
        Free                        : 1820 MiB
    BAR1 Memory Usage
        Total                       : 256 MiB
        Used                        : 3 MiB
        Free                        : 253 MiB
    Compute Mode                    : Default
    Utilization
        Gpu                         : 1 %
        Memory                      : 4 %
        Encoder                     : 0 %
        Decoder                     : 0 %
    Ecc Mode
        Current                     : N/A
        Pending                     : N/A
    ECC Errors
        Volatile
            Single Bit
                Device Memory       : N/A
                Register File       : N/A
                L1 Cache            : N/A
                L2 Cache            : N/A
                Texture Memory      : N/A
                Total               : N/A
            Double Bit
                Device Memory       : N/A
                Register File       : N/A
                L1 Cache            : N/A
                L2 Cache            : N/A
                Texture Memory      : N/A
                Total               : N/A
        Aggregate
            Single Bit
                Device Memory       : N/A
                Register File       : N/A
                L1 Cache            : N/A
                L2 Cache            : N/A
                Texture Memory      : N/A
                Total               : N/A
            Double Bit
                Device Memory       : N/A
                Register File       : N/A
                L1 Cache            : N/A
                L2 Cache            : N/A
                Texture Memory      : N/A
                Total               : N/A
    Retired Pages
        Single Bit ECC              : N/A
        Double Bit ECC              : N/A
        Pending                     : N/A
    Temperature
        GPU Current Temp            : 30 C
        GPU Shutdown Temp           : 101 C
        GPU Slowdown Temp           : 96 C
    Power Readings
        Power Management            : Supported
        Power Draw                  : 1.00 W
        Power Limit                 : 38.50 W
        Default Power Limit         : 38.50 W
        Enforced Power Limit        : 38.50 W
        Min Power Limit             : 30.00 W
        Max Power Limit             : 38.50 W
    Clocks
        Graphics                    : 135 MHz
        SM                          : 135 MHz
        Memory                      : 405 MHz
        Video                       : 405 MHz
    Applications Clocks
        Graphics                    : 1019 MHz
        Memory                      : 2700 MHz
    Default Applications Clocks
        Graphics                    : 1019 MHz
        Memory                      : 2700 MHz
    Max Clocks
        Graphics                    : 1293 MHz
        SM                          : 1293 MHz
        Memory                      : 2700 MHz
        Video                       : 1164 MHz
    Clock Policy
        Auto Boost                  : N/A
        Auto Boost Default          : N/A
    Processes
        Process ID                  : 1822
            Type                    : G
            Name                    : /usr/bin/X
            Used GPU Memory         : 113 MiB
        Process ID                  : 2870
            Type                    : G
            Name                    : kwin
            Used GPU Memory         : 18 MiB
        Process ID                  : 3118
            Type                    : G
            Name                    : /usr/lib/chromium-browser/chromium-browser --type=gpu-process --channel=3081.0.661353739 --disable-breakpad --supports-dual-gpus=false --gpu-driver-bug-workarounds=1,15,21,24,44 --gpu-vendor-id=0x10de --gpu-device-id=0x1380 --gpu-driver-vendor=NVIDIA --gpu-driver-version=361.16
            Used GPU Memory         : 84 MiB
-
Originally posted by bug77 View Post
Well, Nvidia's approach only has to accommodate video drivers.
Debian's approach lets you keep several versions of gcc, python, java and whatnot installed and easily (in theory, at least) switch between them.
Don't compare two approaches with different scopes without making sure you understand the difference between approach and scope.
I would like to see some sort of dependency-injection management system for everything instead.
Last edited by plonoma; 17 January 2016, 11:46 AM.