Two Hacks For The NVIDIA Linux Graphics Driver


  • #16
    A tricky way to enable the Quadro OpenGL code is still missing. fglrx has the same artificial limitation; it can be patched in a very simple way, but in my benchmarks there was no difference for games, only for SPECviewperf, and I don't use any tool that it simulates. Basically, radeon could enable all FireGL features on standard Radeon hardware as well.



    • #17
      Originally posted by Kano View Post
      A tricky way to enable the Quadro OpenGL code is still missing. fglrx has the same artificial limitation; it can be patched in a very simple way, but in my benchmarks there was no difference for games, only for SPECviewperf, and I don't use any tool that it simulates. Basically, radeon could enable all FireGL features on standard Radeon hardware as well.
      The 680 and Titan can be hardware-modded into Quadros; it's all about soldering a few resistors.

      But the problem is that Nvidia does not want that. It does not want modding, it does not want overclocking, and it does not want these features turned on on a GeForce or a "pseudo-Quadro".

      The root of the problem is Nvidia's greed. It wants you to pay a four-digit amount to have them. For example, the very same GPUs power Tesla and Quadro; each sale earns Nvidia tenfold to a hundredfold the income (card cost, insurance cost, replacement-guarantee cost; e.g., check Nvidia's Amazon deals) compared to an under-$200 consumer GPU.
      In the case of GeForce, the GPU is sold at prices nearly equal to production cost. Nvidia does not want you to burn out its GPU, but it does want you to prefer its GPU to the competition's, purely for the sake of market share.
      This is idiotic beyond measure; once you understand this, you would hardly ever purchase Nvidia.
      Last edited by brosis; 07-22-2013, 06:03 AM.



      • #18
        Originally posted by brosis View Post
        The 680 and Titan can be hardware-modded into Quadros; it's all about soldering a few resistors.
        As the Linux PCI subsystem is completely open source, one could always patch it to report the PCI ID of a Quadro whenever it detects a GeForce. Wouldn't that be enough to fool the driver into thinking it's a Quadro without needing to get out the soldering iron, or is there more to it than that?



        • #19
          Originally posted by brosis View Post
          The 680 and Titan can be hardware-modded into Quadros; it's all about soldering a few resistors.

          But the problem is that Nvidia does not want that. It does not want modding, it does not want overclocking, and it does not want these features turned on on a GeForce or a "pseudo-Quadro".

          The root of the problem is Nvidia's greed. It wants you to pay a four-digit amount to have them. For example, the very same GPUs power Tesla and Quadro; each sale earns Nvidia tenfold to a hundredfold the income (card cost, insurance cost, replacement-guarantee cost; e.g., check Nvidia's Amazon deals) compared to an under-$200 consumer GPU.
          In the case of GeForce, the GPU is sold at prices nearly equal to production cost. Nvidia does not want you to burn out its GPU, but it does want you to prefer its GPU to the competition's, purely for the sake of market share.
          This is idiotic beyond measure; once you understand this, you would hardly ever purchase Nvidia.
          Meanwhile: people complain about AMD's drivers and FOSS in general.



          • #20
            If Nvidia works similarly to AMD, then the chip itself reports that it is a workstation chip; it has nothing to do with the PCI ID. The ID does not change, but you can fake the result.



            • #21
              What is it with "nvidia-smi"? It works just fine on my 580 with the latest beta driver and reports running CUDA applications, fan speed, etc.



              • #22
                Originally posted by blackout23 View Post
                What is it with "nvidia-smi"? It works just fine on my 580 with the latest beta driver and reports running CUDA applications, fan speed, etc.
                See if it reports GPU utilization with this:
                Code:
                nvidia-smi -q --display=UTILIZATION
                --

                I've had an idea to benchmark compositing performance with this:
                Code:
                #!/bin/bash
                # Point nvidia-smi at the patched NVML library so it reports utilization.
                export LD_LIBRARY_PATH=/tmp/nvml_fix/built/319.32/
                while true ; do
                    # Pull the "Gpu : NN %" value out of the utilization section.
                    gpuuse=$(nvidia-smi -q --display=UTILIZATION | grep Gpu | sed 's/ //g' | cut -d ":" -f 2 | cut -d "%" -f 1)
                    echo -n "$gpuuse "
                    #test "$gpuuse" -gt 1 && echo "higher" || echo "lower"
                    sleep 0.5
                done
                The commented line is something I plan to use to make my GPU switch performance profiles better than the default Nvidia strategy, which often produces lag in webpage scrolling or window moving (my stock low-power clocks are REALLY low).
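
                Something like this is what I have in mind (an untested sketch on my part: it assumes the driver exposes the GPUPowerMizerMode attribute through nvidia-settings, and the 1% threshold is just a placeholder):
                Code:
                #!/bin/bash
                # Untested sketch: switch PowerMizer profiles based on GPU utilization.
                # Assumes [gpu:0]/GPUPowerMizerMode exists (0 = adaptive, 1 = prefer maximum performance).
                export LD_LIBRARY_PATH=/tmp/nvml_fix/built/319.32/
                while true ; do
                    gpuuse=$(nvidia-smi -q --display=UTILIZATION | grep Gpu | sed 's/ //g' | cut -d ":" -f 2 | cut -d "%" -f 1)
                    if [ "$gpuuse" -gt 1 ]; then
                        nvidia-settings -a "[gpu:0]/GPUPowerMizerMode=1" > /dev/null
                    else
                        nvidia-settings -a "[gpu:0]/GPUPowerMizerMode=0" > /dev/null
                    fi
                    sleep 0.5
                done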

                Anyway, you can monitor the GPU use at little CPU cost (0% CPU on an E7500 @ 3 GHz).
                Code:
                GPU:  9500GT
                Vsync: enabled (without vsync you always have 100% GPU)
                App: glxgears (that is not a benchmark, but in this case it IS)
                WM: Kwin-4.10.5
                Resolution: 1280x1024
                Maximized as a frontmost window: gpu=20% 
                Effects disabled: gpu=10%
                Fullscreen and window unredirected (so effects enabled): 11%
                It would be nice if this could be added to PTS as an Nvidia-specific test.



                • #23
                  Originally posted by thefirstm View Post
                  I am pretty sure that the pixels of LCD monitors remain "lit" continuously without regard to the refresh rate of the signal. This is, in my opinion, what makes them so much better than CRTs and plasmas because they never flicker, even at "low" refresh rates like 60hz.
                  On any active-matrix display, yes, the pixels stay lit constantly. Pretty much all modern TFT LCDs are active-matrix LCDs. This also applies to some plasma displays and OLED displays.

                  The limiting factor on the refresh rate of LCDs is mostly the pixel response time, i.e. the time it takes for a pixel to transition from completely lit to completely dark, or vice versa.
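
                  To put rough numbers on that (my own back-of-the-envelope figures, not from any datasheet): at 120 Hz each frame lasts 1000/120 ≈ 8.3 ms, versus about 16.7 ms at 60 Hz, so a panel whose pixels need, say, 10 ms to fully transition keeps up at 60 Hz but is still mid-transition when the next frame arrives at 120 Hz.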



                  • #24
                    These seem like pretty nice patches. While they're very niche, they are practical and I can't imagine they're the easiest things to figure out.



                    • #25
                      Originally posted by CFSworks View Post
                      While there's no 1440p monitor that will advertise support for 120Hz, since it's way outside of the DVI spec, there are a few monitors that have been found capable of overclocking that high.

                      (As an FYI, both monitors I mentioned have incorrect EDID checksums, for some reason - you'll have to configure your display server to ignore that or the monitor will not be detected correctly.)
                      Probably whatever "vendor" hacked the EDID to expose the 120Hz mode didn't update the EDID properly to fix the checksum.
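
                      For reference, the EDID base block is 128 bytes whose values must sum to 0 mod 256, so a bad checksum is easy to spot. Something along these lines checks it from Linux (a quick sketch; the sysfs path is an example you would adjust for your connector):
                      Code:
                      #!/bin/bash
                      # Sketch: verify the 128-byte EDID base-block checksum.
                      edid=/sys/class/drm/card0-DVI-I-1/edid
                      sum=0
                      for byte in $(od -An -tu1 -N128 "$edid"); do
                          sum=$(( (sum + byte) % 256 ))
                      done
                      if [ "$sum" -eq 0 ]; then
                          echo "EDID checksum OK"
                      else
                          echo "EDID checksum invalid (bytes sum to $sum mod 256)"
                      fi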



                      • #26
                        Originally posted by agd5f View Post
                        Probably whatever "vendor" hacked the EDID to expose the 120Hz mode didn't update the EDID properly to fix the checksum.
                        But that would make too much sense.
                        The vendor never intended for these to do 120Hz, so the EDID only reports 2560x1440@60Hz. To get 120, you have to write the modeline yourself.
                        My guess is the OEM only tested the EDID on Windows, which seems to ignore the EDID checksum.
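
                        Regarding writing the modeline yourself, this is roughly how you would add such a mode by hand (an untested sketch; the output name is a placeholder, and depending on the nvidia driver's mode validation you may instead need to put the modeline and ModeValidation options in xorg.conf, as comes up later in the thread):
                        Code:
                        #!/bin/bash
                        # Untested sketch: build a reduced-blanking 2560x1440 @ 120 Hz modeline
                        # with cvt and try to apply it via xrandr. Replace DVI-I-1 with your output.
                        output=DVI-I-1
                        name=2560x1440_120
                        # Keep only the timing numbers from cvt's "Modeline" line.
                        timings=$(cvt -r 2560 1440 120 | awk '/Modeline/ { $1=""; $2=""; print }')
                        xrandr --newmode "$name" $timings
                        xrandr --addmode "$output" "$name"
                        xrandr --output "$output" --mode "$name"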



                        • #27
                          Originally posted by CFSworks View Post
                          But that would make too much sense.
                          The vendor never intended for these to do 120Hz, so the EDID only reports 2560x1440@60Hz. To get 120, you have to write the modeline yourself.
                          My guess is the OEM only tested the EDID on Windows, which seems to ignore the EDID checksum.
                          Some third parties sell pre-hacked monitors with a hacked EDID on eBay for people looking to use 120 Hz modes. Some users also hack their EDIDs themselves to add the 120 Hz modes. It's also possible the original vendor simply messed up the EDID.



                          • #28
                            Originally posted by agd5f View Post
                            Some third parties sell pre-hacked monitors with a hacked EDID on eBay for people looking to use 120 Hz modes. Some users also hack their EDIDs themselves to add the 120 Hz modes. It's also possible the original vendor simply messed up the EDID.
                            Possible, but that sounds like a lot more effort than doing it in software; what benefit is there in doing that? You already have to resort to software modifications (in order to overcome the 400 MHz limit), so it seems pointless to waste time figuring out how to modify the monitor's microcontroller just to add a mode that an unmodified display driver is immediately going to reject anyway.

                            Regardless, no, these monitors do not report any mode other than 2560x1440@60Hz. Instead, I think the OEM that makes the PCBs (which seems to be the same across the Catleap, QX2710, and X-Star DP2710) simply didn't verify their EDID checksum, and shipped it without ever realizing that it was wrong.



                            • #29
                              Checksum error QNIX2710

                              Originally posted by CFSworks View Post
                              (As an FYI, both monitors I mentioned have incorrect EDID checksums, for some reason - you'll have to configure your display server to ignore that or the monitor will not be detected correctly.)
                              I'm a newbie with xorg.conf and EDID. I also have a couple of QNIX2710 monitors that give a checksum error in Xorg.0.log. This was not a problem under stock Ubuntu 12.04, but it appeared after I installed the Nvidia CUDA toolkit.

                              Do you mind pointing me towards any resources that show me how to ignore checksum errors, and how to set up a correct Monitor section in xorg.conf for this monitor?

                              Thank you



                              • #30
                                Originally posted by koolkao View Post
                                Do you mind pointing me towards any resources that show me how to ignore checksum errors, and how to set up a correct Monitor section in xorg.conf for this monitor?

                                Thank you
                                Oh, not at all! Here's what my xorg.conf looks like: https://gist.github.com/CFSworks/6101954
                                Note: I'm not an expert at writing xorg.conf files, so please don't use mine as a prime example. Also, I don't know if those timings are optimal, but they do produce pretty stable 120Hz output for me, at least.
                                Note #2: When you use NoEdidModes, the nvidia driver seems to insert an 800x600 mode on its own. Using this mode will not work; your monitor will start displaying a bunch of test patterns until you set it back to 2560x1440.

                                For an explanation of all the driver options, see Appendix D of the driver manual.

                                Hope this helps!

