NVIDIA GeForce RTX 2060 Linux Performance From Gaming To TensorFlow & Compute


  • #21
    Originally posted by Dedale View Post

    Thank you for your kind answer. May I ask you why? Is it by design or some technical difficulty?

    A well-regarded French website used to do tests that included the wattage of the cards themselves and sometimes thermal images taken with an IR camera. I guess IR cameras aren't cheap. They ceased when their main tester left for greener pastures.

    If you are interested here is an example: https://www.hardware.fr/articles/957...photos-ir.html
    Well, with a WattsUp Pro that PTS can poll the AC power data from, it's very easy to monitor the real-time power consumption... I'm not aware of any easy/accurate way of monitoring just the power usage of a single PCI Express slot and PCIe power connector(s) and not the rest of the system. The Phoronix Test Suite does support reading NVIDIA's exposed 'power consumption' sensor, but that seems to be an estimate and just for the GPU core itself, not the related graphics card circuitry. So I'm not sure how that French site was doing it, whether they were just going off the value exposed by the NVIDIA driver, some math estimates, or something else.
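    A minimal sketch of polling that driver-exposed sensor through NVML via the pynvml Python bindings; the device index and one-second sampling interval are arbitrary choices, and the value carries the same caveat as above (it is NVIDIA's own estimate, not measured board power):

```python
import time
import pynvml

# Poll the driver-reported GPU power sensor (the same value nvidia-smi shows).
# Caveat from the post above: this is NVIDIA's estimate for the GPU itself,
# not a measurement of total slot + PCIe-connector draw.
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU; adjust as needed
try:
    for _ in range(10):
        milliwatts = pynvml.nvmlDeviceGetPowerUsage(handle)
        print(f"driver-reported power: {milliwatts / 1000.0:.1f} W")
        time.sleep(1.0)
finally:
    pynvml.nvmlShutdown()
```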
    Michael Larabel
    https://www.michaellarabel.com/



    • #22
      Originally posted by Michael View Post

      (...) So I'm not sure how that French site was doing it, whether they were just going off the value exposed by the NVIDIA driver, some math estimates, or something else.
      I have a partial explanation here: https://www.hardware.fr/articles/781...raphiques.html

      They say that the additional power connectors are relatively easy because they can measure the voltage on the card and the current through each connector with a current clamp. How they did it for the PCIe slot connector they do not explain.

      But their tone suggests it was not trivial.

      They also say most of the energy goes through the 12V rail and not the 3.3V one, but this is not a surprise.

      Anyway, very interesting tests, thank you.
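      For illustration, the arithmetic behind that per-connector method is just volts times clamp-measured amps, summed over the inputs; the rail names and readings in this sketch are invented:

```python
# Hypothetical per-input readings (volts, amps), in the style described above:
# clamp the current on each power input and multiply by its measured voltage.
rails = {
    "PCIe slot 12V":   (12.1, 4.2),
    "PCIe slot 3.3V":  (3.3, 0.8),
    "8-pin connector": (12.0, 10.5),
}

total_w = 0.0
for name, (volts, amps) in rails.items():
    watts = volts * amps
    total_w += watts
    print(f"{name:16s} {watts:6.1f} W")
print(f"{'total board':16s} {total_w:6.1f} W")
```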



      • #23
        Originally posted by Dedale View Post

        I have a partial explanation here: https://www.hardware.fr/articles/781...raphiques.html

        They say that the additional power connectors are relatively easy because they can measure the voltage on the card and the current through each connector with a current clamp. How they did it for the PCIe slot connector they do not explain.

        But their tone suggests it was not trivial.

        They also say most of the energy goes through the 12V rail and not the 3.3V one, but this is not a surprise.

        Anyway, very interesting tests, thank you.
        Yeah, if it's not trivial it won't work out for my purposes, due to using different systems for different power tests, etc. Plus it would need a multimeter and current clamp that can interface via USB or so, in order to allow PTS to poll the data in real time. Most other sites seem to just use one manually recorded value (or idle and load values) as opposed to real-time values on a per-test basis as I do with PTS. So yeah, for better or worse there doesn't seem to be a better option currently than using the overall AC system power consumption, or putting faith in the driver-reported value that seems to cover just the GPU core itself. (I don't put much faith in it, though, after the days of the AMD fam15h_power Linux driver that reported the 'CPU power consumption' and never seemed to be accurate at all.)
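        Just to illustrate the per-test real-time sampling idea (this is a sketch, not PTS code; read_ac_watts() is a hypothetical stand-in for whatever USB/serial interface a given AC power meter exposes):

```python
import statistics
import subprocess
import time

def read_ac_watts():
    """Hypothetical stand-in for polling a USB AC power meter (e.g. a WattsUp).
    Replace with whatever interface your meter actually provides."""
    raise NotImplementedError

def run_with_power_log(cmd, interval=1.0):
    """Run a benchmark command while sampling wall power in real time,
    reporting per-test min/avg/max instead of a single spot reading."""
    samples = []
    proc = subprocess.Popen(cmd)
    while proc.poll() is None:
        samples.append(read_ac_watts())
        time.sleep(interval)
    samples.append(read_ac_watts())  # ensure at least one sample
    return min(samples), statistics.mean(samples), max(samples)

# Example (hypothetical benchmark command):
# low_w, avg_w, peak_w = run_with_power_log(["./run-benchmark.sh"])
```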
        Michael Larabel
        https://www.michaellarabel.com/



        • #24
          Originally posted by Kemosabe View Post

          And which CAD program for *Linux* is this supposed to be?
          I am working on this:


          Originally posted by Kemosabe View Post
           I can imagine that you have as few issues with a CAD program as with a game, though.
           Yes, and I am using older GLSL shaders (#version 130).

          Originally posted by Kemosabe View Post
           I'm referring more to the scientific field, which includes the development of production-ready programs.
           Nvidia's advertising strategy is highly aggressive and goes so far that it is not even possible to purchase anything other than Nvidia Quadro products for the labs as part of a university supplier agreement.
           Also, it is indeed a nightmare to develop in this setting, not only because of proprietary drivers in a heterogeneous environment with a number of LTS Linux distributions and their chronically slightly outdated software stacks, but also because of CUDA, which of course only runs on NVIDIA. Yet CUDA has such market dominance in the field that there is no way around it.
           I take it that you are using Red Hat or SUSE then?
           At my old work we had two Red Hat clusters and a SUSE cluster.
           We tried Debian as an experiment, but since Ansys didn't support Debian, and our IT admin wasn't able to convince management that he could keep that environment stable, they decided to switch away. I guess they didn't like the idea that the company would be dependent on him.
           I try to stay away from CUDA since I don't like the idea of one company owning a framework in that manner.
           When I get to implementing FEM/CFD solvers, I'm definitely going to look into PyViennaCL. I hope that OpenCL 1.1 will suffice, since Nvidia won't support 1.2.
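           A minimal host-side sketch of a kernel that needs nothing beyond baseline OpenCL features, written here with plain pyopencl rather than PyViennaCL; the array size and kernel name are arbitrary:

```python
import numpy as np
import pyopencl as cl

# Simple vector addition: nothing here requires more than OpenCL 1.0/1.1.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out)
{
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

prg.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)
```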

          What happens if you download and install the drivers directly from the Nvidia website?



          Originally posted by Kemosabe View Post
           And before you keep telling me that nvidia performs better: it might only be relevant in a one-man-show freelancer company, but it is simply unrealistic to assume that the latest generation of dramatically over-priced hardware is available.
           De facto, non-nvidia was the more efficient and cheaper solution.

          And for my private use NVIDIA is already a no-no due to EGLStream.
          So do you use CUDA or not then?



          • #25
            Originally posted by Dedale View Post
             They say that the additional power connectors are relatively easy because they can measure the voltage on the card and the current through each connector with a current clamp. How they did it for the PCIe slot connector they do not explain.

             But their tone suggests it was not trivial.
             Pretty sure it involves a custom rewired PCIe riser that runs the power cables / +voltage connections through a Hall-effect multimeter, possibly several multimeters.



            • #26
              Originally posted by torsionbar28 View Post

              x2, there really is no valid reason for a Linux user to choose nvidia these days.
               Hilarious! The entire article gives you 9 pages of valid reasons to do exactly that. Spankin' AMD's finest with their budget card in basically all games, and an embarrassing absence of red bars on the machine learning charts altogether!

               I've been an ATI/AMD fan since the 8514 Ultra (yeah, stone age)... and even I just cannot ignore the fact that under AMD's stewardship, ATI has been wrecked and Nvidia now owns the show. Let's HOPE AND PRAY that with AMD's newfound market capitalization, they can throw in the several billion dollars of investment that the RTG division desperately needs, just to survive. The way it's going now, and with time running out before Intel muscles in like a gorilla, RTG will be a console bit-player within 24 months and on the way to becoming the next Imagination Technologies. I.e., dead.
              Last edited by vegabook; 08 January 2019, 08:06 PM.



              • #27
                Originally posted by Dedale View Post
                 They say that the additional power connectors are relatively easy because they can measure the voltage on the card and the current through each connector with a current clamp. How they did it for the PCIe slot connector they do not explain.
                Originally posted by HenryM View Post
                Pretty sure it involves a custom rewired PCI-E riser that runs the power cables / +voltage connections through a hall-effect multimeter, possibly several multimeters.
                 A modern GPU's power consumption varies at high frequency, up to 100 kHz, over a range that is nearly the size of the TDP. Unless you've confirmed your meter can handle that, you might need a MHz-speed quad-channel digital oscilloscope. Tom's Hardware DE wrote about this in 2014, and they have a picture of the PCIe riser setup. (Also, some IR photos of an R9 295X2 with the VRMs above 100°C…)
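                 A toy numerical illustration of why the sampling rate matters (all numbers invented): a load that switches between a low and a high state at 100 kHz, resampled at different rates. A slow sampler that happens to be phase-locked to the switching sees only one of the two states, so both its average and its peak come out wrong:

```python
import numpy as np

FS_TRUE = 2_000_000        # 2 MHz "ground truth" timebase, 1 second window
SAMPLES_PER_PERIOD = 20    # 100 kHz square wave at that timebase

# Hypothetical board power: 60 W for half of each period, 220 W for the other half.
idx = np.arange(FS_TRUE)
power_w = np.where(idx % SAMPLES_PER_PERIOD < SAMPLES_PER_PERIOD // 2, 60.0, 220.0)

print(f"true mean {power_w.mean():.0f} W, true peak {power_w.max():.0f} W")
for rate_hz in (10, 1_000, 1_000_000):
    step = FS_TRUE // rate_hz
    seen = power_w[::step]
    print(f"sampled at {rate_hz:>9} Hz -> mean {seen.mean():6.1f} W, peak {seen.max():6.1f} W")
```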



                • #28
                  Originally posted by AndyChow View Post
                   The TensorFlow results are impressive. I've been having so many problems with ROCm, I might just get one, just for the compute.
                   I would agree with you about buying something like the RTX 2060 if I needed compute for work or for expensive studies. At the moment it's just an interest; I'm still hoping that the ROCm problems will be solved this year.
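                   For anyone sanity-checking a card for that kind of compute use, a minimal sketch of confirming that TensorFlow sees the GPU and will place work on it (TF 2.x API shown; the TF 1.x builds current at the time offered tf.test.is_gpu_available() instead):

```python
import tensorflow as tf

# Confirm TensorFlow can see the NVIDIA GPU at all.
gpus = tf.config.list_physical_devices('GPU')
print("GPUs visible to TensorFlow:", gpus)

# Pin a trivial op to the GPU; this raises if no GPU device is available.
with tf.device('/GPU:0'):
    a = tf.random.normal((2048, 2048))
    b = tf.random.normal((2048, 2048))
    c = tf.matmul(a, b)
print("matmul placed on:", c.device)
```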



                  • #29
                    Originally posted by pracedru View Post

                    I am working on this:




                     Yes, and I am using older GLSL shaders (#version 130).



                     I take it that you are using Red Hat or SUSE then?
                     At my old work we had two Red Hat clusters and a SUSE cluster.
                     We tried Debian as an experiment, but since Ansys didn't support Debian, and our IT admin wasn't able to convince management that he could keep that environment stable, they decided to switch away. I guess they didn't like the idea that the company would be dependent on him.
                     I try to stay away from CUDA since I don't like the idea of one company owning a framework in that manner.
                     When I get to implementing FEM/CFD solvers, I'm definitely going to look into PyViennaCL. I hope that OpenCL 1.1 will suffice, since Nvidia won't support 1.2.

                    What happens if you download and install the drivers directly from the Nvidia website?





                    So do you use CUDA or not then?
                     Uhm, nvidia has full support for OpenCL 1.2. It came very late, but every nvidia card since Kepler supports OpenCL 1.2.
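                     An easy way to check what the installed driver actually reports is to query each device's version string; a small sketch with pyopencl (on NVIDIA hardware this typically reads something like "OpenCL 1.2 CUDA"):

```python
import pyopencl as cl

# Print the OpenCL version string every installed platform/device reports.
for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(f"{platform.name} | {device.name} | {device.version}")
```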



                    • #30
                      Originally posted by phoronix View Post
                      Phoronix: NVIDIA GeForce RTX 2060 Linux Performance From Gaming To TensorFlow & Compute

                       Yesterday NVIDIA kicked off their week at CES by announcing the GeForce RTX 2060, the lowest-cost Turing GPU to date at just $349 USD, one that aims to deliver around the performance of the previous-generation GeForce GTX 1080. I only received my RTX 2060 yesterday for testing, but I have been putting it through its paces since then and have the initial benchmark results to deliver, ranging from OpenGL/Vulkan Linux gaming performance through various interesting GPU compute workloads. Also included in this testing are graphics cards going back to the Maxwell-based GeForce GTX 960, for an interesting look at how NVIDIA's Linux GPU performance has evolved.

                      http://www.phoronix.com/vr.php?view=27373
                       The tests are missing VGG16 fp32 & ResNet fp32 results.
