NVIDIA GeForce RTX 2060 Linux Performance From Gaming To TensorFlow & Compute
-
Originally posted by Michael:
(...) So not sure how that French site was doing it, if they were just going off the NVIDIA driver-exposed 'value' or just some math estimates or what.
They say that for the additional connectors it is relatively easy, because they can measure the voltage on the card and the current through the connector with a current clamp. They do not explain how they did it for the PCIe slot connector.
But their tone suggests it was not trivial.
They also say most of the energy goes through the 12V rail and not the 3.3V one, but this is not a surprise.
Anyway, very interesting tests, thank you.
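For illustration, the arithmetic per rail is just P = V × I summed over the card's power inputs; a rough sketch with made-up readings (not their data):

```python
# Per-rail GPU power from (voltage, clamp current) pairs.
# All numbers are made-up illustrative readings, not measurements.
rails = {
    "pcie_slot_12v": (12.05, 4.2),   # volts, amps drawn through the slot
    "pcie_slot_3v3": (3.31, 0.8),
    "aux_8pin_12v":  (12.10, 9.5),   # auxiliary power connector
}

total_w = sum(v * i for v, i in rails.values())
for name, (v, i) in rails.items():
    print(f"{name}: {v * i:.1f} W")
print(f"total board power: {total_w:.1f} W")
```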
-
Originally posted by Dedale:
I have a partial explanation here: https://www.hardware.fr/articles/781...raphiques.html
They say that for the additional connectors it is relatively easy, because they can measure the voltage on the card and the current through the connector with a current clamp. They do not explain how they did it for the PCIe slot connector.
But their tone suggests it was not trivial.
They also say most of the energy goes through the 12V rail and not the 3.3V one, but this is not a surprise.
Anyway, very interesting tests, thank you.
(...) So not sure how that French site was doing it, if they were just going off the NVIDIA driver-exposed 'value' or just some math estimates or what.
Michael Larabel
https://www.michaellarabel.com/
-
Originally posted by Kemosabe:
And which CAD program for *Linux* is this supposed to be?
Originally posted by Kemosabe:
I can imagine that you have as few issues with a CAD program as with a game, though.
Originally posted by Kemosabe:
I'm more referring to the scientific field, which includes the development of production-ready programs.
Nvidia's marketing is so aggressive that it is not even possible to purchase non-Nvidia (non-Quadro) products for the labs as part of a university supplier agreement.
Also, it is indeed a nightmare to develop in this situation: not only because of proprietary drivers in a heterogeneous environment of LTS Linux distributions with chronically slightly outdated software stacks, but also because of CUDA, which of course only runs on NVIDIA. Yet CUDA is so dominant in the field that there is no way around it.
At my old work we had two Red Hat clusters and a SUSE cluster.
We tried with Debian as an experiment, but since Ansys didn't support Debian and our IT admin guy wasn't able to convince the management that he would be able to keep that environment stable, they decided to switch away. I guess maybe they didn't like the idea that the company would be dependent on him.
I try to stay away from CUDA, since I don't like the idea that one company should own a framework in that manner.
When I get to implementing FEM/CFD solvers, I'm definitely going to look into PyViennaCL. I hope that OpenCL 1.1 will suffice, since Nvidia won't support 1.2 (rough sketch of what I mean below).
What happens if you download and install the drivers directly from the Nvidia website?
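Something like this plain PyOpenCL vector add is the OpenCL 1.1-level functionality I have in mind (a generic sketch only, not PyViennaCL's actual API):

```python
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel uses nothing beyond OpenCL 1.1 features.
prg = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)
```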
Originally posted by Kemosabe:
And before you keep telling me that nvidia performs better: it might only be relevant in a one-man-show freelancer company, but it is simply unrealistic to assume that the latest generation of dramatically overpriced hardware is available.
De facto, non-Nvidia was the more efficient and cheaper solution.
And for my private use NVIDIA is already a no-no due to EGLStreams.
-
Originally posted by Dedale:
They say that for the additional connectors it is relatively easy, because they can measure the voltage on the card and the current through the connector with a current clamp. They do not explain how they did it for the PCIe slot connector.
But their tone suggests it was not trivial.
-
Originally posted by torsionbar28:
x2, there really is no valid reason for a Linux user to choose nvidia these days.
I've been an ATI/AMD fan since the 8514 Ultra (yeah, stone age)... and even I just cannot ignore the fact that under AMD's stewardship, ATI has been wrecked and Nvidia now owns the show. Let's HOPE AND PRAY that with AMD's newfound market capitalization, they can throw in the several billion dollars of investment that the RTG division desperately needs just to survive. The way it's going now, and with time running out before Intel muscles in like a gorilla, RTG will be a console bit-player within 24 months and on its way to becoming the next Imagination Technologies. I.e.: dead.
Last edited by vegabook; 08 January 2019, 08:06 PM.
-
Originally posted by Dedale:
They say that for the additional connectors it is relatively easy, because they can measure the voltage on the card and the current through the connector with a current clamp. They do not explain how they did it for the PCIe slot connector.
Originally posted by HenryM:
Pretty sure it involves a custom rewired PCI-E riser that runs the power cables / +voltage connections through a hall-effect multimeter, possibly several multimeters.
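If one had such a riser, turning the per-sensor readings into an average draw over a benchmark run would be the easy part; a hypothetical sketch (the sensor-read function is entirely made up):

```python
import time

def read_sensors():
    """Hypothetical stand-in for reading hall-effect sensors on a
    rewired riser; returns (volts, amps) per power input."""
    return {"slot_12v": (12.0, 4.1), "slot_3v3": (3.3, 0.7), "aux_8pin": (12.1, 9.8)}

samples = []
t0 = time.time()
while time.time() - t0 < 10.0:       # sample over a 10-second window
    watts = sum(v * i for v, i in read_sensors().values())
    samples.append(watts)
    time.sleep(0.1)                  # ~10 Hz sampling

avg_w = sum(samples) / len(samples)
energy_wh = avg_w * (10.0 / 3600.0)  # average power x hours
print(f"average draw: {avg_w:.1f} W, energy over run: {energy_wh:.3f} Wh")
```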
-
Originally posted by AndyChow:
The TensorFlow results are impressive. I've been having so many problems with ROCm, I might just get one, just for the compute.
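A quick generic sanity check that TensorFlow (the 1.x API of this era) actually sees the CUDA device before running anything heavy (nothing specific to the Phoronix harness):

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# List every device TensorFlow can see; a working CUDA setup
# should show a device of type 'GPU' alongside the CPU.
for dev in device_lib.list_local_devices():
    print(dev.device_type, dev.name)

# TF 1.x convenience check.
print("GPU available:", tf.test.is_gpu_available())
```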
-
Originally posted by pracedru:
I am working on this:
Yes, and I am using older GLSL shaders (#version 130).
I take it that you are using Red Hat or SUSE then?
At my old work we had two Red Hat clusters and a SUSE cluster.
We tried with Debian as an experiment, but since Ansys didn't support Debian and our IT admin guy wasn't able to convince the management that he would be able to keep that environment stable, they decided to switch away. I guess maybe they didn't like the idea that the company would be dependent on him.
I try to stay away from CUDA, since I don't like the idea that one company should own a framework in that manner.
When I get to implementing FEM/CFD solvers, I'm definitely going to look into PyViennaCL. I hope that OpenCL 1.1 will suffice, since Nvidia won't support 1.2.
What happens if you download and install the drivers directly from the Nvidia website?
So do you use CUDA or not then?
-
Originally posted by phoronix:
Phoronix: NVIDIA GeForce RTX 2060 Linux Performance From Gaming To TensorFlow & Compute
Yesterday NVIDIA kicked off their week at CES by announcing the GeForce RTX 2060, the lowest-cost Turing GPU to date at just $349 USD, yet aiming to deliver around the performance of the previous-generation GeForce GTX 1080. I only received my RTX 2060 yesterday for testing, but I have been putting it through its paces since and have the initial benchmark results to deliver, ranging from OpenGL/Vulkan Linux gaming performance through various interesting GPU compute workloads. Also included in this testing are graphics cards going back to the GeForce GTX 960 Maxwell, for an interesting look at how NVIDIA's Linux GPU performance has evolved.
http://www.phoronix.com/vr.php?view=27373