NVIDIA GeForce RTX 2060 Linux Performance From Gaming To TensorFlow & Compute


  • torsionbar28
    replied
    Originally posted by debianxfce View Post

    https://www.pcgamesn.com/amd/amd-rad...nd-performance

    AMD is confident in its ability to take on the similarly priced Nvidia RTX 2080, and therefore the GTX 1080 Ti too.

    The Sapphire Radeon VII's official product page has gone live, revealing for the first time the memory speed and power consumption of the AMD Radeon VII 7nm graphics card.


    The Sapphire Radeon VII will be available for $699 starting on February 7.
    Yes, the new VII is priced the same as the RTX 2080, and I truly hope they reach or exceed RTX 2080 performance with it. But at a 300 W TDP compared to the 225 W of the RTX 2080, it is clearly not a one-for-one competitor. My point has always been that AMD's current desktop lineup is derived from their console chips, as consoles are AMD's current focus. That probably won't change for another two years.


  • torsionbar28
    replied
    Originally posted by debianxfce View Post
    The Fnatic team gets new gaming PCs from AMD with the 3rd gen Ryzen CPU and the Radeon VII GPU card this year. Source: CES 2019.
    Great, but that's e-sports, which doesn't require a monster GPU. When I said "high end" I was referring to an RTX 2080 competitor.


  • torsionbar28
    replied
    Originally posted by vegabook View Post

    Hilarious! The entire article gives you 9 pages of valid reasons to do exactly that. Spankin' AMD's finest with their budget card basically on all games, and an embarrassing absence of red bars on the machine learning charts altogether!

    I've been an ATI/AMD fan since the 8514 Ultra (yeah, stone age)... and even I just cannot ignore the fact that under AMD's stewardship, ATI has been wrecked and Nvidia now owns the show. Let's HOPE AND PRAY that with AMD's newfound market capitalization, they can throw the several billions of dollars of investment in that the RTG division desperately needs, just to survive. The way it's going now, and with time running out before Intel muscles in like a gorilla, RTG will be a console bit-player within 24 months and on the way to becoming the next Imagination Technologies. IE: dead.
    Wow, you don't keep up with the GPU market, or AMD's strategy, at all, do you? How sad.

    AMD's strategy today is targeting the lucrative console market. And they beat Intel and NVIDIA every time. AMD has the PS4 and the Xbox One, and also some Chinese consoles not sold in the US. AMD wins consoles. AMD's peecee games strategy today is to win the middle ground, the price/performance segment, and as you can see in any Phoronix benchmarks, AMD crushes the "performance per dollar" charts. AMD wins again. And for those who care about open source, AMD's open-source drivers perform on par with their proprietary ones, while NVIDIA gives Linux users the middle finger. AMD wins again.

    If you had watched any of AMD's presentations, or listened to their earnings calls, you would know that AMD is not targeting the high-end PC gaming market, not until ~2021 anyway.

    Maybe next time do a little research, eh?


  • mrwhat
    replied
    Originally posted by Michael View Post
    The RTX 2060 benchmarks that you published are missing FP32 tests.
    You mostly included FP16.

    So will you also bench the TensorFlow training with FP32?
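    In the meantime, a rough way to gauge the FP32-vs-FP16 gap is a plain matrix-multiply timing loop. The sketch below is only a ballpark check, assuming a TensorFlow 1.x install with GPU support; it is not the benchmark setup used in the article.

    ```python
    # Rough FP16 vs FP32 GEMM throughput check (illustrative only, not the
    # article's TensorFlow benchmark). Assumes TensorFlow 1.x with GPU support.
    import time
    import tensorflow as tf

    def time_matmul(dtype, n=4096, iters=50):
        a = tf.random_normal((n, n), dtype=dtype)
        b = tf.random_normal((n, n), dtype=dtype)
        c = tf.matmul(a, b)
        with tf.Session() as sess:
            sess.run(c)                               # warm-up
            start = time.time()
            for _ in range(iters):
                sess.run(c)
            elapsed = time.time() - start
        return 2.0 * n ** 3 * iters / elapsed / 1e12  # approximate TFLOPS

    for dtype in (tf.float32, tf.float16):
        print(dtype.name, "%.2f TFLOPS (approx.)" % time_matmul(dtype))
    ```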


  • mrwhat
    replied
    Originally posted by phoronix View Post
    Phoronix: NVIDIA GeForce RTX 2060 Linux Performance From Gaming To TensorFlow & Compute

    Yesterday NVIDIA kicked off their week at CES by announcing the GeForce RTX 2060, the lowest-cost Turing GPU to date at just $349 USD, one that aims to deliver around the performance of the previous-generation GeForce GTX 1080. I only received my RTX 2060 yesterday for testing but have been putting it through its paces since and have the initial benchmark results to deliver, ranging from the OpenGL/Vulkan Linux gaming performance through various interesting GPU compute workloads. Also, with this testing there are graphics cards tested going back to the GeForce GTX 960 Maxwell for an interesting look at how the NVIDIA Linux GPU performance has evolved.

    http://www.phoronix.com/vr.php?view=27373
    The tests are missing VGG16 FP32 & ResNet FP32.


  • tuke81
    replied
    Originally posted by pracedru View Post

    I am working on this:

    Yes, and I am using older GLSL shaders (#version 130)

    I take it that you are using Red Hat or SUSE then?
    At my old work we had two Red Hat clusters and a SUSE cluster.
    We tried Debian as an experiment, but since Ansys didn't support Debian, and our IT admin guy wasn't able to convince management that he could keep that environment stable, they decided to switch away. I guess maybe they didn't like the idea that the company would be dependent on him.
    I try to stay away from CUDA since I don't like the idea that one company should own a framework in that manner.
    When I get to implementing FEM/CFD solvers, I'm definitely going to look into PyViennaCL. I hope that OpenCL 1.1 will suffice, since Nvidia won't support 1.2.

    What happens if you download and install the drivers directly from the Nvidia website?

    So do you use CUDA or not then?
    Uhm, Nvidia has full support for OpenCL 1.2; it came very late, but every Nvidia card since Kepler supports OpenCL 1.2.
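    For what it's worth, this is easy to check: if pyopencl happens to be installed, the OpenCL version string the driver reports can be printed per device, as in the sketch below (assuming at least one OpenCL ICD is set up on the system).

    ```python
    # Quick check of what OpenCL version each platform/device actually reports.
    # Assumes pyopencl is installed and at least one OpenCL ICD is present.
    import pyopencl as cl

    for platform in cl.get_platforms():
        print(platform.name, "-", platform.version)
        for device in platform.get_devices():
            # e.g. "OpenCL 1.2 CUDA" on reasonably recent NVIDIA drivers
            print("   ", device.name, "|", device.version, "|", device.opencl_c_version)
    ```

    On drivers from the last couple of years the device version string should read something like "OpenCL 1.2 CUDA" for Kepler and newer cards.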


  • Jabberwocky
    replied
    Originally posted by AndyChow View Post
    The TensorFlow results are impressive. I've been having so many problems with ROCm, I might just get one, just for the compute.
    I would agree with you and buy something like an RTX 2060 if I needed compute for work or for expensive studies. At the moment it's just an interest, so I'm left hoping that the ROCm problems will be solved this year.


  • utrrrongeeb
    replied
    Originally posted by Dedale View Post
    They say that for the additional connectors it is relatively easy because they can measure the voltage on the card and the amperage of the connector via a current clamp. How they did it for the PCIe connector, they do not explain.
    Originally posted by HenryM View Post
    Pretty sure it involves a custom rewired PCI-E riser that runs the power cables / +voltage connections through a hall-effect multimeter, possibly several multimeters.
    A modern GPU's power consumption varies at high frequency, up to 100 kHz, with an almost TDP-sized range. Unless you've confirmed your meter can handle it, you might need a MHz-speed quad-channel digital oscilloscope. Tom's Hardware DE wrote about this in 2014, and they have a picture of the PCIe riser setup. (Also, some IR photos of an R9 295X2 with the VRMs above 100°C…)
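    To illustrate why the sampling rate matters, here is a toy sketch with entirely made-up numbers (not data from any real card): a synthetic power trace swinging over a near-TDP range at ~100 kHz, sampled fast and slow.

    ```python
    # Toy illustration of why slow sampling misses GPU power transients.
    # All numbers are synthetic; this is not measured data from any card.
    import numpy as np

    fs_fast = 1_000_000                          # 1 MS/s "oscilloscope" sampling
    t = np.arange(0, 0.01, 1.0 / fs_fast)        # 10 ms window
    # ~180 W baseline with a ~100 kHz, near-TDP-sized swing plus some noise
    power = 180 + 100 * np.sin(2 * np.pi * 100_000 * t) + np.random.normal(0, 5, t.size)

    slow = power[::1000]                         # what a ~1 kS/s meter would see

    print("fast sampling: mean %.0f W, peak %.0f W" % (power.mean(), power.max()))
    print("slow sampling: mean %.0f W, peak %.0f W" % (slow.mean(), slow.max()))
    ```

    The slow trace still lands near the true average, but the short peaks that matter for PSU headroom and VRM sizing simply are not there.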


  • vegabook
    replied
    Originally posted by torsionbar28 View Post

    x2, there really is no valid reason for a Linux user to choose nvidia these days.
    Hilarious! The entire article gives you 9 pages of valid reasons to do exactly that. Spankin' AMD's finest with their budget card basically on all games, and an embarrassing absence of red bars on the machine learning charts altogether!

    I've been an ATI/AMD fan since the 8514 Ultra (yeah, stone age)... and even I just cannot ignore the fact that under AMD's stewardship, ATI has been wrecked and Nvidia now owns the show. Let's HOPE AND PRAY that with AMD's newfound market capitalization, they can throw the several billions of dollars of investment in that the RTG division desperately needs, just to survive. The way it's going now, and with time running out before Intel muscles in like a gorilla, RTG will be a console bit-player within 24 months and on the way to becoming the next Imagination Technologies. IE: dead.
    Last edited by vegabook; 08 January 2019, 08:06 PM.


  • HenryM
    replied
    Originally posted by Dedale View Post
    They say that for the additional connectors it is relatively easy because they can measure the voltage on the card and the amperage of the connector via a current clamp. How they did it for the PCIe connector, they do not explain.

    But their tone suggests it was not trivial.
    Pretty sure it involves a custom rewired PCI-E riser that runs the power cables / +voltage connections through a hall-effect multimeter, possibly several multimeters.
