AMD Announces The Radeon Pro VII


  • deusexmachina
    replied
    Originally posted by bridgman

    Do you mean Crossfire or something different?

    The links are general purpose - they just give each GPU the ability to access HBM on a linked GPU with very high performance - although I expect they will be used for compute at least as much as for graphics.
    No, I mean something like DirectGMA or SR-IOV, something that lets you abstract GPUs so it doesn't have to be one GPU per display. GPU passthrough for multiple VMs has become nicer, but the GPUs are still physically tied to displays, unfortunately.

  • bridgman
    replied
    Originally posted by ernstp
    First Vega with PCIe 4?
    I believe the Instinct MI-50 and MI-60 cards were first, and this is "next".

  • Buntolo
    replied
    Originally posted by moriel5
    Can someone give me a recommendation (for my sister) as to how much VRAM is needed for professional video editing with DaVinci Resolve on Windows 10?

    My country does not import any AMD Pro or Vega-based cards at all, with the cheapest 16GB Nvidia option being the equivalent of ~$2200 (a Radeon VII, or a used Vega Frontier on Amazon, including shipping from the US, is the equivalent of ~$900). Also, given the issues outlined here with GCN/RDNA, would they still be recommended for video editing, or does that suffer in the same way as gaming (my sister does not game)?

    If so, the next best option would be an 11GB GTX 1080 Ti for the equivalent of ~$750.
    As other people have suggested, try asking on a specialized forum, like this one:

    Q&A for engineers, producers, editors and enthusiasts spanning the fields of video and media creation



  • ernstp
    replied
    First Vega with PCIe 4?

  • AdrianBc
    replied
    Originally posted by AdrianBc

    On the other hand, this Radeon Pro VII has the best performance per dollar of anything you can buy now, and its performance per watt is much better than that of anything available at a non-exorbitant price.
    To give some concrete numbers (for 64-bit floating point):

    Radeon Pro VII:        26.0 Gflops/W, 3.42 Gflops/USD
    Titan V (GV100):       27.6 Gflops/W, 2.30 Gflops/USD
    Ryzen 9 3900X:         6.95 Gflops/W, 0.91 Gflops/USD (at launch price; it is cheaper now, so better performance per dollar)
    Tesla V100S:           32.8 Gflops/W, 0.82 Gflops/USD (pathetic performance per dollar)
    Xeon W-2295 (AVX-512): 8.7 Gflops/W,  0.81 Gflops/USD
    Threadripper 3990X:    10.6 Gflops/W, 0.65 Gflops/USD
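    For reference, a minimal sketch of how such performance-per-watt and performance-per-dollar figures can be derived; the peak FP64 throughput, board power, and launch price values below are assumptions (approximate launch specs), not numbers taken from this thread:

        # Sketch: deriving Gflops/W and Gflops/USD from published specs.
        # The figures below are assumed approximate launch specs, not thread data.
        cards = {
            # name: (peak FP64 GFLOPS, board power in W, launch price in USD)
            "Radeon Pro VII":  (6500.0, 250.0, 1899.0),
            "Titan V (GV100)": (6900.0, 250.0, 2999.0),
        }

        for name, (gflops, watts, usd) in cards.items():
            print(f"{name}: {gflops / watts:.1f} Gflops/W, {gflops / usd:.2f} Gflops/USD")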


  • AdrianBc
    replied
    Originally posted by schmidtbag
    You could have had 6 a while ago. Some of the FirePro W9000 series cards have 6.


    Speaking of which, is that the only difference between this and the Radeon VII? Also... wasn't the Radeon VII basically just a binned workstation GPU? If so, does that mean this is a bin of a bin?

    The most important differences are that this Pro version is almost twice as fast for 64-bit floating-point computations and that it can use ECC memory.

    The doubled speed is very significant: it brings the total performance, and also the performance per watt, to almost the same values as the NVIDIA Titan V with the Volta GV100.

    However, that NVIDIA card is no longer available; it was much more expensive and it had a reputation for unreliability.

    The professional NVIDIA Volta cards have only slightly better performance than this Radeon Pro, but their prices are many times higher, making their performance per dollar much lower than that of AMD Epyc or Intel Xeon CPUs.

    Therefore, for 64-bit computations, NVIDIA cards only make sense when price does not matter but low power consumption is essential.

    On the other hand, this Radeon Pro VII has the best performance per dollar of anything you can buy now, and its performance per watt is much better than that of anything available at a non-exorbitant price.
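    A rough sketch of where the "almost double" comes from, assuming the commonly quoted Vega 20 peak FP32 throughput and FP64:FP32 rates of 1:4 on the consumer Radeon VII versus 1:2 on the Radeon Pro VII (assumed values, not taken from the post):

        # Sketch: FP64 throughput from FP32 peak and the FP64:FP32 rate (assumed values).
        fp32_peak_tflops = 13.1                          # assumed Vega 20 peak FP32 throughput
        radeon_vii_fp64 = fp32_peak_tflops / 4           # consumer card, 1:4 rate -> ~3.3 TFLOPS
        radeon_pro_vii_fp64 = fp32_peak_tflops / 2       # Pro card, 1:2 rate -> ~6.5 TFLOPS
        print(radeon_pro_vii_fp64 / radeon_vii_fp64)     # 2.0 under this simplification;
                                                         # the real cards clock slightly differently,
                                                         # hence "almost" double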


  • coder
    replied
    Originally posted by Qaridarium
    Your exclusive contract with Apple over the Vega 20 chips with the full 4096 shaders is really bad.
    Do you know that such a contract exists, or are you just fishing? Either way, I doubt he knows, and he probably wouldn't confirm such a thing if he did know about it.

    My guess is that the yield on Vega 20 is low enough that supplying Apple with the 64 CU chips eats up too many of the top-binned dies, simply leaving too few for the rest of the market. But, guess what? Apple is charging about $1k more for the Radeon Pro Vega II than the list price of this card, so for paying top dollar, Apple gets the top chips.

    Originally posted by Qaridarium
    I can maybe understand not allowing the full 4096 on a €700 Radeon VII desktop card, but limiting your $1900 Pro VII to 3840 shader cores is pure madness.
    Pure madness? You're only losing 1/16th of the total chip, and you can make up for some of that with a little more clock speed, using the power not being consumed by those CUs.

    Do you know what Nvidia charges for comparable Quadro cards? Heck, do you know what they charge for the Titan V, which has 1/4 of its memory and bandwidth lopped off?

    Some folks on here are just ready to scream "bloody murder" about the smallest things...
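    For the arithmetic behind the "1/16th" figure, using CU counts as commonly reported for Vega 20 (assumed here, not quoted in the thread):

        # Sketch: fraction of the shader array disabled (assumed CU counts).
        full_die_cus = 64   # full Vega 20 die: 64 CUs x 64 shaders = 4096 shaders
        pro_vii_cus  = 60   # Radeon Pro VII:   60 CUs x 64 shaders = 3840 shaders
        print((full_die_cus - pro_vii_cus) / full_die_cus)   # 0.0625, i.e. 1/16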

  • coder
    replied
    Originally posted by pipe13
    CUDA kernels ... the GEMM results alone prompt me to seriously consider an FP64 GPU should I ever upgrade my hardware and take this thing to production.
    Good luck with that. The cheapest Nvidia card with full fp64 support is the $3k Titan V. Before that, you had to pay about $9k for a Quadro GP100. And I could imagine the Titan V not being replaced by any other prosumer HPC GPU, meaning you'd be back to facing a near-$10k price tag.

    Originally posted by pipe13
    CPUs are considerably easier to program, and I might be better off investing in more CPU cores.
    Still, way less compute power, though. You can drop $7k on an EPYC 7742 that nets you about half as many fp64 TFLOPS as this $1900 graphics card, and about 1/8th the memory bandwidth.

    You could instead look at the Threadripper 3990X, which is only about $4k, but then your memory bandwidth drops by another factor of 2.
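    For a rough sense of where such CPU numbers come from, a minimal sketch of estimating theoretical peak FP64 throughput from core count, SIMD width, FMA units, and clock; the EPYC 7742 parameters below (64 cores, AVX2 with 4 doubles per vector, 2 FMA pipes per core, 2.25 to 3.4 GHz) are assumptions, not figures from the post:

        # Sketch: theoretical peak FP64 throughput of a CPU (assumed parameters).
        def peak_fp64_gflops(cores, simd_doubles, fma_units_per_core, ghz):
            # FLOPs per cycle per core = SIMD lanes * FMA units * 2 (multiply + add)
            return cores * simd_doubles * fma_units_per_core * 2 * ghz

        print(peak_fp64_gflops(64, 4, 2, 2.25))   # ~2300 GFLOPS at base clock
        print(peak_fp64_gflops(64, 4, 2, 3.40))   # ~3500 GFLOPS at peak boost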

  • coder
    replied
    Originally posted by wizard69
    Kinda looks like a transitional product to eventually become CDNA. I'm actually surprised that they have included I/O ports.
    The headless server version already shipped back in November 2018, as the MI50 and the slightly higher-spec MI60. Since then, AMD has replaced the MI60 with a better version of the MI50 (32 GB of HBM2).

    So, the whole point of this card is to be a workstation graphics card. That's the product that was missing in their stack.

    According to Michael's coverage of Arcturus, future GCN chips will lack the graphics blocks, meaning this is probably the last AMD card you can buy that has both full-rate fp64 and full 3D acceleration.

  • pal666
    replied
    Originally posted by seesturm
    No, PRO cards don't support SR-IOV.
    s/don't support/can't has support/
