AMD Announces The Radeon Pro VII

  • #41
    Originally posted by AdrianBc View Post

    On the other hand, this Radeon Pro Vega has the best performance per dollar of anything you can buy now, and its performance per watt is much better than that of anything that can be bought at a non-huge price.
    To give some concrete numbers (for 64-bit floating point):

    Radeon Pro VII: 26.0 Gflops/W, 3.42 Gflops/USD
    Titan V (GV100): 27.6 Gflops/W, 2.30 Gflops/USD
    Ryzen 9 3900X: 6.95 Gflops/W, 0.91 Gflops/USD (at launch; it is cheaper now, so better performance per dollar)
    Tesla V100S: 32.8 Gflops/W, 0.82 Gflops/USD (pathetic performance per dollar)
    Xeon W-2295 (AVX-512): 8.7 Gflops/W, 0.81 Gflops/USD
    Threadripper 3990X: 10.6 Gflops/W, 0.65 Gflops/USD
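
    The ratios above follow directly from each part's peak FP64 throughput, board power, and launch price. A minimal sanity-check sketch in Python; the spec figures used here (e.g. ~6.5 TFLOPS FP64, 250 W and $1899 for the Radeon Pro VII) are approximate published numbers and assumptions on my part, not taken from this thread:

    # Recompute Gflops/W and Gflops/USD from assumed specs:
    # (peak FP64 Gflops, board power in W, launch price in USD)
    parts = {
        "Radeon Pro VII":  (6500, 250, 1899),
        "Titan V (GV100)": (6900, 250, 2999),
        "Tesla V100S":     (8200, 250, 9999),  # street price varies widely
    }

    for name, (gflops, watts, price) in parts.items():
        print(f"{name:17s} {gflops / watts:5.1f} Gflops/W   {gflops / price:4.2f} Gflops/USD")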







    • #42
      First Vega with PCIe 4?



      • #43
        Originally posted by moriel5 View Post
        Can someone give me a recommendation (for my sister) as to how much VRAM is recommended for professional video editing with DaVinci Resolve Pro on Windows 10?

        My country does not import any AMD Pro or Vega-based cards at all, with the cheapest 16GB NVidia option being the equivalent of ~$2200 (a Radeon VII, or a used Vega Frontier on Amazon including shipping from the US, being the equivalent of ~$900). Also, given the issues outlined here with GCN/RDNA, would they still be recommended for video editing, or does that suffer the same way as gaming does (my sister does not game)?

        If so, the next best option would be an 11GB GTX 1080 Ti for the equivalent of ~$750.
        As other people suggested, try asking on specific forums, like these:

        Q&A for engineers, producers, editors and enthusiasts spanning the fields of video and media creation





        • #44
          Originally posted by ernstp View Post
          First Vega with PCIe 4?
          I believe the Instinct MI-50 and MI-60 cards were first, and this is "next".
          Last edited by bridgman; 14 May 2020, 04:31 AM.


          • #45
            Originally posted by bridgman View Post

            Do you mean Crossfire or something different ?

            The links are general purpose - they just give each GPU the ability to access HBM on a linked GPU with very high performance - although I expect they will be used for compute at least as much as for graphics.
            No, I mean something like DirectGMA or SR-IOV, something that lets you abstract GPUs so it doesn't have to be 1 GPU per display. GPU passthrough for multiple VMs has become nicer, but the GPUs are still physically stuck to displays, unfortunately.



            • #46
              Originally posted by coder View Post
              Radeon VII:
              • is 300 W (this is 250)
              • runs at higher clocks (boosts to 1750 MHz instead of 1700 MHz)
              • has only PCIe 3.0 (instead of 4.0)
              • has half of the fp64 performance
              • has no over-the-top Infinity Link connector for direct card-to-card communication.
              • costs $700 (this lists for $1900)
              To my knowledge it's the first card with both HBM and ECC memory protection, which would have more impact if it were SR-IOV compatible.
              Last edited by bogdanbiv; 14 May 2020, 10:10 AM.



              • #47
                Originally posted by make_adobe_on_Linux! View Post
                How will it allow for multi-GPU (using passthrough) -> single display?
                Originally posted by make_adobe_on_Linux! View Post
                No, I mean something like DirectGMA or SR-IOV, something that lets you abstract GPUs so it doesn't have to be 1 GPU per display. GPU passthrough for multiple VMs has become nicer, but the GPUs are still physically stuck to displays, unfortunately.
                OK, so maybe you are talking about associating each display connector with a fraction of a GPU's processing power, i.e. splitting a single board into multiple virtual GPUs and then simulating physical GPUs for each of those virtual GPUs so that they could be independently passed through to a host OS?
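
                For reference, the generic Linux mechanism for that kind of split is SR-IOV: the driver carves the physical function (PF) into virtual functions (VFs), each of which can be bound to vfio-pci and handed to a guest VM. Below is a minimal sketch of that flow in Python; the PCI address is hypothetical, and whether the Radeon Pro VII driver actually exposes SR-IOV is not confirmed here (AMD's MxGPU server parts do).

                # Sketch only: generic Linux SR-IOV enablement via sysfs.
                # Assumes a driver that supports SR-IOV; the PF address below is hypothetical.
                from pathlib import Path

                PF = Path("/sys/bus/pci/devices/0000:03:00.0")  # hypothetical physical function

                def enable_vfs(num: int) -> list:
                    """Create `num` virtual functions and return their PCI addresses."""
                    (PF / "sriov_numvfs").write_text(str(num))
                    return [link.resolve().name for link in sorted(PF.glob("virtfn*"))]

                def bind_to_vfio(vf_addr: str) -> None:
                    """Rebind one virtual function to vfio-pci so a VM can use it."""
                    vf = Path("/sys/bus/pci/devices") / vf_addr
                    (vf / "driver_override").write_text("vfio-pci")
                    if (vf / "driver").exists():
                        (vf / "driver" / "unbind").write_text(vf_addr)
                    Path("/sys/bus/pci/drivers_probe").write_text(vf_addr)

                for addr in enable_vfs(2):   # split the board into two virtual GPUs
                    bind_to_vfio(addr)       # each can then be passed to its own guest

                Each VF then shows up as its own PCI device that QEMU/libvirt can pass through, while the host retains control of the physical function.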


                • #48
                  Originally posted by bridgman View Post



                  OK, so maybe you are talking about associating each display connector with a fraction of a GPU's processing power, i.e. splitting a single board into multiple virtual GPUs and then simulating physical GPUs for each of those virtual GPUs so that they could be independently passed through to a host OS?
                  Nope. Let's take this scenario: I get the latest dual-GPU AMD card. I dedicate one GPU to the host, then I dedicate the second GPU to a guest VM. I cannot see both on the same display unless I want to mess with some other layers via software emulation.



                  • #49
                    Originally posted by make_adobe_on_Linux! View Post
                    Nope. Let's take this scenario: I get the latest dual-GPU AMD card. I dedicate one GPU to the host, then I dedicate the second GPU to a guest VM. I cannot see both on the same display unless I want to mess with some other layers via software emulation.
                    Not sure we're converging yet. The only dual-GPU cards we make today are (a) the Vega II Duo board for Mac Pro and (b) the dual-Vega V340 server card which has no display connectors.

                    Are you talking about configuring two of the new Radeon Pro VII cards ?


                    • #50
                      Originally posted by bridgman View Post

                      Not sure we're converging yet. The only dual-GPU cards we make today are (a) the Vega II Duo board for Mac Pro and (b) the dual-Vega V340 server card which has no display connectors.

                      Are you talking about configuring two of the new Radeon Pro VII cards ?
                      Right. So I'm saying that for all of these cases I want this ability:
                      -- for the Vega II Duo
                      -- when using multiple cards (and one display)
                      -- for future multi-GPU cards
