AMD Announces The Radeon Pro VII
  • #31
    Originally posted by zxy_thf View Post
    A few differences I noticed:
    1. TDP: 295 W -> 250 W, likely due to the improvements from TSMC
    2. FP64: 3.5 TFLOPS -> 6.5 TFLOPS, although I'm not sure FP64 is still a thing for GPU computing...
    3. Infinity Fabric for GPU P2P.
    Kinda looks like a transitional product to eventually become CDNA. I'm actually surprised that they have included I/O ports.



    • #32
      Originally posted by zxy_thf View Post
      A few differences I noticed:
      1. TDP: 295 W -> 250 W, likely due to the improvements from TSMC
      2. FP64: 3.5 TFLOPS -> 6.5 TFLOPS, although I'm not sure FP64 is still a thing for GPU computing...
      3. Infinity Fabric for GPU P2P.
      FP64 is very definitely still a thing. In my current image-processing application I too thought it wouldn't be, but I was wrong: a key iterative algorithm converges much more uniformly in double precision than in single. Color me surprised. My parallelization is done via TBB, with optional CUDA kernels and NVBLAS. I've a consumer-grade GTX 960 on which NVBLAS speeds up FP32 GEMM considerably -- I'm recalling about 40% -- and the CUDA kernels maybe another 5%. I've a *lot* more profiling to do before investing more development time in CUDA or OpenACC, but the GEMM results alone prompt me to seriously consider an FP64 GPU should I ever upgrade my hardware and take this thing to production.

      OTOH, there's a limit to how far parallelization can take this sort of algorithm. CPUs are considerably easier to program, and I might be better off investing in more CPU cores.
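      The convergence point above can be illustrated with a toy experiment -- a NumPy sketch, not the poster's actual TBB/CUDA code: the same Jacobi iteration, run in float32 and float64, bottoms out at very different residual floors.

```python
import numpy as np

def jacobi_residual(A, b, dtype, iters=200):
    """Run plain Jacobi iteration x <- D^-1 (b - R x) and
    return the final residual norm ||A x - b||."""
    A = A.astype(dtype)
    b = b.astype(dtype)
    d = np.diag(A)            # diagonal of A
    R = A - np.diag(d)        # off-diagonal part
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - R @ x) / d
    return float(np.linalg.norm(A @ x - b))

# A small, strongly diagonally dominant system, so Jacobi is guaranteed
# to converge; only the arithmetic precision differs between the runs.
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

r32 = jacobi_residual(A, b, np.float32)
r64 = jacobi_residual(A, b, np.float64)
print(f"float32 residual floor: {r32:.1e}")
print(f"float64 residual floor: {r64:.1e}")
```

      The float64 run lands many orders of magnitude lower; any stopping criterion tighter than the float32 rounding floor simply never triggers in single precision.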



      • #33
        Originally posted by coder View Post
        Try asking on a forum specific to that product, or at least focused on professional video editing.
        Thanks, that would probably be the right thing to do, though I posted here because I prefer information of a more engineering kind (I like to learn how things work, after all).



        • #34
          Originally posted by bridgman View Post
          [---]
          Your exclusive contract with Apple over the Vega 20 chips with the full 4096 shaders is really bad.
          I can maybe understand if you don't allow the full 4096 on a 700€ Radeon VII desktop card.

          But limiting your 1900-dollar VII / 7 Pro to 3840 shader cores is pure madness.

          Just tell me: how much does Apple pay AMD to castrate the 4096-shader chips down to 3840 shader cores?

          Is there a date when the contract runs out? Can we have a Radeon VII and Radeon VII Pro with the full 4096 shader cores after the exclusive contract with Apple expires?

          AMD really should release the 4096-shader version after the contract ends.



          • #35
            Originally posted by ms178 View Post
            The Radeon VII was announced in January 2019 and the first two items are due to improved yields on 7nm
            Do you know that, or are you just guessing? Because I think you've got it backwards: the Radeon VII is the one which clocks higher, which I assume is the main reason it uses more power.

            Keep in mind that Vega 20 was already on the market for nearly 6 months (i.e. in the form of the Instinct series), by the time Radeon VII shipped. And if yields are now so good, why is this new card still limited to 60 CU? And why did AMD kill off the MI60, which had 64 CU?



            • #36
              Originally posted by seesturm View Post
              No, PRO cards don't support SR-IOV.
              s/don't support/can't has support/



              • #37
                Originally posted by wizard69 View Post
                Kinda looks like a transitional product to eventually become CDNA. I'm actually surprised that they have included I/O ports.
                The headless, server version already shipped back in November 2018. It's called MI50 or MI60 (the slightly higher-spec version). Since then, AMD replaced the MI60 with a better version of the MI50 (32 GB of HBM2).

                So, the whole point of this card is to be a workstation graphics card. That's the product that was missing in their stack.

                According to Michael's coverage of Arcturus, future GCN chips will lack graphics blocks, meaning this is probably the last AMD card you can buy that has both full fp64 and full 3D acceleration.



                • #38
                  Originally posted by pipe13 View Post
                  CUDA kernels ... the GEMM results alone prompt me to seriously consider an FP64 GPU should I ever upgrade my hardware and take this thing to production.
                  Good luck with that. The cheapest Nvidia card with full fp64 support is the $3k Titan V. Before that, you had to pay about $9k for a Quadro GP100. And I could imagine the Titan V not being replaced by any other HPC GPU, meaning you'd be back to facing a near-$10k price tag.

                  Originally posted by pipe13 View Post
                  CPUs are considerably easier to program, and I might be better off investing in more CPU cores.
                  Still, way less compute power, though. You can drop $7k on an EPYC 7742 that nets you about half as many fp64 TFLOPS as this $1900 graphics card, and about 1/8th the memory bandwidth.

                  You could instead look at ThreadRipper 3990X, which is only about $4k, but then your memory bandwidth drops by another factor of 2.
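                  That "about half" is easy to sanity-check with a back-of-envelope peak-FLOPS calculation. The EPYC clock and per-cycle figures below are my assumptions, not numbers from this thread:

```python
# Peak fp64 throughput, back of the envelope.
# Assumed (not from the thread): an EPYC 7742 sustaining ~3.2 GHz on
# all 64 cores, with AVX2 delivering 2 x 4-wide FMA = 16 fp64
# FLOPs per cycle per core.
cores = 64
flops_per_cycle = 16
ghz = 3.2
epyc_tflops = cores * flops_per_cycle * ghz / 1000.0

gpu_tflops = 6.5  # Radeon Pro VII fp64 peak, per AMD's announcement

print(f"EPYC 7742 peak fp64: ~{epyc_tflops:.1f} TFLOPS")
print(f"CPU/GPU ratio: {epyc_tflops / gpu_tflops:.2f}")
```

                  Under those assumptions the CPU peaks at roughly 3.3 TFLOPS, i.e. about half the GPU's 6.5.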



                  • #39
                    Originally posted by Qaridarium View Post
                    Your exclusive contract with Apple over the Vega 20 chips with the full 4096 shaders is really bad.
                    Do you know that such a contract exists, or are you just fishing? Either way, I doubt he knows, and he probably wouldn't confirm such a thing if he did.

                    My guess is that yields on Vega 20 are low enough that supplying Apple with 64 CU chips eats up too many of the top-binned chips, simply leaving too few for the rest of the market. But guess what? Apple is charging about $1k more for the Radeon Pro Vega II than the list price of this card, so for paying top dollar, Apple gets the top chips.

                    Originally posted by Qaridarium View Post
                    I can maybe understand if you don't allow the full 4096 on a 700€ Radeon VII desktop card. But limiting your 1900-dollar VII / 7 Pro to 3840 shader cores is pure madness.
                    Pure madness? You're only losing 1/16th of the total chip, and you can make up for some of that with a little more clock speed, using the power not being consumed by those CUs.

                    Do you know what Nvidia charges for comparable Quadro cards? Heck, do you know what they charge for the Titan V, which has 1/4 of its memory and bandwidth lopped off?

                    Some folks on here are just ready to scream "bloody murder" about the smallest things...
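                    For scale, the "pure madness" amounts to this fraction of the chip (assuming GCN's usual 64 stream processors per compute unit):

```python
full_shaders = 4096
cut_shaders = 3840
shaders_per_cu = 64  # GCN compute unit width

full_cus = full_shaders // shaders_per_cu
cut_cus = cut_shaders // shaders_per_cu
disabled = 1 - cut_shaders / full_shaders

print(f"full chip: {full_cus} CUs, this card: {cut_cus} CUs")
print(f"shaders disabled: {disabled:.2%}")  # 1/16 of the chip
```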



                    • #40
                      Originally posted by schmidtbag View Post
                      You could has 6 a while ago. Some of the FirePro W9000 series has 6.


                      Speaking of which, is that the only difference between this and the Radeon VII? Also... wasn't the Radeon VII basically just a binned workstation GPU? If so, does that mean this is a bin of a bin?

                      The most important differences are that this Pro version has almost double the speed for 64-bit floating-point computations and that it supports ECC memory.


                      The doubled FP64 speed is very significant: it brings both the total performance and the performance per watt to almost the same values as the NVIDIA Titan V with its Volta GV100.

                      However, that NVIDIA card is no longer available; it was much more expensive and was reputed to be unreliable.

                      The professional NVIDIA Volta cards have only slightly better performance than this Radeon Pro, but their price is many times higher, making their performance per dollar much lower than even that of AMD Epyc or Intel Xeon CPUs.

                      Therefore, for 64-bit computations, it makes sense to use NVIDIA cards only if price does not matter but low power consumption is essential.

                      On the other hand, this Radeon Pro has the best performance per dollar of anything you can buy now, and its performance per watt is much better than that of anything available short of a huge price.
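                      Those perf-per-watt and perf-per-dollar claims are easy to check on paper. The Titan V figures below (7.45 fp64 TFLOPS, 250 W, $2999) are commonly quoted specs I'm assuming, not data from this thread:

```python
# (peak fp64 TFLOPS, board power in W, launch price in USD)
cards = {
    "Radeon Pro VII": (6.5, 250, 1899),
    "Titan V": (7.45, 250, 2999),
}

for name, (tflops, watts, usd) in cards.items():
    gflops_per_watt = tflops * 1000 / watts
    gflops_per_usd = tflops * 1000 / usd
    print(f"{name}: {gflops_per_watt:.0f} GFLOPS/W, "
          f"{gflops_per_usd:.2f} GFLOPS/$")
```

                      Under those assumptions the two cards are close on GFLOPS per watt, while the Radeon Pro VII is clearly ahead on GFLOPS per dollar.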