NVIDIA RTX 6000 Ada Generation vs. Radeon PRO Performance On Ubuntu Linux 24.04 LTS

  • #11
ROCm 6.2 with the Radeon PRO W7000-series graphics cards was crashing when running the Folding@Home benchmark, while the NVIDIA GPUs on their OpenCL driver stack ran fine.
    Does rusticl work with F@H?
    ## VGA ##
    AMD: X1950XTX, HD3870, HD5870
    Intel: GMA45, HD3000 (Core i5 2500K)
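For anyone wanting to answer the rusticl question themselves: Mesa's rusticl OpenCL driver is opted into per hardware driver via the `RUSTICL_ENABLE` environment variable (documented in Mesa). A minimal sketch for checking whether the rusticl platform shows up before pointing Folding@Home at it, assuming `clinfo` is installed:

```python
import os
import shutil
import subprocess

# Opt into Mesa's rusticl OpenCL driver for radeonsi (GCN/RDNA) GPUs
os.environ["RUSTICL_ENABLE"] = "radeonsi"

# List OpenCL platforms; rusticl should appear as one of them
if shutil.which("clinfo"):
    subprocess.run(["clinfo", "-l"], env=os.environ)
else:
    print("clinfo not installed; install it to inspect OpenCL platforms")
```

If rusticl shows up in the platform list, F@H should at least be able to enumerate it; whether the cores run stably is exactly what would need testing.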



    • #12
That's what we've all been saying for so long: in the FP dual-issue pipeline, the second issue doesn't work at all. The GPU delivers something like 30 TFLOPS + 20 TIOPS on the fused slow path; the fast path is deactivated due to hardware errors. People can go to court if they want, or ask for a refund.



      • #13
        Originally posted by zexelon View Post
        Welp, I officially feel good about today

Ordered one of these in a new server to run AI workloads. Yes, it cost an arm and a leg... but so does a Ferrari... and to be honest this will generate much more revenue than owning a Ferrari ever will!

        I have looked longingly at AMD to provide any sort of competition in the GPGPU space but alas they are still quite a ways behind... it took them decades to become competitive and eventually become a threat to Intel, maybe they can eventually do the same in the GPGPU space.
The CPU space has been more lucrative and viable for AMD; with CPU and GPU competing for silicon allocation, they made the correct decision to make GPU the red-headed stepchild. The AI craze may change things, but I wouldn't hold my breath that it would allow them to really compete in consumer GPUs. In fact I know it won't; they're going to compete in the mid range at best. Which is fine for consumers, 99% of whom are not going for the 4090 monstrosities. AMD has HPC, the low end, and the price-conscious raster mid range. Intel will keep them honest in those areas. For everything else NVIDIA is simply the solution, unfortunate as that is for open source advocates.

        tl;dr I wouldn't hold my breath for AMD to actually be competitive in GPU beyond low/mid range, unless you count AI/ML as "GPU".



        • #14
Performance per dollar? Those with unlimited budgets: good on ya.
Two of them is a minimum of $14,500.
Better off with 2x A100/40GB, if you can get them on sale?
Have not used AMD for AI, nor Intel; only have access to 2x V100/32GB... barely enough for my use case.



          • #15
            Originally posted by geerge View Post

The CPU space has been more lucrative and viable for AMD; with CPU and GPU competing for silicon allocation, they made the correct decision to make GPU the red-headed stepchild. The AI craze may change things, but I wouldn't hold my breath that it would allow them to really compete in consumer GPUs. In fact I know it won't; they're going to compete in the mid range at best. Which is fine for consumers, 99% of whom are not going for the 4090 monstrosities. AMD has HPC, the low end, and the price-conscious raster mid range. Intel will keep them honest in those areas. For everything else NVIDIA is simply the solution, unfortunate as that is for open source advocates.

            tl;dr I wouldn't hold my breath for AMD to actually be competitive in GPU beyond low/mid range, unless you count AI/ML as "GPU".
            Yeah, I think I understand what you are saying and pretty much 100% agree. AMD has to manage their resources and manufacturing carefully. They pulled out a winner in the Zen arch and they are basically doubling down on that against Intel rather than fighting a "two front" war against Intel and Nvidia.

AMD is not going anywhere... and there is a history (back in the ATI days) of that GPU team pulling out a stunning victory over NVIDIA on occasion, looking back to the ATI Radeon 9700 series. If it was done once, it could happen again... one can always dream.



            • #16
              Originally posted by piotrj3 View Post

yes, but also those cards have optimized drivers for certain software. For example, in Siemens NX an NVIDIA GeForce delivers around 1/16th or 1/32nd of the performance of its professional counterpart
              It has more to do with the fact that for years NVIDIA has been crippling the floating point performance of the gaming cards in order to not cannibalize the sales of the pro caliber cards.



              • #17
                Originally posted by edxposed View Post
RTX 6000 Ada is the most beautiful GPU ever made. It's a shame the GeForce series doesn't have a shell like this; each one is designed to satisfy RGB hippies.
                Looks exactly like its older sibling, the RTX 6000 (Ampere).

                Don't disagree it looks nice; all of the Quadro cards look professional. Which fits, of course.



                • #18
                  Originally posted by sophisticles View Post

                  It has more to do with the fact that for years NVIDIA has been crippling the floating point performance of the gaming cards in order to not cannibalize the sales of the pro caliber cards.
That is not exactly true, because in some workloads that performance is not cannibalized at all. For example, if you are brute-forcing passwords on the GPU or doing CUDA rendering in Blender, you get absolutely zero performance penalty; in fact, gaming GeForces will perform better. And if you check the A6000's spec performance in FP16, FP32 and FP64, it is exactly the same as, or very similar to, its GeForce counterpart.

The entire difference is drivers with special "optimizations" for those applications (in fact, a lot of software like Siemens NX is hardcoded to not use the optimizations unless you use an NVIDIA Pro or AMD Pro card). It is literally a big f... you for not spending 4x more money on a pro card.

But some professional applications, like Autodesk Inventor, will absolutely not care. Yes, they will tell you a GeForce is not certified hardware, but it will run absolutely fine.
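The spec point checks out arithmetically: theoretical peak FP32 throughput is just SM count × FP32 cores per SM × 2 FLOPs (one FMA) × clock, so the pro and gaming AD102 boards land in the same ballpark. A quick sketch, using NVIDIA's published SM counts and boost clocks:

```python
def fp32_tflops(sms: int, cores_per_sm: int, boost_ghz: float) -> float:
    """Theoretical peak FP32: one FMA (2 FLOPs) per core per cycle."""
    return sms * cores_per_sm * 2 * boost_ghz / 1000

# Both cards use the AD102 die with 128 FP32 cores per SM
print(f"RTX 6000 Ada: {fp32_tflops(142, 128, 2.505):.1f} TFLOPS")  # ~91.1
print(f"RTX 4090:     {fp32_tflops(128, 128, 2.520):.1f} TFLOPS")  # ~82.6
```

The gaming card is within about 10% of the pro card on raw FP32, which is why compute workloads that bypass the driver gating (hashcat, Blender CUDA) see no pro-card advantage.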



                  • #19
                    Originally posted by piotrj3 View Post
But some professional applications, like Autodesk Inventor, will absolutely not care. Yes, they will tell you a GeForce is not certified hardware, but it will run absolutely fine.
And here comes the reason why pro cards exist: consumer gaming GPUs are not made to run at the same sustained performance as pro cards. As NVIDIA said (from "NVIDIA 'CUDA – Force P2 State' Feature Performance Analysis (Off vs. On)", 15 games benchmarked using an RTX 3080):

                    […] Basically, we added this p-state because running at max memory clocks for some CUDA applications can cause memory errors when running HUGE datasets. Think DL apps, oil exploration use cases, etc where you are crunching large numbers and it would error out with full memory clocks. These are the types of apps you really shouldn’t be running on GeForce anyway but since there are a lot of folks who do and were running into this issue we created this new mode for them.

                    It’s basically like a poor man’s version of ECC memory. That’s how we described it way back when…

                    […] And if you’re gaming w/CUDA (say for instance using PhysX) it will give you full clocks. So gamers shouldn’t be affected by this mode.
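The P2 behavior described above is easy to observe: `nvidia-smi` reports the current performance state and memory clocks. A small sketch using query fields from the nvidia-smi documentation; it returns None on machines without an NVIDIA driver:

```python
import shutil
import subprocess

def query_pstate():
    """Return pstate and memory clocks via nvidia-smi, or None without an NVIDIA driver."""
    if not shutil.which("nvidia-smi"):
        return None
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=pstate,clocks.mem,clocks.max.mem", "--format=csv"],
        capture_output=True, text=True,
    )
    return out.stdout.strip()

print(query_pstate())
```

Run it while a CUDA workload is active on a GeForce and the reported memory clock should sit below the maximum, matching the P2 throttling NVIDIA describes.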



                    • #20
                      Any chance we can see those Blender tests run on the W7xxx using SCALE or ZLUDA? I recall seeing some tests where CUDA over a translation layer ended up giving better performance than native Radeon HIP.
