AMD Launches The Radeon PRO W7500/W7600 RDNA3 GPUs


  • #11
    Why does the title mention W7800 instead of W7600?



    • #12
      Originally posted by CommunityMember View Post

      To this point, AMD has only enabled/supported SR-IOV (MxGPU in AMD terms) with their Radeon Pro V series, Instinct M series, and Radeon Pro S series GPUs (all targeted for the data center/cloud providers), and not with any of their consumer or workstation GPUs.
      They kinda do*... but there are a lot of hoops you have to jump through to get it. AMD just calls it a different name because of NIH, and apparently everything has to be cloud. AMD calls it "remote workstation" or "VDI", and it requires Citrix or Microsoft Remote Desktop Services (not RDP; RDS, which is an Azure/365 thing). Apparently AMD's management thinks the only people who want hardware virtualization are people buying into Microsoft's whole "cloud PC" initiative. Universe forbid anyone actually want on-prem hardware anymore... Which is why Nvidia is cleaning up in the workstation, compute, and data center spaces. AMD can't seem to stop chasing Microsoft marketing unicorns, whereas the compute industry has long since settled on Linux & CUDA (or Windows/CUDA when it comes to visualization workstations, because enterprises like certifications). Edit to add: And Nvidia workstations, because they have pass-through support without the additional requirements.

      *look down at the specs under "additional features" for "remote workstation". NIH naming is annoying. They're using virtual machine technology as if it were a security layer rather than a resource utilization multiplier, the same as Microsoft has been doing (and failing, because virtualization as implemented on x86 systems isn't about security, it's about hardware resource economization; cloud providers and customers want density at the cost of security, and they're paying the price for that by suffering through the problems inherent in that kind of shared environment).
      Last edited by stormcrow; 03 August 2023, 04:14 PM.
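
      For what it's worth, checking whether a card actually exposes SR-IOV on Linux is straightforward: an SR-IOV-capable PCIe device has sriov_totalvfs/sriov_numvfs attributes in sysfs. A minimal sketch, assuming an illustrative PCI address (substitute whatever lspci reports for your GPU):

          # Minimal sketch: probe a PCIe device for SR-IOV support via Linux sysfs.
          # The PCI address below is only an illustrative placeholder.
          from pathlib import Path

          def sriov_info(pci_addr: str = "0000:03:00.0") -> None:
              dev = Path("/sys/bus/pci/devices") / pci_addr
              total = dev / "sriov_totalvfs"   # present only when the device/driver expose SR-IOV
              numvfs = dev / "sriov_numvfs"    # virtual functions currently enabled

              if not total.exists():
                  print(f"{pci_addr}: no SR-IOV capability exposed")
                  return
              print(f"{pci_addr}: up to {total.read_text().strip()} VFs, "
                    f"{numvfs.read_text().strip()} currently enabled")

          if __name__ == "__main__":
              sriov_info()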



      • #13
        Show me a benchmark running inference or training on AI models. Or just linear regression on a dataset. From what I've seen, AMD is orders of magnitude slower.

        Oh, and also most model training needs a lot of work to, well, work. If you have a TensorFlow Lite model, it might work out of the box; otherwise you're debugging to the point where you need to redefine and rewrite everything from scratch.
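
        For the "just linear regression" case, here's a rough sketch of that kind of timing in PyTorch (sizes are arbitrary assumptions; ROCm builds of PyTorch expose the GPU through the same torch.cuda API, so the same script runs on either vendor's card):

            # Rough sketch: time a closed-form linear regression fit on the GPU.
            # Data sizes are arbitrary; this is not a rigorous benchmark.
            import time
            import torch

            device = "cuda" if torch.cuda.is_available() else "cpu"  # ROCm builds also report "cuda"
            n, d = 200_000, 256
            X = torch.randn(n, d, device=device)
            true_w = torch.randn(d, 1, device=device)
            y = X @ true_w + 0.01 * torch.randn(n, 1, device=device)

            if device == "cuda":
                torch.cuda.synchronize()   # make sure data generation has finished
            start = time.perf_counter()
            # Normal equations: w = (X^T X)^{-1} X^T y
            w = torch.linalg.solve(X.T @ X, X.T @ y)
            if device == "cuda":
                torch.cuda.synchronize()   # wait for the solve before stopping the clock
            elapsed = time.perf_counter() - start

            print(f"device={device}  fit took {elapsed*1000:.1f} ms  "
                  f"max |w - true_w| = {(w - true_w).abs().max().item():.4f}")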

