AMD Announces The Radeon PRO W7800/W7900 Series


  • #21
    Originally posted by nyanmisaka View Post


    So AMD why are you always selling hardware before the software stack is ready? Your competitor provides Day Zero CUDA support for many years.
    This was only an announcement; the cards are not available yet. Maybe we will have ROCm support by the launch date.

    Comment


    • #22
      Originally posted by ms178 View Post
      Thanks for the hint, but I still don't get why I would need a professional card for these apps specifically. What is the benefit of paying 4x the price of a 7900 XTX here? Sure, there are certified drivers and more VRAM, but I could use video editing apps just as well with a consumer card, where neither of these points really makes a difference. There might be other apps where it does matter; maybe the drivers are better optimized for CAD etc., but that would be more of an artificial limitation of the consumer cards than a genuine advantage of these professional cards.
      I'm a casual prosumer and fond of consumer hardware (and I never ended up working at a studio... yet?), so I don't have direct insight, but it seems like a lot of the time the software, driver, and support-period certifications are what matter (to managers/execs/budgeters/studios), quite different from the chaos of gaming drivers.

      These also have ECC and, as AnandTech pointed out, unexpected DisplayPort 2.1 UHBR20 (capable of 10-bit 8K 60 Hz 4:4:4 lossless).
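      As a quick sanity check on that UHBR20 claim, here's my own back-of-envelope arithmetic (active pixels only, ignoring blanking overhead, so treat the payload figure as a lower bound):

```python
# Does 8K60 10-bit 4:4:4 fit in a DP 2.1 UHBR20 link without compression?
# Active-pixel estimate only; real video timings add blanking overhead.

def video_bandwidth_gbps(width, height, fps, bits_per_channel, channels=3):
    """Uncompressed video payload in Gbit/s (active pixels only)."""
    return width * height * fps * bits_per_channel * channels / 1e9

LANES = 4
UHBR20_PER_LANE_GBPS = 20.0      # raw line rate per lane
ENCODING_EFFICIENCY = 128 / 132  # DP 2.x 128b/132b channel coding

link_capacity = LANES * UHBR20_PER_LANE_GBPS * ENCODING_EFFICIENCY  # ~77.6 Gbit/s
payload = video_bandwidth_gbps(7680, 4320, 60, 10)                  # ~59.7 Gbit/s

print(f"link ~{link_capacity:.2f} Gbit/s, 8K60 10-bit payload ~{payload:.2f} Gbit/s")
print("fits uncompressed:", payload < link_capacity)
```

      Even with blanking overhead on top of the ~59.7 Gbit/s payload, there is comfortable headroom under the ~77.6 Gbit/s effective link rate, which matches the "lossless" claim.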

      The VRAM size matters as well; time really is money at studios, and thrashing out of VRAM is likely unacceptable (I'm curious to see this myself, and whether the apps even support using system RAM as overflow for VRAM).

      Form factors are commonly two-slot, the PCBs are not extra wide or tall, and the fans often blow out the back instead of into the case (they're also designed for SLI/NVLink/CrossFire without empty slot gaps); such options are hard to find and sometimes nonexistent on gaming cards. We may never know whether this means different GPU binning to keep power draw consistent between chips.

      The smaller card in this announcement doesn't exist in consumer form, though it seems like an obvious candidate for an upcoming 7800 series.

      What I found interesting to hear is that the GPU chiplet strategy began in 2017(!), before Ryzen even had much success, before ray tracing, before accessible AI.

      Comment


      • #23
        Originally posted by stormcrow View Post
        All Apple M processors have a neural processor. Future PCs will eventually have a neural processor of some kind on the CPU if not the GPU. That means developers need tools and they need hardware to develop those tools and models.
        I don't think it is going that well. For example, hardware acceleration for Torch on Apple targets the GPU, not the neural processor. I think neural processors right now are too specialized, and people need something more along the lines of general-purpose compute.

        Comment


        • #24
          Originally posted by nyanmisaka View Post


          So AMD why are you always selling hardware before the software stack is ready? Your competitor provides Day Zero CUDA support for many years.

          If I remember correctly, the same thing happened right after the RX 6000 series was released. The community complained for months before it was supported.

          Intel at least has an excuse for its late software, as this is its first time in the discrete GPU market. AMD/ATI, however, is an old player in this area.
          There is no excuse. AMD is years behind and is fighting too many battles at the same time.
          Hopefully their stellar financial growth, and Intel shedding engineers left and right, will give them the chance to grow their headcount and get onto a reasonable release schedule.

          And you should see the AMD subreddit. It is a miasma of people demanding AI support, compute support, consumer-card compute support, VR, power-draw fixes... the problems are many, the solutions are slow, and that's what you get.

          I still want to believe that they'll eventually power through with all the money we've been giving them and start shipping things on time. Just because they released RDNA 3 in 2022 doesn't mean RDNA 3 is ready even now, in 04/2023...

          Comment


          • #25
            Originally posted by pegasus View Post
            Totally depends on what you want to do and what tools you want/need to use. You can go a long way with a mid-range desktop card with 8 GB of RAM if you use something like https://github.com/geohot/tinygrad and play with a dataset that fits on your USB thumb drive. But for "professional" things built with PyTorch, TensorFlow, JAX or some other monstrous pile of crap, the sky is the limit.
            Also, these "professional" things nowadays support ROCm and SYCL (and some more exotic things) if you build them yourself (or find appropriate prebuilt containers). But be warned: building these things is not for the faint-hearted.
            My "sky" will stop at 20 GB of VRAM; I've gone for a 7900 XT. Hopefully that's enough to dabble, and dabble well, even with PyTorch and LLM efforts.
            I will indeed suffer compiling all that, especially since I only have a 5600X and I expect it to be pretty poor at the whole "giant codebase" job. Oh well. Patience and lots of debugging, I expect.
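            For a rough feel of what a 20 GB card buys for LLM dabbling, here's a back-of-envelope sketch (my own arithmetic, not from the thread; it counts weights only and ignores activations and KV cache, so the numbers are optimistic):

```python
# Back-of-envelope: which LLM weight footprints fit in a given VRAM budget?
# Weights only -- real inference also needs activations and a KV cache.

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}

def weights_gb(params_billions, dtype):
    """Approximate weight footprint in GB (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * BYTES_PER_PARAM[dtype] / 1e9

VRAM_GB = 20  # e.g. a Radeon RX 7900 XT
for size in (7, 13, 33):
    for dtype in ("fp16", "int8", "int4"):
        need = weights_gb(size, dtype)
        verdict = "fits" if need <= VRAM_GB else "too big"
        print(f"{size}B @ {dtype}: ~{need:.1f} GB -> {verdict}")
```

            So a 7B model in fp16 (~14 GB) fits with room to spare, while anything around 13B and up needs quantization to squeeze under 20 GB.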

            Comment


            • #26
              Originally posted by Mahboi View Post
              Patience and lots of debugging I expect.
              Take a look at EasyBuild; it can save you weeks of frustration. Its ROCm support is still in the making, but then you can contribute to developing it properly.

              Comment
