AMD Announces The Radeon PRO W7800/W7900 Series
Originally posted by ms178:
Thanks for the hint, but I still don't get why I would need a professional card for these apps specifically. What is the benefit of paying 4x the price of a 7900 XTX here? Sure, there are certified drivers and more VRAM, but I could use video editing apps just as well with a consumer card, where neither of these points really makes a difference. There might be other apps where it matters, though: maybe the drivers are better optimized for CAD and the like, but that would be more of an artificial limitation of the consumer cards than a genuine advantage of these professional cards.
These also have ECC and, as AnandTech pointed out, unexpected DisplayPort 2.1 UHBR20 (capable of 10-bit 8K 60 Hz 4:4:4 lossless).
The VRAM size matters as well; time really is money at studios, and thrashing out of VRAM is likely unacceptable (I'm curious to see for myself whether the apps even support spilling into system RAM).
Form factors are commonly 2-slot, the PCBs are not extra wide or tall, and the fans often blow out the back instead of into the case (the cards are also designed for SLI/NVLink/CrossFire without empty slot gaps). These options are hard to find, and sometimes nonexistent, on gaming cards; we may never know whether this means different GPU binning to keep power draw consistent between chips.
The smaller card in this announcement doesn't exist in the consumer lineup, though it seems like an obvious candidate for an upcoming 7800 series.
What I found interesting to hear is that the GPU chiplet strategy began in 2017(!), before Ryzen had much success, before ray tracing, before accessible AI.
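The UHBR20 point checks out on the back of an envelope. A quick sketch of why 10-bit 8K60 4:4:4 fits in a four-lane UHBR20 link without compression (this ignores blanking overhead, which adds a few percent to the real payload):

```python
# Back-of-the-envelope bandwidth check for 8K 60 Hz 10-bit 4:4:4
# over DisplayPort 2.1 UHBR20. Blanking intervals are ignored.

pixels_per_frame = 7680 * 4320          # 8K UHD
bits_per_pixel = 3 * 10                 # 4:4:4, 10 bits per channel
payload_gbps = pixels_per_frame * 60 * bits_per_pixel / 1e9

# UHBR20: 20 Gbit/s per lane, 4 lanes, 128b/132b channel coding
link_gbps = 4 * 20 * 128 / 132

print(f"payload: {payload_gbps:.2f} Gbps, link: {link_gbps:.2f} Gbps")
# ~59.7 Gbps of pixel data fits under the ~77.6 Gbps effective link
# rate, so no Display Stream Compression is needed
assert payload_gbps < link_gbps
```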
Originally posted by stormcrow:
All Apple M processors have a neural processor. Future PCs will eventually have a neural processor of some kind on the CPU, if not the GPU. That means developers need tools, and they need hardware to develop those tools and models.
Originally posted by nyanmisaka:
So AMD, why are you always selling hardware before the software stack is ready? Your competitor has provided day-zero CUDA support for many years.
If I remember correctly, the same thing happened after the RX 6000 series was released. The community complained for months before it was supported.
Intel at least has an excuse for their late software, as this is their first entry into the discrete GPU market. AMD/ATI, however, is an old player in this area.
Hopefully their stellar financial growth, and Intel shedding engineers left and right, give them the opportunity to grow their headcount and get onto a reasonable release schedule.
And you should see the AMD subreddit. It's a miasma of people demanding AI support, compute support, consumer-card compute support, VR, power-draw fixes... the problems are many, the solutions are slow, and that's what you get.
I still want to believe that they'll eventually power through with all the money we've been giving them and start shipping things on time. Just because they released RDNA 3 in 2022 doesn't mean RDNA 3 is ready, even now in April 2023...
Originally posted by pegasus:
Totally depends on what you want to do and what tools you want/need to use. You can go a long way with a mid-range desktop card with 8 GB of RAM if you use something like https://github.com/geohot/tinygrad and play with a dataset that fits on your USB thumb drive. But for "professional" things built with PyTorch, TensorFlow, JAX or some other monstrous pile of crap, the sky is the limit.
Also, these "professional" things nowadays support ROCm and SYCL (and some more exotic backends) if you build them yourself (or find appropriate prebuilt containers). But be warned: building these things is not for the faint-hearted.
I will indeed suffer compiling that, especially since I only use a 5600X, which I expect will be pretty slow at the whole "giant codebase" job. Oh well. Patience and lots of debugging, I expect.
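For reference, a sketch of the two usual routes to a ROCm-enabled PyTorch, assuming a working ROCm 5.x install. The wheel-index version tag and the MAX_JOBS value below are illustrative; check pytorch.org for the current ROCm wheel index before copying anything:

```shell
# Route 1: prebuilt ROCm wheels (no compiling, version tag is illustrative)
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/rocm5.4.2

# Route 2: build from source (hours on a 6-core 5600X)
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
python3 tools/amd_build/build_amd.py     # "hipify": translate CUDA sources to HIP
USE_ROCM=1 MAX_JOBS=6 python3 setup.py develop
```

The prebuilt containers pegasus mentions are a third option that skips both the compile and most of the dependency pain.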