Vega 12/20 Added To AMDGPU LLVM, Confirms New GCN Deep Learning Instructions For Vega 20
Originally posted by coder: Sure, it has open issues, but I think it's simplistic to say it fails.
Originally posted by AndyChow: BTW, cpu-only isn't that bad. Most models can be trained in a few hours, and most models don't beat a carefully built probabilistic model, on average. Just throwing more computing power at a problem isn't a good solution. Many times these young guys come in and build something they think is great; I build a stupid gamma model, or a two-variable beta, and my predictions beat theirs, hands down. Or, and this is humiliating, an exponential or Poisson. So it's nice technology, but removing careful thought and just letting the model define itself through more data and more computing power is, IMO, not that productive. We aren't there yet.
But there are also problems where you can't really hand-model everything. Take image processing, for example. Even there, the models can be greatly simplified by using a bit of prior knowledge (e.g. that objects in an image are usually translation invariant to some degree, so you can use a CNN instead of a fully-connected NN and have far fewer parameters to learn). And still, these models take days to weeks to train on tens of high-performance GPUs...
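To put rough numbers on that parameter gap, here is a back-of-the-envelope sketch. All layer sizes are hypothetical, picked only for illustration: a fully-connected layer touching every pixel of an RGB image needs millions of weights, while a 3x3 convolution producing the same number of output channels needs a few thousand, because the kernel is shared across spatial positions.

```python
# Back-of-the-envelope parameter counts: fully-connected vs. convolutional.
# All sizes here are hypothetical, chosen only to illustrate the gap.

def fc_params(in_values, out_units):
    # Every output unit connects to every input value, plus one bias each.
    return in_values * out_units + out_units

def conv_params(kernel, in_channels, out_channels):
    # One small kernel per (input, output) channel pair plus one bias per
    # output channel, reused at every spatial position (translation invariance).
    return kernel * kernel * in_channels * out_channels + out_channels

pixels = 224 * 224 * 3                 # a typical RGB input image
print(fc_params(pixels, 64))           # 9,633,856 parameters
print(conv_params(3, 3, 64))           # 1,792 parameters
```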
Originally posted by AndyChow: It fails to build, is what I mean. Current git demands cudnn-7.1.2-1, and trying to trick it simply fails, so far. But I haven't tried that hard. Even the current page admits it fails to build, but I thought I'd try my 20-minute "pass or smash" skills. No go so far.
I wouldn't be surprised if the latest Tensorflow git failed because of some new dependencies, but AFAIK we are running Tensorflow and other frameworks today.
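For what it's worth, a minimal sanity check along those lines, assuming a mid-2018 TensorFlow 1.x install (the ROCm port is assumed to expose its devices through the same API):

```python
# Minimal check that a TensorFlow 1.x install actually sees a GPU.
# Assumption: the ROCm port reports its devices through the same API.
import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.VERSION)                      # installed TF version string
print(tf.test.is_gpu_available())      # True if TF can use a GPU device
for dev in device_lib.list_local_devices():
    print(dev.name, dev.device_type)   # e.g. /device:GPU:0 GPU
```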
Originally posted by Tomin: RX {460,550,560} are not an option because ROCm needs the atomics with Polaris, so that wouldn't work.
I think this requirement is gone in the latest AMD KFD. Is that true of the mainline 4.17 kernel? Anyway, I didn't think the PCIe atomics requirement was specific to the generation of GPU.
Originally posted by Tomin: I'm looking for something that has low idle power and not too great demands on full power (just one or no PCIe power connector, for example).
Depending on the above, your best bet would be a 4 GB RX 560, IMO. RX 460 wouldn't be a bad compromise. Either way, pay attention to the number of shaders (ranges from 896 to 1024). Or even RX 550 (now available in both 512 and 640 shader versions...)
Otherwise, maybe get an old Bonaire card? You can find the various models and specs here:
https://en.wikipedia.org/wiki/List_o..._RX_400_Series
Originally posted by Tomin: (Or maybe I should get one of those Tensor USB sticks...)
Those are toys, IMO. If you're prototyping an embedded solution and need a way to test/evaluate the inferencing performance of that chip, or if you need to add some inferencing horsepower to a Raspberry Pi class machine, then go for it. Otherwise, any real GPU should kick them to the curb.
Originally posted by coder: I think this requirement is gone, in the latest AMD KFD. Is that true of the mainline 4.17 kernel? Anyway, I didn't think the PCIe atomics requirement was specific to the generation of GPU.
Actually, they say that:
We do not support ROCm with PCIe Gen 2 enabled CPUs such as the AMD Opteron, Phenom, Phenom II, Athlon, Athlon X2, Athlon II and older Intel Xeon and Intel Core Architecture and Pentium CPUs.
Elsewhere they give an even shorter list of supported hardware. Anyway, they don't really advertise that it would work on anything that doesn't support PCIe atomics, apart from this:
Experimental support for our GFX7 GPUs Radeon R9 290, R9 390, AMD FirePro S9150, S9170 (note they do not support or take advantage of PCIe Atomics).
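If you'd rather check what your own hardware reports than trust the support list, recent pciutils decodes the PCIe AtomicOp capability in `lspci -vv` output. A rough sketch, assuming a pciutils new enough to print AtomicOpsCap and root privileges so the extended config space is readable:

```python
# Rough scan for PCIe AtomicOp capability lines in `lspci -vv` output.
# Assumptions: pciutils is new enough to decode AtomicOpsCap/AtomicOpsCtl,
# and this runs as root so the extended capabilities are readable.
import subprocess

out = subprocess.check_output(["lspci", "-vv"]).decode(errors="replace")
device = ""
for line in out.splitlines():
    if line and not line[0].isspace():
        device = line                   # remember the current device header
    if "AtomicOpsCap" in line or "AtomicOpsCtl" in line:
        print(device)
        print(line.strip())
```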
Originally posted by coder: Depending on the above, your best bet would be a 4 GB RX 560, IMO. RX 460 wouldn't be a bad compromise. Either way, pay attention to the number of shaders (ranges from 896 to 1024). Or even RX 550 (now available in both 512 and 640 shader versions...)
Otherwise, maybe get an old Bonaire card? You can find the various models and specs here:
https://en.wikipedia.org/wiki/List_o..._RX_400_Series
Originally posted by coder: Those are toys, IMO.
If you're prototyping an embedded solution and need a way to test/evaluate the inferencing performance of that chip, or if you need to add some inferencing horsepower to a Raspberry Pi class machine, then go for it. Otherwise, any real GPU should kick them to the curb.
Originally posted by bridgman: When you say "current git", do you mean the repo that coder linked to (which builds and runs on ROCm, AFAIK) or the latest Tensorflow git?
I wouldn't be surprised if the latest Tensorflow git failed because of some new dependencies, but AFAIK we are running Tensorflow and other frameworks today.
The latest Tensorflow git: git+https://github.com/tensorflow/tensorflow. It's an nccl problem, which is nVidia tech. I have AMD tech.