AMD Publishes Video To Explain The Radeon Open Compute Stack (ROCm)
Originally posted by PuckPoltergeist
And ROCm uses its own version of LLVM. If an application uses Mesa (for OpenGL/Vulkan) and ROCm (for OpenCL), as Blender does, both versions of LLVM get linked in. This doesn't work, and the application crashes at start.
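The dual-LLVM clash described above can often be spotted before launching the application; a minimal sketch, assuming a Linux system with glibc's `ldd` (the Blender path in the example is an assumption, substitute whatever binary you care about):

```shell
# count_llvm_copies: print how many distinct libLLVM*.so names a binary
# links against, directly or transitively. A result greater than 1 is a
# strong hint of the Mesa-vs-ROCm LLVM clash described above.
count_llvm_copies() {
    ldd "$1" 2>/dev/null | awk '/libLLVM/ {print $1}' | sort -u | wc -l
}

# Example invocation (path is an assumption; adjust for your install):
# count_llvm_copies /usr/bin/blender
```

Note that `ldd` only shows link-time dependencies; libraries loaded at runtime via dlopen() (as OpenCL ICDs typically are) won't appear, so a clean result here does not rule out the crash.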
-
Originally posted by geearf
That used to be a problem with RPCS3, but not anymore. I'm not sure what they did to work around it, but it seems feasible.
-
Originally posted by xception
Actually it's still a problem for me (using Blender). If it works for you, please let me know as much information as possible so I can get it working as well: ROCm version, Mesa version, LLVM libraries and versions.
RPCS3 has its own LLVM that used to not work alongside Mesa's LLVM, but after the RPCS3 devs made some changes (I forget which), it worked fine.
The previous workaround was to run RPCS3 as an AppImage, and that worked fine with whatever LLVM Mesa used; maybe that is feasible for Blender as well? Not sure.
-
Originally posted by illwieckz
If you don't use compute, you don't know what people are talking about and your opinion does not matter.
-
Originally posted by vegabook
Vulkan doesn't do compute (for now).
Originally posted by vegabook
And unlike Vulkan or DX12, WebGPU is supported by everyone, including Apple, and therefore will be standard in browsers. It's the first API that we're likely to see implemented by everyone.
-
Originally posted by pal666
only for brainwashed nvidiots. didn't you read on these forums that amd focuses compute resources on vega because they have a large gpgpu vega customer?
Caveat: I don't use Nvidia; my RX 570 (fully FOSS stack) is working beautifully and I love it, but I'm not delusional enough not to realize that AMD's position in compute is a bloody mess at best.