Benchmarking Radeon Open Compute ROCm 1.4 OpenCL
Originally posted by ernstp View Post
Nice testing! I really don't understand all the components though... KFD, HSA, Rock kernel driver, the other KFD that "is NOT compatible with amdkfd that is distributed as part of the mainline Linux kernel from 3.19 and onward", kmt, rocr runtime, hsa runtime, hcc, thunk.

hcc can either generate HSAIL/BRIG (an intermediate language that needs to be finalized before it can run on a GPU) or AMDGCN binary code (which differs between GCN generations). hcc lets you share (C++) source code that gets compiled for the CPU, the GPU, or both; code that needs to be compiled for the GPU still has some limitations (no recursion, ...).

Hope this helps. I'm sure someone from AMD can correct/clarify things.
First of all, thank you Michael for doing the testing.
I'm a bit disappointed, I have to say. I thought that after the AMD OpenCL stack was *converted* to open source, we would immediately have the same performance and support for all the GPUs.
After this benchmark it looks more like we have to wait another year to reach the performance level we could get with Catalyst two years ago. So it seems there is a lot of work left to be done.
I'm happy with the Mesa + AMDGPU-PRO OpenCL libs solution for now.
Originally posted by mibo View Post
I thought, after the AMD OpenCL is *converted* to open source, we immediately have the same performance and support of all the gpus.

Closed ROCm and *converted* to open source ROCm have the same performance and support the same GPUs. AMDGPU-PRO is different code, so it has different performance and hardware support.
Originally posted by pal666 View Post
Closed ROCm and *converted* to open source ROCm have the same performance and support the same GPUs. AMDGPU-PRO is different code, so it has different performance and hardware support.

Let's see how long the GCN 1.2, 1.1 and 1.0 owners have to wait for support. I was hoping that ROCm and the AMDGPU-PRO OpenCL were not so different.
Originally posted by ernstp View Post
I really don't understand all the components though...

Mostly new names for the same components, associating them with ROC (Radeon Open Compute) branding:
ROCK = ROC Kernel = latest KFD (aka amdkfd) + associated amdgpu changes
ROCT = ROC Thunk = hsakmt = thunk (userspace wrapper for KFD, like libdrm-amdgpu is the userspace wrapper for amdgpu)
ROCR = ROC Runtime = userspace driver that exposes ROC for use by various toolchains (e.g. HCC, OpenCL etc.)
We initially implemented the HSA stack for APUs, where the IOMMUv2 allows GPU access to unpinned memory, which was important for upstreaming since the alternative (allowing pinning from userspace) is generally not considered upstreamable. The initial dGPU support in ROC depended on pinning from userspace, which is common in proprietary drivers but verboten in upstream drivers. As a result, the upstream KFD includes support for Kaveri and Carrizo, but not for dGPUs.
We recently finished an initial implementation of "eviction" code, which provides the illusion of pinning to userspace drivers but actually allows buffers to be temporarily unpinned (suspending the compute process using them) when other processes (typically graphics) require memory.
This allows processes using ROC and user queues to behave similarly to processes using the graphics driver submit path, where userspace drivers are given the illusion of pinned buffers but the kernel drivers only actually guarantee pinning when commands from that process are running on the GPU. We needed a different implementation though, since processes running over amdgpu submit work via the kernel driver while processes running over ROC submit work directly to the HW via queues maintained in userspace.
Now that the eviction code is implemented and public (it's in 1.4) we can start working on upstreaming the latest KFD code, and are doing that now.
Originally posted by boxie View Post
So... ROCm is definitely still in beta then - good to see progress though and I look forward to more performance and merges in the future

Last edited by bridgman; 18 January 2017, 10:38 AM.
Originally posted by bridgman View Post
Mostly new names for the same components, associating them with ROC (Radeon Open Compute) branding:
<snip>

OpenCL for ROCm is definitely still in beta (we actually called it a developer preview, so more like an alpha), but not ROCm itself.
Originally posted by Niarbeht View Post
Does HURD actually... do anything yet?

If the question is whether you can use it: Debian/HURD runs ~80% of Debian packages, including the likes of Xorg and Iceweasel.