AMD Clarifies ROCm Compute Support For GUI Applications

  • #41
    Originally posted by LinAGKar View Post
    What's ROCr? Is that different from ROCm? Anyway, I was under the impression that ROCm would be used only on fairly high end GPUs, while low-end GPUs and APUs would use PAL (
    ROCR is the ROC Runtime, one of the layers in the ROCm stack. From top to bottom the core stack for ML on Linux is roughly:

    ML frameworks (TensorFlow / PyTorch, etc.)
    MIOpen (cuDNN equivalent) and RCCL (NCCL equivalent)
    Math libraries
    HIP language runtime
    VDI (virtual device interface, runs over CPU / Orca / PAL / ROCr back ends)
    ROCr back end
    libhsakmt aka "thunk" aka ROCT

    For OpenCL it's the same from amdgpu up to VDI, but with the OpenCL language runtime replacing the HIP language runtime. HPC is similar to ML, except that MIOpen and the ML frameworks are not normally used.
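    The layering above can be summarized in a short sketch. This is purely illustrative data, not a real API: the strings simply mirror the stack as described in the post, with amdgpu added as the bottom layer per the OpenCL remark, and it shows how the OpenCL path shares everything from VDI down.

    ```python
    # Conceptual sketch of the ROCm ML stack described above, top to bottom.
    # These are plain descriptive strings, not actual library names or APIs.
    ML_STACK = [
        "ML framework (TensorFlow / PyTorch)",
        "MIOpen (cuDNN equivalent) / RCCL (NCCL equivalent)",
        "Math libraries",
        "HIP language runtime",
        "VDI (virtual device interface)",
        "ROCr back end (ROC Runtime)",
        'libhsakmt ("thunk", aka ROCT)',
        "amdgpu kernel driver",
    ]

    def opencl_stack():
        """OpenCL path: same from amdgpu up to VDI, but the OpenCL
        language runtime replaces the HIP language runtime."""
        return ["OpenCL language runtime"] + ML_STACK[4:]

    for layer in opencl_stack():
        print(layer)
    ```

    The point of the sketch is that everything from VDI downward is shared between the HIP and OpenCL paths; only the language runtime on top differs.
    
    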

    Originally posted by LinAGKar View Post
    Do all RDNA GPUs work with ROCr, and will RDNA APUs do so?
    Yes, right up to OpenCL so far, and yes.

    Originally posted by LinAGKar View Post
    But yes, open-sourcing PAL OpenCL would still leave a place for Clover on Polaris and older, except those that work with ROCm.
    BTW, why can't PAL OpenCL be used on Polaris and earlier, given that AMDVLK works on these?
    It could be - it's just another pile of work for relatively small benefit, at a time when we are not exactly looking for more work.

    • #42
      Originally posted by castlefox View Post

      Can someone please tell me what this clarification from AMD will mean for Folding@home? It needs openCL to work on their video cards.
      As an update, 4.1 seems to have fixed it for me.