ZLUDA Has Been Seeing New Activity For CUDA On AMD GPUs


  • #31
    Originally posted by tenchrio View Post

    You might want to disclose that you wrote that AMD_GPGPU entry on the openSUSE wiki,
    as is clear from the fact that the only person in that wiki entry's history is called Svyatko.

    I have also seen multiple instances of people successfully running ROCm and Stable Diffusion on APUs.
    There are even benchmarks comparing CPU and GPU performance on such APUs, like the Ryzen 5600G,
    and tutorials on how to use the iGPU/APU for running LLM models (again with ROCm).
    In the wiki entry that you wrote, you even said: "Integrated AMD GPU without discrete AMD GPU possibly works somehow".

    So I am a bit confused when you say ROCm is not compatible with iGPUs: do you mean it is missing features?
    Are you referring to the fact that you can't use iGPUs and dGPUs at the same time?
    Or did you actually mean "not fully compatible with iGPUs"?

    Even more confusing is that you mention the AMDKFD driver, yet its own source code page says it has been superseded by ROCm, and multiple articles from 2018 describe how AMDKFD was merged into ROCm and AMDGPU, so is it even still its own thing?
    APUs like Phoenix were leaked through ROCm, since the code was added before the APU was released, including the fact that it would have an RDNA3 iGPU. Did AMD code in support for the iGPU, only for it to remain unused?
    The kernel component for ROCm is called amdkfd and it's part of the amdgpu kernel driver. Same kernel driver used for gfx. The ROCm stack source has support for all AMD GPUs (dGPUs and iGPUs), but the support statements are based on what AMD's QA teams validate and which GPUs are enabled in the ROCm release builds.
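
    For example, a rough way to see which devices a given ROCm build actually enables is the sketch below. It assumes a ROCm build of PyTorch is installed, and the HSA_OVERRIDE_GFX_VERSION value is only an illustration of the common workaround for parts that aren't on the validated list, not a recommendation.

    # Sketch: list the GPUs a ROCm build of PyTorch can actually see.
    # Assumes a ROCm build of PyTorch; the override below is the unofficial
    # workaround often used for unvalidated parts (the value is illustrative only).
    import os
    os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "10.3.0")  # assumed value, set before the runtime loads

    import torch  # ROCm builds of PyTorch expose HIP devices through the torch.cuda API

    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            props = torch.cuda.get_device_properties(i)
            print(f"device {i}: {props.name}, {props.total_memory / 2**30:.1f} GiB")
    else:
        print("No ROCm-enabled GPU is visible to this build")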

    Comment


    • #32
      Originally posted by Jabberwocky View Post

      Blender is less biased than most people out there. Blender tested many APIs over the past few years before making major changes to its compute modules. ROCm has improved a lot since, but they can't redo everything every few months.

      Why is the CUDA code path faster? Is it because Blender optimizes for it more than for ROCm, or is it because Nvidia's CUDA implementation is faster due to their investment in their compilers?

      I say the latter because it's apparent in many other projects, not just Blender. If simply optimizing ROCm in Blender could yield the same performance as ZLUDA, then I would agree with you, but that's not the case here.
      AMD is only starting to support AI and Stable Diffusion, but it still doesn't care about Blender support. Some YouTubers have tested ZLUDA vs HIP(-RT), and supposedly HIP is only a bit faster - so that's pretty pathetic.
      As for SD, the situation seems really complicated to me - there are so many variables, but the open-source situation can only help. Most people say you need to use Linux (dual boot, if you're a Windows user) to run SD on an AMD GPU - which is the point for users here. However, Nvidia is still recommended by most SD users. At least AMD seems to put some focus on AI/SD, even if it neglects Blender usage/development.

      Comment


      • #33
        Originally posted by tenchrio View Post

        You might want to disclose that you wrote that AMD_GPGPU entry on the openSUSE wiki,
        as is clear from the fact that the only person in that wiki entry's history is called Svyatko.

        I have also seen multiple instances of people successfully running ROCm and Stable Diffusion on APUs.
        There are even benchmarks comparing CPU and GPU performance on such APUs, like the Ryzen 5600G,
        and tutorials on how to use the iGPU/APU for running LLM models (again with ROCm).
        In the wiki entry that you wrote, you even said: "Integrated AMD GPU without discrete AMD GPU possibly works somehow".

        So I am a bit confused when you say ROCm is not compatible with iGPUs: do you mean it is missing features?
        Are you referring to the fact that you can't use iGPUs and dGPUs at the same time?
        Or did you actually mean "not fully compatible with iGPUs"?

        Even more confusing is that you mention the AMDKFD driver, yet its own source code page says it has been superseded by ROCm, and multiple articles from 2018 describe how AMDKFD was merged into ROCm and AMDGPU, so is it even still its own thing?
        APUs like Phoenix were leaked through ROCm, since the code was added before the APU was released, including the fact that it would have an RDNA3 iGPU. Did AMD code in support for the iGPU, only for it to remain unused?
        About amdkfd - some history:

        [Phoronix article link covering the history of the amdkfd driver]

        About using AMD iGPUs for GPGPU: I have doubts about the value of this option. I can invest my time and effort in it, but what will I get? I can simply buy an Nvidia or Intel dGPU and use it as a dedicated accelerator. I can only afford to investigate AMD iGPU usage if there is additional external investment.
        Right now a CPU + dGPU pair is a better deal than an APU - the Ryzen 8x00G APUs cost too much.
        One niche for an APU: use it for LLMs or other workloads that require a lot of RAM - you can install 128 or 192 GiB of RAM on an ordinary AM4 or AM5 motherboard (with 4 RAM slots) and dedicate up to 50% of it to the iGPU.
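
        As a back-of-the-envelope sketch (all model sizes, quantization factors, and the overhead fudge factor below are illustrative assumptions, not measurements), here is roughly what fits in the iGPU's share of a 192 GiB board:

        # Rough sketch: does a quantized LLM fit in the RAM share given to the iGPU?
        # All sizes and factors are illustrative assumptions.
        def footprint_gib(params_billion: float, bytes_per_weight: float, overhead: float = 1.2) -> float:
            """Approximate memory in GiB: weights plus a fudge factor for KV cache and activations."""
            return params_billion * 1e9 * bytes_per_weight * overhead / 2**30

        total_ram_gib = 192                    # e.g. 4 x 48 GiB DIMMs on an AM5 board
        igpu_share_gib = total_ram_gib * 0.5   # up to ~50% of RAM dedicated to the iGPU

        for name, params_b, bytes_w in [("70B @ 4-bit", 70, 0.5),
                                        ("70B @ 8-bit", 70, 1.0),
                                        ("180B @ 4-bit", 180, 0.5)]:
            need = footprint_gib(params_b, bytes_w)
            verdict = "fits" if need <= igpu_share_gib else "does not fit"
            print(f"{name}: ~{need:.0f} GiB needed vs {igpu_share_gib:.0f} GiB available -> {verdict}")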

        Comment


        • #34
          This project would be a perfect target for that German open source fund.

          Comment
