The Maturing State Of Rusticl For Rust-Based OpenCL Within Mesa


  • Developer12
    replied
    Originally posted by Ladis View Post

    I see Rusticl supports only newer GPUs implemented in Gallium, mostly AMD. The majority of Linux PCs are left out, and Linux itself is only a few % of PCs.
    That's wrong. Rusticl supports nearly every Gallium driver, with even *ancient* R600g support in the works. I don't know how you could have missed that the raspi VC4 driver is one of the best-supported ones. Oh, and there are official conformance results on the Apple M1. It'll also run on top of anything with a Vulkan driver, even the proprietary NVIDIA one.
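
    As a minimal sketch of what that looks like from the application side (assuming only the standard OpenCL headers and an ICD loader; the exact platform strings Rusticl reports are not guaranteed): Rusticl shows up as just another OpenCL platform, discoverable the same way regardless of which Gallium driver sits underneath. Build with something like "cc probe.c -lOpenCL".

        /* probe.c: list every OpenCL platform and the version it reports. */
        #define CL_TARGET_OPENCL_VERSION 300
        #include <CL/cl.h>
        #include <stdio.h>

        int main(void) {
            cl_platform_id ids[16];
            cl_uint n = 0;
            if (clGetPlatformIDs(16, ids, &n) != CL_SUCCESS || n == 0) {
                fprintf(stderr, "no OpenCL platforms found\n");
                return 1;
            }
            for (cl_uint i = 0; i < n && i < 16; i++) {
                char name[256], ver[256];
                clGetPlatformInfo(ids[i], CL_PLATFORM_NAME, sizeof name, name, NULL);
                clGetPlatformInfo(ids[i], CL_PLATFORM_VERSION, sizeof ver, ver, NULL);
                printf("platform %u: %s (%s)\n", i, name, ver);
            }
            return 0;
        }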



  • Ladis
    replied
    Originally posted by Developer12 View Post
    Rusticl, the thing this article is actually about, is bringing full, conformant OpenCL 3.0 to every GPU that Mesa supports (as well as on top of Vulkan via Zink). In addition to that, projects like chipStar are providing CUDA on top of OpenCL.
    I see Rusticl supports only newer GPUs implemented in Gallium, mostly AMD. The majority of Linux PCs are left out, and Linux itself is only a few % of PCs.



  • Developer12
    replied
    Originally posted by Ladis View Post

    But you are not writing it; other people are. For them, OpenCL is over (Apple created it and then removed it from its OS for new apps; NVIDIA never fully supported it; AMD always had a buggy driver, to the point that they told Blender devs to remove support), Vulkan is not suited to more complex computations (e.g. the 4 GB limit on contiguous data), and SYCL & OpenACC are little known, probably unmaintained for years, and in any case bound by the limits of the lowest-common-denominator backend API(s). On the other hand, CUDA is widely available (most dGPUs are NVIDIA) and AMD supports it, officially via HIP (recompile the source), unofficially via ZLUDA (which works even on binaries).
    Rusticl, the thing this article is actually about, is bringing full, conformant OpenCL 3.0 to every GPU that Mesa supports (as well as on top of Vulkan via Zink). In addition to that, projects like chipStar are providing CUDA on top of OpenCL.

    OpenCL 3.0 is a decent API, but it fell out of favour for two reasons: 1) a lack of high-quality implementations. Clover is absolutely useless, AMD never gives a shit about their drivers, least of all OpenCL, and NVIDIA only cares about CUDA. 2) the CUDA environment and the massive monetary investment NVIDIA has injected into every part of the ecosystem. For years and years, NVIDIA has poured money over everything that supports CUDA.

    Now, with a solid supports-every-driver OpenCL implementation in Rusticl, and with people feeling the bite of being locked into NVIDIA's ecosystem (driving real, painful costs for everyone doing AI rollout), there is real need and demand for OpenCL. It hasn't hit the "old reliable" ML frameworks like TensorFlow or PyTorch yet, but every new LLM now has an implementation that can run on OpenCL, either its canonical implementation or a fork/alternative.

    In the near future I'm going to look exclusively at frameworks that run on OpenCL, even though I might have an NVIDIA GPU, because I don't want to deal with NVIDIA's driver and would rather run NVK. And if AMD produces a nice card, that gives me the freedom to run the same crap on RADV. Either way, I can go OpenCL -> Rusticl -> Zink -> Vulkan.
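
    A hedged sketch of selecting that path programmatically: Mesa gates which drivers Rusticl exposes behind the documented RUSTICL_ENABLE environment variable (e.g. RUSTICL_ENABLE=zink to layer on a Vulkan driver); the "rusticl" platform-name substring below is an assumption about how the platform identifies itself, not something the spec guarantees.

        /* select.c: find the Rusticl platform, if exposed, and grab a GPU device.
         * Run as, e.g.: RUSTICL_ENABLE=zink ./select */
        #define CL_TARGET_OPENCL_VERSION 300
        #include <CL/cl.h>
        #include <stdio.h>
        #include <string.h>

        int main(void) {
            cl_platform_id plats[16];
            cl_uint n = 0;
            clGetPlatformIDs(16, plats, &n);
            for (cl_uint i = 0; i < n && i < 16; i++) {
                char name[256];
                clGetPlatformInfo(plats[i], CL_PLATFORM_NAME, sizeof name, name, NULL);
                if (!strstr(name, "rusticl"))  /* assumed name substring */
                    continue;
                cl_device_id dev;
                if (clGetDeviceIDs(plats[i], CL_DEVICE_TYPE_GPU, 1, &dev, NULL) == CL_SUCCESS) {
                    char dname[256];
                    clGetDeviceInfo(dev, CL_DEVICE_NAME, sizeof dname, dname, NULL);
                    printf("using %s via %s\n", dname, name);
                    return 0;  /* a real app would create a context and queue here */
                }
            }
            fprintf(stderr, "Rusticl platform not found\n");
            return 1;
        }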
    Last edited by Developer12; 28 October 2024, 06:26 PM.



  • Ladis
    replied
    Originally posted by pong View Post

    If I were writing an inference engine, I'd have started by supporting OpenCL / Vulkan / SYCL / maybe OpenACC or some such, and then added specializations for particular platforms (CUDA, HIP, etc.) as optimizations where relevant. The primary consideration would be something that genuinely "runs well enough most everywhere", rather than yet another inference engine that runs great on CUDA and only marginally on most other things.
    But you are not writing it; other people are. For them, OpenCL is over (Apple created it and then removed it from its OS for new apps; NVIDIA never fully supported it; AMD always had a buggy driver, to the point that they told Blender devs to remove support), Vulkan is not suited to more complex computations (e.g. the 4 GB limit on contiguous data), and SYCL & OpenACC are little known, probably unmaintained for years, and in any case bound by the limits of the lowest-common-denominator backend API(s). On the other hand, CUDA is widely available (most dGPUs are NVIDIA) and AMD supports it, officially via HIP (recompile the source), unofficially via ZLUDA (which works even on binaries).



  • pong
    replied
    Originally posted by Developer12 View Post

    You're right, they'll just install Linux outright.

    And yes, this project does support all of them. Even R600g is supported.

    If you want an example of modern software supporting OpenCL, look no further than llama.cpp. It's the most cutting-edge ML implementation out there, and it has native OpenCL support out of the box.
    I absolutely agree with your main premise that more "compute" application projects should support portable, platform-agnostic backend compute frameworks, e.g. OpenCL, Vulkan, SYCL, OpenACC, OpenMP.

    Though my (hazy) recollection is that you might be wrong about the llama.cpp OpenCL support (perhaps I misunderstood the details). IIRC they used to support CLBlast or some such thing, a backend acceleration path using OpenCL via that particular BLAS library, but I could swear I saw that it was deprecated or removed. Maybe there's some other mechanism to run llama.cpp via OpenCL that I don't know of. But as far as I have been able to discern, pretty much all of the non-CUDA backends get only second-class, marginal support for inference features compared with the CUDA backend, e.g. not all kinds of quantizations are supported for non-CUDA backends.

    I see "BLAS" and "BLIS" for backends in the main README but nothing else that to me implies "OpenCL" currently.

    (llama.cpp, LLM inference in C/C++: https://github.com/ggerganov/llama.cpp)


    If I were writing an inference engine, I'd have started by supporting OpenCL / Vulkan / SYCL / maybe OpenACC or some such, and then added specializations for particular platforms (CUDA, HIP, etc.) as optimizations where relevant. The primary consideration would be something that genuinely "runs well enough most everywhere", rather than yet another inference engine that runs great on CUDA and only marginally on most other things.
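
    To make that "portable baseline first" idea concrete, here is a minimal OpenCL vector-add of the kind that runs unmodified on any conformant implementation (Rusticl included). This is an illustrative sketch only, with error handling omitted; it is not taken from any of the projects discussed above.

        /* vadd.c: portable OpenCL vector addition. Build: cc vadd.c -lOpenCL */
        #define CL_TARGET_OPENCL_VERSION 300
        #include <CL/cl.h>
        #include <stdio.h>

        static const char *src =
            "__kernel void vadd(__global const float *a,\n"
            "                   __global const float *b,\n"
            "                   __global float *c) {\n"
            "    size_t i = get_global_id(0);\n"
            "    c[i] = a[i] + b[i];\n"
            "}\n";

        int main(void) {
            enum { N = 1024 };
            float a[N], b[N], c[N];
            for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

            /* Take the first available platform and device; a portable app
             * could instead iterate and prefer a specific implementation. */
            cl_platform_id plat; clGetPlatformIDs(1, &plat, NULL);
            cl_device_id dev;    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);
            cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
            cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, NULL, NULL);

            cl_mem A = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof a, a, NULL);
            cl_mem B = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof b, b, NULL);
            cl_mem C = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

            cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
            clBuildProgram(prog, 1, &dev, "", NULL, NULL);
            cl_kernel k = clCreateKernel(prog, "vadd", NULL);
            clSetKernelArg(k, 0, sizeof A, &A);
            clSetKernelArg(k, 1, sizeof B, &B);
            clSetKernelArg(k, 2, sizeof C, &C);

            size_t global = N;
            clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
            clEnqueueReadBuffer(q, C, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);
            printf("c[42] = %f (expect 126)\n", c[42]);
            return 0;
        }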





  • Developer12
    replied
    Originally posted by Ladis View Post

    99.9% of users will not dual-boot Linux on a Mac for OpenCL. Also, the open-source drivers in Linux use various backends; that doesn't mean this project will support all of them.
    You're right, they'll just install Linux outright.

    And yes, this project does support all of them. Even R600g is supported.

    If you want an example of modern software supporting OpenCL, look no further than llama.cpp. It's the most cutting-edge ML implementation out there, and it has native OpenCL support out of the box.



  • Ladis
    replied
    Originally posted by Developer12 View Post

    Did you not read this article at all? Rusticl is an in-Mesa OpenCL implementation that is conformant for OpenCL 3.0 and many extensions. It runs on top of many of the GPU drivers in Mesa, with the goal of supporting every GPU vendor.

    Hell, one of the drivers it's enabled for by default in Mesa is the new Linux M1+ GPU driver.
    99.9% of users will not dual-boot Linux on a Mac for OpenCL. Also, the open-source drivers in Linux use various backends; that doesn't mean this project will support all of them.



  • Developer12
    replied
    Originally posted by Ladis View Post

    But OpenCL is over (Apple created it, yet already supports it only in the legacy x86 emulation, not for native apps; AMD never had it working for more complex GPU programs, e.g. Blender devs had to split kernels into smaller ones; and NVIDIA supported only a very old version for a long time), and other projects are leaving it (Blender, ...). I'm not saying don't maintain existing code, but why write new code?
    Did you not read this article at all? Rusticl is an in-Mesa OpenCL implementation that is conformant for OpenCL 3.0 and many extensions. It runs on top of many of the GPU drivers in Mesa, with the goal of supporting every GPU vendor.

    Hell, one of the drivers it's enabled for by default in Mesa is the new Linux M1+ GPU driver.



  • Ladis
    replied
    Originally posted by Developer12 View Post
    I really wish the PyTorch people would get off their asses and build a real OpenCL backend. They have a CUDA backend, and that is literally it. The only reason they run on HIP/ROCm at all is AMD's "I can't believe it's not CUDA" translation layer. Meanwhile, they have a very lackluster Vulkan backend that's inference-only and incomplete.
    But OpenCL is over (Apple created it, yet already supports it only in the legacy x86 emulation, not for native apps; AMD never had it working for more complex GPU programs, e.g. Blender devs had to split kernels into smaller ones; and NVIDIA supported only a very old version for a long time), and other projects are leaving it (Blender, ...). I'm not saying don't maintain existing code, but why write new code?
    Last edited by Ladis; 18 October 2024, 06:53 PM.



  • Developer12
    replied
    I really wish the PyTorch people would get off their asses and build a real OpenCL backend. They have a CUDA backend, and that is literally it. The only reason they run on HIP/ROCm at all is AMD's "I can't believe it's not CUDA" translation layer. Meanwhile, they have a very lackluster Vulkan backend that's inference-only and incomplete.

