Intel Compute Runtime 23.26.26690.22 Is A Big Update For Intel's OpenCL/L0 Stack

  • #1

    Phoronix: Intel Compute Runtime 23.26.26690.22 Is A Big Update For Intel's OpenCL/L0 Stack

    Intel's open-source Compute Runtime 23.26.26690.22 was released today, which is a big update for this OpenCL and oneAPI Level Zero (L0) stack for Windows and Linux systems. Due to the summer holidays and Intel's current release regiment for the Compute Runtime, v23.26.26690.22 is the first new release since mid-July...


  • #2
    How easy is this to deploy compared to Nvidia's CUDA? Generally CUDA tends to "just work" for devs and end users unless you have a really screwy setup. (No, ZLUDA isn't CUDA and it's not a drop-in replacement.) I'm more concerned about how easy it is to use than I am about performance right now. Assuming Intel stays the course with its discrete GPU line, I'll wait and see what happens with performance over the next generation or two. Intel's traditionally half-assed driver support outside of its core (little "c") CPU lines has me wary about investing in the dGPU product line.



    • #3
      Originally posted by stormcrow View Post
      How easy is this to deploy compared to Nvidia's CUDA? Generally CUDA tends to "just work" for devs and end users unless you have a really screwy setup.
      My experience is that, once you get the necessary oneAPI and compute runtime packages installed, it "just works". It's no different from CUDA, which also requires installing separate packages for compute vs. graphics.

      So far, my experience is mainly just running OpenVINO on it, but we shipped that in a commercial product.



      • #4
        I think it is in pretty good shape now, although I'll admit I've only played with PyTorch toy examples and one or two Stable Diffusion models from Hugging Face. I am using Debian bookworm (with a backported 6.4 kernel; the stock 6.1 kernel is not sufficient for compute, even with i915.force_probe). Most of the compute stack then needs to come from upstream:
        • Intel Compute Runtime: .deb packages are available on Intel's GitHub releases page
        • intel-basekit can be installed via apt, after adding the repository from Intel's oneAPI website
        • Intel Extension for PyTorch (on GitHub) recently added support for torch 2.0, and hence Python 3.11; there is a one-line pip command on its front page
        • One more package needs adding (apt install libze1); I'm not sure why it doesn't come in as a dependency of the compute runtime above
        Then one needs to source the oneAPI environment and the Intel Extension for PyTorch virtualenv, and they are good to go. These are probably still too many steps, but that is what makes PyTorch work on an Intel dGPU.
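        The steps above can be sketched as a shell session. The package names (intel-basekit, libze1) come from the post; the exact .deb file names, the pip index URL, and the setvars.sh path are assumptions, so check Intel's release pages for the current commands:

```shell
# Sketch of the setup described above (Debian bookworm, backported 6.4 kernel).
# File names, versions, and the pip index URL are illustrative assumptions.

# 1. Compute runtime: .deb packages from Intel's compute-runtime GitHub releases
sudo dpkg -i intel-opencl-icd_*.deb intel-level-zero-gpu_*.deb libigc*.deb

# 2. oneAPI base toolkit, after adding Intel's apt repository
sudo apt install intel-basekit

# 3. Level Zero loader (does not get pulled in automatically)
sudo apt install libze1

# 4. Intel Extension for PyTorch in a virtualenv; the canonical one-line
#    pip command is on the project's front page, this URL is an assumption
python3 -m venv ~/ipex && . ~/ipex/bin/activate
pip install torch intel_extension_for_pytorch \
    --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

# 5. Source the oneAPI environment, then check that the XPU device is visible
source /opt/intel/oneapi/setvars.sh
python -c "import torch, intel_extension_for_pytorch; print(torch.xpu.is_available())"
```

        On a working setup the final check should print True; if it prints False, the usual suspects are the kernel version and a missed setvars.sh.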

        For OpenCL alone, only the Intel Compute Runtime needs to be installed.
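        A quick way to confirm the OpenCL side works, assuming the compute runtime packages above are installed (clinfo is packaged in Debian):

```shell
# List OpenCL platforms and devices; an Intel GPU platform should appear
# once intel-opencl-icd from the compute runtime is installed.
sudo apt install clinfo
clinfo -l
```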
