Intel LLM Scaler vLLM Update Supports More Models

Written by Michael Larabel in Intel on 25 November 2025 at 06:13 AM EST.
Intel software engineers continue to be hard at work on LLM-Scaler, their solution for running vLLM on Intel GPUs within a Docker containerized environment. A new beta of LLM-Scaler built around vLLM was published overnight with support for running more large language models.

Since the "LLM-Scaler 1.0" debut of the project back in August there have been frequent updates for expanding LLM coverage on Intel GPUs and exposing more features for harnessing the AI compute power on Intel graphics hardware. The versioning scheme though remains a mess with today's test version being "llm-scaler-vllm beta release 0.10.2-b6" even with "1.0" previously being announced.

[Image: Intel Arc Pro B50]

As for the changes with today's llm-scaler-vllm beta update, they include:
- MoE-Int4 support for Qwen3-30B-A3B
- Bpe-Qwen tokenizer support
- Enable Qwen3-VL Dense/MoE models
- Enable Qwen3-Omni models
- MinerU 2.5 Support
- Enable whisper transcription models
- Fix minicpmv4.5 OOM issue and output error
- Enable ERNIE-4.5-vl models
- Enable Glyph based GLM-4.1V-9B-Base

Those interested in using vLLM on Intel GPUs via this Docker environment can find all the details on this new beta update on GitHub. The Docker image is available as intel/llm-scaler-vllm:0.10.2-b6.
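Once the container is pulled and serving a model, vLLM exposes its usual OpenAI-compatible HTTP API, so any standard client works against it. Below is a minimal Python sketch of querying such a server; the localhost:8000 endpoint and the Qwen/Qwen3-30B-A3B model name are illustrative assumptions rather than anything specified in this release.

```python
# Minimal sketch of querying a vLLM server running inside the
# llm-scaler-vllm container via its OpenAI-compatible API.
# Assumes the server is listening on localhost:8000 with a model
# such as Qwen/Qwen3-30B-A3B loaded -- adjust to your deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's OpenAI-compatible endpoint
    api_key="EMPTY",                      # vLLM accepts a placeholder key by default
)

response = client.chat.completions.create(
    model="Qwen/Qwen3-30B-A3B",  # assumed: match whatever model you served
    messages=[{"role": "user", "content": "Summarize what vLLM does."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```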