vLLM is an inference and serving engine for large language models (LLMs). In versions 0.6.4 up to but not including 0.12.0, a user can crash a vLLM instance serving multimodal models that use the Idefics3 vision model implementation by sending a specially crafted 1x1-pixel image. The malformed input triggers a tensor dimension mismatch that surfaces as an unhandled runtime error, terminating the entire server process. This issue has been patched in version 0.12.0.
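The advisory does not reproduce the faulty code path, but the failure mode can be demonstrated in isolation. The sketch below is illustrative only: `split_into_patches`, the patch size of 14, and the use of `torch.Tensor.unfold` are assumptions for demonstration, not vLLM's actual Idefics3 code. It shows how fixed-size patch extraction, as vision transformers commonly perform it, raises an unhandled RuntimeError when an image is smaller than a single patch:

```python
import torch

# Illustrative sketch of the failure mode, NOT vLLM's actual code:
# split an image tensor into fixed-size patches for a vision encoder.
def split_into_patches(pixels: torch.Tensor, patch: int = 14) -> torch.Tensor:
    # pixels: (channels, height, width)
    c, _, _ = pixels.shape
    # unfold() slides a patch-sized window along height, then width
    windows = pixels.unfold(1, patch, patch).unfold(2, patch, patch)
    # flatten the patch grid -> (num_patches, channels, patch, patch)
    return windows.reshape(c, -1, patch, patch).permute(1, 0, 2, 3)

# A normal image splits cleanly:
print(split_into_patches(torch.rand(3, 28, 28)).shape)  # torch.Size([4, 3, 14, 14])

# A 1x1 image is smaller than one patch; unfold() raises a RuntimeError,
# mirroring the unhandled error described in the advisory:
try:
    split_into_patches(torch.rand(3, 1, 1))
except RuntimeError as exc:
    print(f"RuntimeError: {exc}")
```

In a serving context, an exception like this escaping the request handler can take the whole engine process down, matching the complete-server-termination impact described above.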
References
| Link | Resource |
|---|---|
| https://github.com/vllm-project/vllm/security/advisories/GHSA-grg2-63fw-f2qr | Exploit, Vendor Advisory |
History
No history.
Information
Published : 2026-01-10 07:16
Updated : 2026-01-27 21:03
NVD link : CVE-2026-22773
Mitre link : CVE-2026-22773
CVE.ORG link : CVE-2026-22773
Products Affected
vllm
- vllm
CWE
CWE-770
Allocation of Resources Without Limits or Throttling
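The assigned weakness class points at processing attacker-controlled input without enforcing limits on it. A generic guard, sketched below, rejects undersized images at the request boundary so a malformed upload fails a single request instead of killing the engine. This is not vLLM's actual 0.12.0 patch, and the function name and patch size are assumptions:

```python
# Generic input-validation guard, NOT vLLM's actual 0.12.0 patch: reject
# images smaller than one vision-model patch before they reach the encoder.
def validate_image_size(width: int, height: int, patch: int = 14) -> None:
    """Raise a per-request error for images smaller than one patch."""
    if width < patch or height < patch:
        raise ValueError(
            f"image {width}x{height} is smaller than the {patch}x{patch} "
            "patch size; rejecting the request instead of crashing the engine"
        )

validate_image_size(448, 448)  # a normally sized image passes silently

try:
    validate_image_size(1, 1)  # the 1x1 trigger from the advisory
except ValueError as exc:
    print(f"rejected: {exc}")
```

Upgrading to version 0.12.0 remains the actual fix; a guard like this only illustrates the defensive pattern.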
