Description
vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.6.4 to before 0.12.0, users can crash the vLLM engine serving multimodal models that use the Idefics3 vision model implementation by sending a specially crafted 1x1 pixel image. This causes a tensor dimension mismatch that results in an unhandled runtime error, leading to complete server termination. This issue has been patched in version 0.12.0.
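The advisory attributes the crash to a tensor dimension mismatch triggered by a degenerate 1x1 image reaching the Idefics3 vision preprocessor. Below is a minimal sketch of a server-side pre-check that rejects such payloads before they reach the model, assuming Pillow is available for decoding; the function name and the minimum-size threshold are hypothetical and are not part of vLLM's code.

```python
from io import BytesIO

from PIL import Image

# Hypothetical threshold: the report only states that a 1x1 image
# triggers the unhandled dimension-mismatch error.
MIN_IMAGE_SIDE = 2

def validate_image_payload(data: bytes) -> Image.Image:
    """Reject degenerate images before handing them to the multimodal model."""
    img = Image.open(BytesIO(data))
    img.load()  # force full decoding so malformed payloads fail here, not deeper in the stack
    width, height = img.size
    if width < MIN_IMAGE_SIDE or height < MIN_IMAGE_SIDE:
        raise ValueError(f"image too small: {width}x{height}")
    return img
```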
References
- Exploit
- Vendor Advisory
Vulnerable configurations
Configuration 1: versions from 0.6.4 (inclusive) to 0.12.0 (exclusive)
cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:*
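To check whether an installed vLLM build falls inside the affected range above, a small sketch using the packaging library; the helper name is hypothetical, the range endpoints come from the configuration entry.

```python
from importlib.metadata import version

from packaging.version import Version

def is_affected(v: str) -> bool:
    # Vulnerable range from the configuration above: [0.6.4, 0.12.0)
    return Version("0.6.4") <= Version(v) < Version("0.12.0")

if __name__ == "__main__":
    installed = version("vllm")  # raises PackageNotFoundError if vLLM is not installed
    print(f"vllm {installed}: affected={is_affected(installed)}")
```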
EPSS
Percentile: 14%
Score: 0.00047 (Low)
CVSS3: 6.5 Medium
CVSS3: 7.5 High
Weaknesses
CWE-770
Related vulnerabilities
CVSS3: 6.5
debian
29 days ago
vLLM is an inference and serving engine for large language models (LLM ...
CVSS3: 6.5
github
26 days ago
vLLM is vulnerable to DoS in Idefics3 vision models via image payload with ambiguous dimensions