Description
vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.6.4 to before 0.12.0, users can crash the vLLM engine serving multimodal models that use the Idefics3 vision model implementation by sending a specially crafted 1x1 pixel image. This causes a tensor dimension mismatch that results in an unhandled runtime error, leading to complete server termination. This issue has been patched in version 0.12.0.
A flaw was found in vLLM, an inference and serving engine for large language models (LLMs). A remote attacker can exploit this vulnerability by sending a specially crafted 1x1 pixel image to a vLLM engine serving multimodal models that use the Idefics3 vision model implementation. This leads to a tensor dimension mismatch, causing an unhandled runtime error and resulting in complete server termination, effectively a Denial of Service (DoS).
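The failure mode described above can be illustrated with a minimal sketch. This is not the actual vLLM/Idefics3 code; it assumes a typical patch-based vision preprocessor with a hypothetical patch size, and shows how a degenerate 1x1 image produces a shape that the pipeline cannot handle unless it is explicitly rejected:

```python
# Illustrative sketch only -- not vLLM's implementation. Demonstrates how a
# 1x1 image breaks patch-based preprocessing when dimensions are not validated.
import numpy as np

PATCH = 14  # hypothetical patch size used by many vision encoders


def patchify(image: np.ndarray, patch: int = PATCH) -> np.ndarray:
    """Split an HxWxC image into (N, patch*patch*C) flattened patches."""
    h, w, c = image.shape
    if h < patch or w < patch:
        # Without a guard like this, the reshape below raises an unhandled
        # runtime error for tiny images -- the class of failure in this CVE.
        raise ValueError(f"image {h}x{w} is smaller than patch size {patch}")
    h, w = h - h % patch, w - w % patch  # crop to a multiple of the patch size
    return (
        image[:h, :w]
        .reshape(h // patch, patch, w // patch, patch, c)
        .transpose(0, 2, 1, 3, 4)
        .reshape(-1, patch * patch * c)
    )


ok = patchify(np.zeros((28, 28, 3)))
print(ok.shape)  # (4, 588): a 28x28 image yields 2x2 patches of 14*14*3 values

try:
    patchify(np.zeros((1, 1, 3)))  # the crafted 1x1 image from the advisory
except ValueError as e:
    print("rejected:", e)
```

The patched vLLM release takes the same general approach: inputs with ambiguous or degenerate dimensions are rejected before they reach the tensor pipeline, so a malformed image returns an error instead of terminating the server.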
Statement
This vulnerability is rated Moderate for Red Hat products. A remote attacker can trigger a denial of service in vLLM engines serving multimodal models that use the Idefics3 vision model by sending a specially crafted image, leading to complete server termination. This affects Red Hat AI Inference Server and Red Hat OpenShift AI (RHOAI) when configured with the vulnerable vLLM versions.
Mitigation
Mitigation for this issue is either not available or the currently available options do not meet the Red Hat Product Security criteria comprising ease of use and deployment, applicability to widespread installation base, or stability.
Affected Packages
| Platform | Package | State | Errata | Release |
|---|---|---|---|---|
| Red Hat AI Inference Server | rhaiis/vllm-spyre-rhel9 | Affected | | |
| Red Hat AI Inference Server | rhaiis/vllm-tpu-rhel9 | Affected | | |
| Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-aws-cuda-rhel9 | Out of support scope | | |
| Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-azure-cuda-rhel9 | Out of support scope | | |
| Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-cuda-rhel9 | Out of support scope | | |
| Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-gcp-cuda-rhel9 | Out of support scope | | |
| Red Hat OpenShift AI (RHOAI) | rhoai/odh-kserve-agent-rhel9 | Out of support scope | | |
| Red Hat OpenShift AI (RHOAI) | rhoai/odh-kserve-controller-rhel9 | Out of support scope | | |
| Red Hat OpenShift AI (RHOAI) | rhoai/odh-kserve-router-rhel9 | Out of support scope | | |
| Red Hat OpenShift AI (RHOAI) | rhoai/odh-kserve-storage-initializer-rhel9 | Out of support scope | | |
Additional information
CVSS3 score: 6.5 (Medium)
Related vulnerabilities
vLLM is vulnerable to DoS in Idefics3 vision models via image payload with ambiguous dimensions