exploitDog


CVE-2025-48942

Published: 30 May 2025
Source: Red Hat
CVSS3: 4.3
EPSS: Low

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, hitting the /v1/completions API with an invalid json_schema as a Guided Param kills the vLLM server. This vulnerability is similar to GHSA-9hcf-v7m4-6m2j/CVE-2025-48943, but for a JSON schema instead of a regex. Version 0.9.0 fixes the issue.

A denial of service flaw was found in vLLM. This flaw allows a remote attacker with access to the /v1/completions endpoint to submit an invalid json_schema as a guided param, which will crash the vLLM instance.
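To make the attack surface concrete, the sketch below constructs the kind of request body the advisory describes: a /v1/completions call whose guided JSON-schema parameter is structurally invalid. The parameter name `guided_json`, the model name, and the endpoint URL are assumptions for illustration, not taken from the advisory, and no request is actually sent.

```python
import json

# Assumed local vLLM deployment; the path is the endpoint named in the advisory.
VLLM_URL = "http://localhost:8000/v1/completions"

payload = {
    "model": "example-model",   # placeholder model name
    "prompt": "Hello",
    # An invalid JSON schema: "type" must be a string such as "object".
    # Per the advisory, a malformed schema here crashed vLLM 0.8.x (< 0.9.0).
    "guided_json": {"type": 42},
}

body = json.dumps(payload)
# In a real reproduction you would POST `body` to VLLM_URL (e.g. with
# urllib.request). A patched server (>= 0.9.0) should reject the request
# with a 4xx error instead of the process dying.
```

Against a fixed deployment, the only observable effect should be a validation error returned to the client.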

Report

The severity of this vulnerability is rated Moderate: while the vLLM service itself can be crashed, the effects are confined to the application layer and do not compromise the availability or stability of the underlying system.

Mitigation

Mitigation for this issue is either not available, or the currently available options do not meet the Red Hat Product Security criteria of ease of use and deployment, applicability to a widespread installation base, or stability.
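Although no qualifying mitigation is listed, one defensive pattern (purely illustrative, not an official fix) is to have a gateway in front of vLLM reject structurally malformed schemas before they reach the engine. The helper below is a hypothetical, shallow pre-check; its name and the set of accepted "type" values are assumptions:

```python
# Hypothetical pre-check for guided-decoding JSON schemas.
# NOT an official mitigation; it only screens out some obviously
# malformed schemas before they reach a vulnerable vLLM instance.
_ALLOWED_TYPES = {"object", "array", "string", "number",
                  "integer", "boolean", "null"}

def looks_like_valid_schema(schema) -> bool:
    """Shallow structural check: the schema must be a dict, and any
    "type" key must name a known JSON-Schema type (or a list of them)."""
    if not isinstance(schema, dict):
        return False
    t = schema.get("type")
    if t is None:
        return True  # "type" is optional in JSON Schema
    if isinstance(t, str):
        return t in _ALLOWED_TYPES
    if isinstance(t, list):
        return all(isinstance(x, str) and x in _ALLOWED_TYPES for x in t)
    return False
```

A gateway could return HTTP 400 when this check fails instead of forwarding the request; a full JSON-Schema validator would catch far more cases than this sketch.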

Affected packages

Platform | Package | State | Recommendation | Release
Red Hat AI Inference Server | rhaiis/vllm-cuda-rhel9 | Fix deferred | |
Red Hat AI Inference Server | rhaiis/vllm-rocm-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-amd-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-aws-nvidia-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-azure-amd-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-azure-nvidia-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-gcp-nvidia-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-intel-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-nvidia-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/instructlab-amd-rhel9 | Fix deferred | |


Additional information

Status: Moderate
Weakness: CWE-248
https://bugzilla.redhat.com/show_bug.cgi?id=2369475 (vllm: vLLM denial of service via invalid JSON schema)

EPSS

Percentile: 14%
Score: 0.00046
Low

CVSS3: 4.3 (Medium)

Related vulnerabilities

CVSS3: 6.5
nvd
19 days ago

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, hitting the /v1/completions API with an invalid json_schema as a Guided Param kills the vLLM server. This vulnerability is similar to GHSA-9hcf-v7m4-6m2j/CVE-2025-48943, but for a JSON schema instead of a regex. Version 0.9.0 fixes the issue.

CVSS3: 6.5
debian
19 days ago

vLLM is an inference and serving engine for large language models (LLM ...

CVSS3: 6.5
github
21 days ago

vLLM DOS: Remotely kill vllm over http with invalid JSON schema
