CVE-2025-62426

Published: 21 Nov 2025
Source: debian

Description

vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before 0.11.1, the /v1/chat/completions and /tokenize endpoints allow a chat_template_kwargs request parameter that is used in the code before it is properly validated against the chat template. With the right chat_template_kwargs parameters, it is possible to block processing of the API server for long periods of time, delaying all other requests. This issue has been patched in version 0.11.1.
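For context, a minimal sketch of how a client passes chat_template_kwargs to vLLM's OpenAI-compatible server, i.e. the request field the advisory describes. The server URL, model name, and template argument are assumptions for illustration only; this is a benign request, not an exploit payload.

    # Illustrative sketch (assumed local vLLM deployment); requires the "requests" package.
    import requests

    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",  # endpoint named in the advisory
        json={
            "model": "example-model",  # assumed model name
            "messages": [{"role": "user", "content": "Hello"}],
            # chat_template_kwargs is forwarded into the server-side chat template.
            # In vLLM >= 0.5.5 and < 0.11.1 it is used before being fully validated,
            # which is the surface described in CVE-2025-62426; version 0.11.1 patches it.
            "chat_template_kwargs": {"enable_thinking": False},
        },
        timeout=30,
    )
    print(resp.status_code)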

Packages

Package | Status | Fixed version | Release | Type
vllm    | itp    | —             | —       | package

Related vulnerabilities

CVSS3: 6.5
nvd
3 months ago


CVSS3: 6.5
github
3 months ago

vLLM vulnerable to DoS via large Chat Completion or Tokenization requests with specially crafted `chat_template_kwargs`

CVSS3: 6.5
fstec
3 months ago

A vulnerability in the vLLM library for working with large language models (LLMs), related to unrestricted resource allocation, allowing an attacker to cause a denial of service.