exploitDog

bind:CVE-2025-62426

Count: 5
redhat

CVE-2025-62426

5 months ago

vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before 0.11.1, the /v1/chat/completions and /tokenize endpoints allow a chat_template_kwargs request parameter that is used in the code before it is properly validated against the chat template. With the right chat_template_kwargs parameters, it is possible to block processing of the API server for long periods of time, delaying all other requests. This issue has been patched in version 0.11.1.
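The mechanism described above can be illustrated with a minimal sketch of a request body for the affected endpoint. The `/v1/chat/completions` path and the `chat_template_kwargs` field come from the advisory text; the model name, message content, and the specific kwarg values are hypothetical placeholders, and no actual denial-of-service payload is shown:

```python
import json

# Sketch of a /v1/chat/completions request body (vLLM's OpenAI-compatible API).
# Per the advisory, the chat_template_kwargs mapping was forwarded into
# chat-template processing before being validated against the template,
# which crafted values could exploit to stall the API server (fixed in 0.11.1).
payload = {
    "model": "example-model",  # hypothetical model name
    "messages": [{"role": "user", "content": "hi"}],
    # Attacker-controlled keyword arguments for chat-template rendering;
    # the key and value here are illustrative placeholders only.
    "chat_template_kwargs": {"custom_flag": True},
}
body = json.dumps(payload)
```

The point of the sketch is only that `chat_template_kwargs` rides along in an ordinary chat-completion request, so any client able to reach the endpoint could supply it.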

CVSS3: 6.5
EPSS: 0% (Low)
nvd

CVE-2025-62426

5 months ago

vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before 0.11.1, the /v1/chat/completions and /tokenize endpoints allow a chat_template_kwargs request parameter that is used in the code before it is properly validated against the chat template. With the right chat_template_kwargs parameters, it is possible to block processing of the API server for long periods of time, delaying all other requests. This issue has been patched in version 0.11.1.

CVSS3: 6.5
EPSS: 0% (Low)
debian

CVE-2025-62426

5 months ago

vLLM is an inference and serving engine for large language models (LLM ...

CVSS3: 6.5
EPSS: 0% (Low)
github

GHSA-69j4-grxj-j64p

5 months ago

vLLM vulnerable to DoS via large Chat Completion or Tokenization requests with specially crafted `chat_template_kwargs`

CVSS3: 6.5
EPSS: 0% (Low)
fstec

BDU:2025-14680

5 months ago

A vulnerability in vLLM, a library for working with large language models (LLMs), related to unrestricted resource allocation, allowing an attacker to cause a denial of service.

CVSS3: 6.5
EPSS: 0% (Low)