
exploitDog


CVE-2025-46570

Published: May 29, 2025
Source: debian

Description

vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed, if the PageAttention mechanism finds a matching prefix chunk, the prefill process speeds up, which is reflected in the TTFT (Time to First Token). These timing differences caused by matching chunks are significant enough to be recognized and exploited. This issue has been patched in version 0.9.0.
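The mechanism behind this timing side channel can be illustrated with a toy simulation. The sketch below is purely hypothetical and not vLLM code: a minimal prefix cache skips "prefill" work for already-cached chunks, so a request sharing a prefix with a previously seen prompt finishes prefill measurably faster, which is the signal an attacker would observe via TTFT. All class names, chunk sizes, and cost units are invented for illustration.

```python
# Hypothetical illustration of chunk-based prefix caching as a timing oracle.
# Not vLLM's implementation; all names and cost units are made up.

CHUNK = 4           # tokens per cached chunk (illustrative)
COST_PER_CHUNK = 5  # arbitrary prefill work units per uncached chunk

class ToyPrefixCache:
    def __init__(self):
        self.cached = set()

    def prefill_cost(self, tokens):
        """Return simulated prefill work; cached prefix chunks cost nothing."""
        cost = 0
        for i in range(0, len(tokens), CHUNK):
            chunk = tuple(tokens[i:i + CHUNK])
            if chunk not in self.cached:
                cost += COST_PER_CHUNK  # only uncached chunks pay prefill cost
                self.cached.add(chunk)
        return cost

cache = ToyPrefixCache()
victim = list("the secret prompt")
cache.prefill_cost(victim)  # the victim's prompt populates the cache

# An attacker compares prefill cost (observable as TTFT) for a guess that
# shares the victim's prefix vs. an unrelated prompt of the same length.
hit = cache.prefill_cost(list("the secret XXXXXX"))      # shares two chunks
miss = ToyPrefixCache().prefill_cost(list("zzz unrelated pad"))  # cold cache

print(hit < miss)  # the shared prefix yields cheaper prefill
```

In the real attack the "cost" difference appears as a shorter time to first token; the 0.9.0 patch addresses the exploitability of that difference.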

Packages

Package | Status | Fix version | Release | Type
vllm    | itp    |             |         | package

Related vulnerabilities

CVSS3: 2.6
redhat
10 months ago


CVSS3: 2.6
nvd
10 months ago


CVSS3: 2.6
github
10 months ago

Potential Timing Side-Channel Vulnerability in vLLM’s Chunk-Based Prefix Caching

CVSS3: 2.6
fstec
11 months ago

A vulnerability in the PageAttention mechanism of the vLLM library for working with large language models (LLMs) that allows an attacker to disclose protected information.