Description
vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed, if the PagedAttention mechanism finds a matching prefix chunk, the prefill process speeds up, which is reflected in the TTFT (Time to First Token). These timing differences caused by matching chunks are significant enough to be recognized and exploited. This issue has been patched in version 0.9.0.
A timing discrepancy flaw was found in vLLM, where a prefix match on a user prompt can reveal other user prompts. An attacker must have user-level access to the vLLM instance to exploit this vulnerability.
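In practice, the discrepancy can be observed through vLLM's OpenAI-compatible streaming endpoint by timing how long the first streamed token takes to arrive. The following sketch is illustrative only, not the published proof of concept: the server URL, model name, and probe prompts are placeholder assumptions, and a real measurement would need repeated trials to separate cache effects from ordinary load variance.

```python
import time
import requests

# Assumed local vLLM instance exposing the OpenAI-compatible API.
VLLM_URL = "http://localhost:8000/v1/completions"

def time_to_first_token(prompt: str, model: str = "example-model") -> float:
    """Return seconds elapsed until the first streamed chunk arrives (TTFT)."""
    start = time.monotonic()
    with requests.post(
        VLLM_URL,
        json={"model": model, "prompt": prompt, "max_tokens": 1, "stream": True},
        stream=True,
        timeout=30,
    ) as resp:
        for line in resp.iter_lines():
            if line:  # first non-empty SSE line ~ first token emitted
                return time.monotonic() - start
    return float("inf")

# Hypothetical probes: a guessed prefix that may match another user's cached
# prompt chunk, and a control string of similar length that should not match.
guess = "The patient's diagnosis is"
control = "qv zx plgh wmrt koba dnse"
# Prior to vLLM 0.9.0, a consistently lower TTFT for `guess` suggests its
# prefix hit the shared prefix cache, leaking that another user sent it.
print("guess   TTFT:", time_to_first_token(guess))
print("control TTFT:", time_to_first_token(control))
```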
Mitigation
Mitigation for this issue is either not available, or the currently available options do not meet the Red Hat Product Security criteria, which comprise ease of use and deployment, applicability to a widespread installation base, and stability.
Affected Packages
Platform | Package | State | Recommendation | Release
---|---|---|---|---
Red Hat AI Inference Server | rhaiis/vllm-cuda-rhel9 | Fix deferred | |
Red Hat AI Inference Server | rhaiis/vllm-rocm-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-amd-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-aws-nvidia-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-azure-amd-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-azure-nvidia-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-gcp-nvidia-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-intel-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-nvidia-rhel9 | Fix deferred | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/instructlab-amd-rhel9 | Fix deferred | |
Additional Information
Status:
EPSS: 2.6 (Low)
CVSS3:
Related Vulnerabilities
Potential Timing Side-Channel Vulnerability in vLLM’s Chunk-Based Prefix Caching
EPSS: 2.6 (Low)
CVSS3: