Description
vLLM is an inference and serving engine for large language models (LLMs). Before version 0.11.0rc2, vLLM's API key support validated keys with a method vulnerable to a timing attack: a string comparison whose runtime grows with the number of leading characters the attacker has guessed correctly. By statistically analyzing response times across many attempts, an attacker can detect when the next character in the key sequence is correct. Deployments relying on vLLM's built-in API key validation are therefore vulnerable to authentication bypass using this technique. Version 0.11.0rc2 fixes the issue.
A flaw was found in vLLM’s API token authentication logic, where token comparisons were not performed in constant time. This weakness could allow an attacker to exploit timing differences to guess valid tokens and bypass authentication.
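As a minimal sketch of the pattern described above (the key and function names here are illustrative, not vLLM's actual code): Python's `==` on strings short-circuits at the first mismatching character, so comparison time correlates with the length of the correct prefix, while `hmac.compare_digest` examines the full input regardless of where a mismatch occurs.

```python
import hmac

EXPECTED_KEY = "s3cr3t-api-key"  # hypothetical server-side API key

def check_key_naive(provided: str) -> bool:
    # Vulnerable pattern: '==' compares character by character and returns
    # at the first mismatch, so runtime grows with the correctly guessed
    # prefix -- the timing side channel described in the advisory.
    return provided == EXPECTED_KEY

def check_key_constant_time(provided: str) -> bool:
    # Fixed pattern: hmac.compare_digest (equivalently,
    # secrets.compare_digest) takes time independent of where the
    # inputs differ, leaking no prefix information.
    return hmac.compare_digest(provided.encode(), EXPECTED_KEY.encode())
```

Both functions return the same boolean result; only their timing behavior differs, which is why the flaw is invisible to functional testing.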
Statement
The Red Hat security team has rated the severity of this issue as Important. The vulnerability is remotely exploitable without authentication or user interaction and can result in authentication bypass. The root cause was the use of a non-constant-time string comparison, which leaked timing information. Successful exploitation could lead to unauthorized access to APIs and sensitive resources.
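A hedged sketch of what a corrected bearer-token check might look like (the `authorize` helper and configured key below are hypothetical illustrations, not vLLM's actual implementation): the token is extracted from the `Authorization` header and compared in constant time, so a rejection takes the same time whether zero or all-but-one characters were correct.

```python
import hmac

API_KEY = "example-key"  # hypothetical configured API key

def authorize(headers: dict) -> bool:
    # Illustrative bearer-auth check: parse "Authorization: Bearer <token>"
    # and validate the token without leaking timing information.
    auth = headers.get("authorization", "")
    if not auth.startswith("Bearer "):
        return False
    token = auth[len("Bearer "):]
    # Constant-time comparison closes the side channel described above.
    return hmac.compare_digest(token.encode(), API_KEY.encode())
```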
Mitigation
Mitigation for this issue is either not available, or the currently available options do not meet the Red Hat Product Security criteria of ease of use and deployment, applicability to a widespread installation base, or stability.
Affected Packages
| Platform | Package | State | Advisory | Release |
|---|---|---|---|---|
| Red Hat AI Inference Server | rhaiis/vllm-spyre-rhel9 | Will not fix | ||
| Red Hat AI Inference Server | rhaiis/vllm-tpu-rhel9 | Will not fix | ||
| Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-amd-rhel9 | Will not fix | ||
| Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-aws-nvidia-rhel9 | Will not fix | ||
| Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-azure-amd-rhel9 | Will not fix | ||
| Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-azure-nvidia-rhel9 | Will not fix | ||
| Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-gcp-nvidia-rhel9 | Will not fix | ||
| Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-intel-rhel9 | Will not fix | ||
| Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-nvidia-rhel9 | Will not fix | ||
| Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/instructlab-amd-rhel9 | Will not fix | | |
Additional Information
Status:
CVSS3 score: 7.5 (High)
Related Vulnerabilities
vLLM is vulnerable to timing attack at bearer auth
A vulnerability in the AuthenticationMiddleware class of vLLM, a library for working with large language models (LLMs), allowing an attacker to bypass security restrictions and disclose protected information.
CVSS3 score: 7.5 (High)