Description
vLLM is an inference and serving engine for large language models (LLMs). The SSRF protection fix for CVE-2026-24779, added in 0.15.1, can be bypassed in the load_from_url_async method due to inconsistent URL parsing behavior between the validation layer and the actual HTTP client. The SSRF fix uses urllib3.util.parse_url() to validate and extract the hostname from user-provided URLs. However, load_from_url_async uses aiohttp for making the actual HTTP requests, and aiohttp internally uses the yarl library for URL parsing. This vulnerability is fixed in 0.17.0.
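The bug class described above, where the validation layer and the HTTP client disagree about which host a URL points at, can be illustrated with a small self-contained sketch. The payload and the prefix check below are hypothetical illustrations using only Python's standard library; they are not vLLM's actual code, nor the actual bypass payload for this CVE.

```python
from urllib.parse import urlsplit

# Hypothetical illustration of a validation/client parsing mismatch:
# a naive prefix check approves the URL, while an RFC 3986-style
# parser treats everything before "@" as userinfo and would direct
# the request to a different host.
url = "http://trusted.example@evil.example/model.bin"

# Naive "validation layer": a string prefix check passes.
naive_ok = url.startswith("http://trusted.example")

# What a real URL parser (as used by an HTTP client) sees as the host.
real_host = urlsplit(url).hostname

print(naive_ok)   # True -> the request is allowed
print(real_host)  # evil.example -> but it goes to a different host
```

Any two parsers that disagree on edge cases (userinfo, backslashes, unusual encodings) can open the same gap, which is why validating with one library and fetching with another is fragile.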
A flaw was found in vLLM, an inference and serving engine for large language models (LLMs). A remote attacker can exploit this Server-Side Request Forgery (SSRF) bypass vulnerability in the load_from_url_async method. The flaw occurs because the URL validation and the actual HTTP request handling use different parsing libraries, leading to inconsistencies. This allows an attacker to bypass existing SSRF protections, potentially leading to the disclosure of sensitive information from internal network resources.
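As a generic defensive pattern (a sketch, not vLLM's actual fix), the check below uses Python's standard library to reject URLs with non-HTTP schemes or literal IP hosts in loopback, private, link-local, reserved, or multicast ranges; is_safe_url is a hypothetical helper name. A robust implementation would also validate with the exact parser the HTTP client uses and re-check every address after DNS resolution, since the inconsistency described above lives precisely in that gap.

```python
import ipaddress
from urllib.parse import urlsplit

def is_safe_url(url: str) -> bool:
    """Hypothetical SSRF pre-check: reject non-HTTP schemes and
    literal IP hosts in internal ranges. Hostnames must still be
    re-checked after DNS resolution before connecting."""
    parts = urlsplit(url)
    if parts.scheme not in ("http", "https"):
        return False
    host = parts.hostname
    if not host:
        return False
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        # Not an IP literal: resolve it and re-run the IP check on
        # every resolved address (omitted in this sketch).
        return True
    return not (ip.is_loopback or ip.is_private or ip.is_link_local
                or ip.is_reserved or ip.is_multicast)

print(is_safe_url("http://169.254.169.254/latest/meta-data/"))  # False
print(is_safe_url("http://127.0.0.1:8000/"))                    # False
print(is_safe_url("file:///etc/passwd"))                        # False
print(is_safe_url("http://8.8.8.8/model.bin"))                  # True
```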
Statement
This is an IMPORTANT vulnerability. The vLLM component, as used in Red Hat AI Inference Server and Red Hat OpenShift AI, is affected by a Server-Side Request Forgery (SSRF) protection bypass. This flaw allows an attacker to circumvent existing SSRF safeguards due to inconsistent URL parsing between the validation layer and the underlying HTTP client when the load_from_url_async method is utilized.
Mitigation
Mitigation for this issue is either not available or the currently available options do not meet the Red Hat Product Security criteria comprising ease of use and deployment, applicability to widespread installation base, or stability.
Affected Packages
| Platform | Package | State | Recommendation | Release |
|---|---|---|---|---|
| Red Hat AI Inference Server | rhaiis-preview/vllm-cuda-rhel9 | Affected | | |
| Red Hat AI Inference Server | rhaiis/vllm-cuda-rhel9 | Affected | | |
| Red Hat AI Inference Server | rhaiis/vllm-rocm-rhel9 | Affected | | |
| Red Hat AI Inference Server | rhaiis/vllm-spyre-rhel9 | Affected | | |
| Red Hat AI Inference Server | rhaiis/vllm-tpu-rhel9 | Affected | | |
| Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-aws-cuda-rhel9 | Affected | | |
| Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-azure-cuda-rhel9 | Affected | | |
| Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-cuda-rhel9 | Affected | | |
| Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-gcp-cuda-rhel9 | Affected | | |
| Red Hat OpenShift AI (RHOAI) | rhoai/odh-kserve-agent-rhel9 | Out of support scope | | |
CVSS3: 7.1 High