exploitDog

CVE-2026-25960

Published: Mar 9, 2026
Source: redhat
CVSS3: 7.1
EPSS: Low

Description

vLLM is an inference and serving engine for large language models (LLMs). The SSRF protection fix for CVE-2026-24779, added in 0.15.1, can be bypassed in the load_from_url_async method due to inconsistent URL parsing behavior between the validation layer and the actual HTTP client. The SSRF fix uses urllib3.util.parse_url() to validate and extract the hostname from user-provided URLs. However, load_from_url_async uses aiohttp for making the actual HTTP requests, and aiohttp internally uses the yarl library for URL parsing. This vulnerability is fixed in 0.17.0.

A flaw was found in vLLM, an inference and serving engine for large language models (LLMs). A remote attacker can exploit this Server-Side Request Forgery (SSRF) bypass vulnerability in the load_from_url_async method. The flaw occurs because the URL validation and the actual HTTP request handling use different parsing libraries, leading to inconsistencies. This allows an attacker to bypass existing SSRF protections, potentially leading to the disclosure of sensitive information from internal network resources.
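The root cause is a parser differential: the component that validates a URL and the component that actually connects can extract different hosts from the same string. The sketch below is illustrative only and uses nothing from vLLM — the real flaw involves urllib3.util.parse_url() vs. aiohttp's yarl, while this stdlib example pairs a deliberately naive validator (a hypothetical helper) with Python's urlsplit() to show the same class of disagreement around the userinfo `@` separator.

```python
from urllib.parse import urlsplit

# Hypothetical allowlist for illustration; not from the vLLM codebase.
ALLOWED_HOSTS = {"models.example.com"}

def naive_validate(url: str) -> bool:
    # Naive "validation": take the authority after '://' and read the text
    # BEFORE the first '@' as the host. A real parser treats that text as
    # userinfo, not the host.
    authority = url.split("://", 1)[1].split("/", 1)[0]
    return authority.split("@")[0] in ALLOWED_HOSTS

def client_host(url: str) -> str:
    # What an HTTP client actually connects to: userinfo is stripped and
    # the host is the text AFTER the '@' in the authority.
    return urlsplit(url).hostname

crafted = "http://models.example.com@169.254.169.254/latest/meta-data/"
print(naive_validate(crafted))   # True  - the validator is fooled
print(client_host(crafted))      # '169.254.169.254' - the client goes here
```

When validation and request handling disagree like this, an attacker who controls the URL can pass the check yet steer the request at internal targets such as a cloud metadata endpoint, which is exactly the SSRF-bypass pattern described above.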

Report

This is an IMPORTANT vulnerability. The vLLM component, as used in Red Hat AI Inference Server and Red Hat OpenShift AI, is affected by a Server-Side Request Forgery (SSRF) protection bypass. This flaw allows an attacker to circumvent existing SSRF safeguards due to inconsistent URL parsing between the validation layer and the underlying HTTP client when the load_from_url_async method is used.

Mitigation

Mitigation for this issue is either not available or the currently available options do not meet the Red Hat Product Security criteria comprising ease of use and deployment, applicability to widespread installation base, or stability.
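Where upgrading is not yet possible, a common defense-in-depth pattern against SSRF is to resolve the host the client will actually use and reject private, loopback, link-local, and reserved destinations before any request is made. The sketch below is not Red Hat's or vLLM's mitigation — it is a generic stdlib-only check under the assumption that the caller validates with the same hostname the HTTP client will connect to.

```python
import ipaddress
import socket
from urllib.parse import urlsplit

def resolve_and_check(url: str) -> str:
    """Return the URL's host if every resolved address is publicly
    routable; raise ValueError otherwise. Illustrative helper only."""
    host = urlsplit(url).hostname
    if host is None:
        raise ValueError("URL has no host")
    # Resolve with the same name the client would use; numeric hosts
    # (e.g. 127.0.0.1) are handled without a DNS lookup.
    for info in socket.getaddrinfo(host, None):
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            raise ValueError(f"blocked destination: {ip}")
    return host

# A loopback target is rejected before any connection is attempted:
# resolve_and_check("http://127.0.0.1/")  -> ValueError
```

Note that resolve-then-connect checks remain subject to time-of-check/time-of-use issues such as DNS rebinding; pinning the resolved address for the actual connection closes that gap.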

Affected packages

Platform | Package | State
Red Hat AI Inference Server | rhaiis-preview/vllm-cuda-rhel9 | Affected
Red Hat AI Inference Server | rhaiis/vllm-cuda-rhel9 | Affected
Red Hat AI Inference Server | rhaiis/vllm-rocm-rhel9 | Affected
Red Hat AI Inference Server | rhaiis/vllm-spyre-rhel9 | Affected
Red Hat AI Inference Server | rhaiis/vllm-tpu-rhel9 | Affected
Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-aws-cuda-rhel9 | Affected
Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-azure-cuda-rhel9 | Affected
Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-cuda-rhel9 | Affected
Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-gcp-cuda-rhel9 | Affected
Red Hat OpenShift AI (RHOAI) | rhoai/odh-kserve-agent-rhel9 | Out of support scope


Additional information

Severity: Important
Weakness: CWE-474
Reference: https://bugzilla.redhat.com/show_bug.cgi?id=2445892 — vLLM: Server-Side Request Forgery bypass via inconsistent URL parsing

EPSS

Percentile: 5%
Score: 0.00021 (Low)

CVSS3: 7.1 (High)

Related vulnerabilities

CVSS3: 7.1
nvd
19 days ago

vLLM is an inference and serving engine for large language models (LLMs). The SSRF protection fix for CVE-2026-24779, added in 0.15.1, can be bypassed in the load_from_url_async method due to inconsistent URL parsing behavior between the validation layer and the actual HTTP client. The SSRF fix uses urllib3.util.parse_url() to validate and extract the hostname from user-provided URLs. However, load_from_url_async uses aiohttp for making the actual HTTP requests, and aiohttp internally uses the yarl library for URL parsing. This vulnerability is fixed in 0.17.0.

CVSS3: 7.1
debian
19 days ago

vLLM is an inference and serving engine for large language models (LLM ...

CVSS3: 5.4
github
19 days ago

vLLM has SSRF Protection Bypass
