exploitDog
CVE-2026-24779

Published: Jan 27, 2026
Source: redhat
CVSS3: 7.1

Description

vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.14.1, a Server-Side Request Forgery (SSRF) vulnerability exists in the MediaConnector class within the vLLM project's multimodal feature set. The load_from_url and load_from_url_async methods obtain and process media from URLs provided by users, using different Python parsing libraries when restricting the target host. These two parsing libraries have different interpretations of backslashes, which allows the host name restriction to be bypassed. This allows an attacker to coerce the vLLM server into making arbitrary requests to internal network resources. This vulnerability is particularly critical in containerized environments like llm-d, where a compromised vLLM pod could be used to scan the internal network, interact with other pods, and potentially cause denial of service or access sensitive data. For example, an attacker could make the vLLM pod send malicious requests to an internal llm-d management endpoint, leading to system instability by falsely reporting metrics like the KV cache state. Version 0.14.1 contains a patch for the issue.

A flaw was found in vLLM, an inference and serving engine for large language models (LLMs). A Server-Side Request Forgery (SSRF) vulnerability exists in the MediaConnector class, specifically within the load_from_url and load_from_url_async methods. An attacker can exploit differing interpretations of backslashes by Python parsing libraries used for host restrictions to bypass these restrictions. This allows the attacker to force the vLLM server to make arbitrary requests to internal network resources, potentially leading to information disclosure, denial of service, or unauthorized access within containerized environments.
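The bypass hinges on two URL parsers disagreeing about backslashes: WHATWG-style parsers (the behavior browsers follow) treat `\` as equivalent to `/` and end the authority there, while Python's RFC 3986-based `urllib.parse` leaves the backslash inside the authority. A minimal sketch of the divergence (the hostnames are hypothetical, and this does not reproduce vLLM's actual MediaConnector code):

```python
from urllib.parse import urlsplit

# A crafted URL: an RFC 3986 parser keeps the backslash inside the
# authority, so everything after the last '@' becomes the host.
url = "http://trusted.example\\@evil.example/image.png"

rfc_host = urlsplit(url).hostname
print(rfc_host)  # evil.example -- the host an RFC-based HTTP client actually contacts

# A WHATWG-style parser normalizes '\' to '/', ending the authority
# early, so a check built on that view sees only the "trusted" host.
whatwg_view = urlsplit(url.replace("\\", "/")).hostname
print(whatwg_view)  # trusted.example -- the host an allowlist check might validate

# The allowlist check and the outgoing request see different hosts: SSRF.
```

When the host restriction is evaluated with one interpretation and the request is issued with the other, the restriction is bypassed for any URL containing a backslash before an `@` sign.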

Statement

This Important-severity Server-Side Request Forgery (SSRF) vulnerability in vLLM's MediaConnector allows an attacker to bypass host restrictions when the server processes user-provided media URLs, coercing the vLLM server into making arbitrary requests to internal network resources. The impact is greatest in containerized deployments, including Red Hat AI Inference Server, Red Hat Enterprise Linux AI, and Red Hat OpenShift AI, where it could facilitate internal network reconnaissance and unauthorized access to other services.

Mitigation

To mitigate this issue, restrict network access to the vLLM service to only trusted clients. Implement strict network segmentation for vLLM pods in containerized environments to limit potential lateral movement. Ensure that vLLM instances are not exposed to untrusted external networks without proper access controls and input validation at the perimeter.
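Until an upgrade to 0.14.1 is possible, deployments that must accept user-supplied media URLs can additionally validate them with a single parser after rejecting the characters the two parser families disagree on. A hedged sketch of such perimeter validation (the allowlist and function name are illustrative, not part of vLLM):

```python
from urllib.parse import urlsplit

ALLOWED_HOSTS = {"media.example.com"}  # illustrative allowlist

def is_allowed_media_url(url: str) -> bool:
    """Accept only http(s) URLs whose host is allowlisted, refusing any
    URL whose interpretation could differ between parsers."""
    # Backslashes and whitespace are where RFC 3986 and WHATWG parsers
    # diverge; refuse them outright instead of trying to normalize.
    if any(c in url for c in "\\ \t\r\n"):
        return False
    parts = urlsplit(url)
    if parts.scheme not in ("http", "https"):
        return False
    return parts.hostname in ALLOWED_HOSTS

print(is_allowed_media_url("https://media.example.com/a.png"))       # True
print(is_allowed_media_url("http://media.example.com\\@10.0.0.5/"))  # False
```

Rejecting ambiguous characters before parsing, rather than normalizing them, avoids guessing which interpretation the downstream HTTP client will use.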

Affected packages

Platform | Package | State
Red Hat AI Inference Server | rhaiis/vllm-spyre-rhel9 | Affected
Red Hat AI Inference Server | rhaiis/vllm-tpu-rhel9 | Affected
Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-aws-cuda-rhel9 | Affected
Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-azure-cuda-rhel9 | Affected
Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-cuda-rhel9 | Affected
Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-gcp-cuda-rhel9 | Affected
Red Hat OpenShift AI (RHOAI) | rhoai/odh-kserve-agent-rhel9 | Not affected
Red Hat OpenShift AI (RHOAI) | rhoai/odh-kserve-controller-rhel9 | Not affected
Red Hat OpenShift AI (RHOAI) | rhoai/odh-kserve-router-rhel9 | Not affected
Red Hat OpenShift AI (RHOAI) | rhoai/odh-kserve-storage-initializer-rhel9 | Not affected


Additional information

Severity:
Important
Weakness:
CWE-918
https://bugzilla.redhat.com/show_bug.cgi?id=2433624
vLLM: Server-Side Request Forgery allows internal network access

CVSS3: 7.1 (High)

Related vulnerabilities

CVSS3: 7.1
nvd
about 2 months ago

vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.14.1, a Server-Side Request Forgery (SSRF) vulnerability exists in the `MediaConnector` class within the vLLM project's multimodal feature set. The load_from_url and load_from_url_async methods obtain and process media from URLs provided by users, using different Python parsing libraries when restricting the target host. These two parsing libraries have different interpretations of backslashes, which allows the host name restriction to be bypassed. This allows an attacker to coerce the vLLM server into making arbitrary requests to internal network resources. This vulnerability is particularly critical in containerized environments like `llm-d`, where a compromised vLLM pod could be used to scan the internal network, interact with other pods, and potentially cause denial of service or access sensitive data. For example, an attacker could make the vLLM pod send malicious requests to an internal `

CVSS3: 7.1
debian
about 2 months ago

vLLM is an inference and serving engine for large language models (LLM ...

CVSS3: 7.1
github
about 2 months ago

vLLM vulnerable to Server-Side Request Forgery (SSRF) through MediaConnector
