exploitDog

CVE-2025-66448

Published: Dec 1, 2025
Source: redhat
CVSS3: 7.5
EPSS: Low

Description

vLLM is an inference and serving engine for large language models (LLMs). Prior to 0.11.1, vllm has a critical remote code execution vector in a config class named Nemotron_Nano_VL_Config. When vllm loads a model config that contains an auto_map entry, the config class resolves that mapping with get_class_from_dynamic_module(...) and immediately instantiates the returned class. This fetches and executes Python from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly sets trust_remote_code=False in vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repo whose config.json points via auto_map to a separate malicious backend repo; loading the frontend will silently run the backend’s code on the victim host. This vulnerability is fixed in 0.11.1.

A remote code execution vulnerability has been identified in vLLM. An attacker can exploit a weakness in the model loading process to silently fetch and run unauthorized, malicious Python code on the host system. This happens because the engine mistakenly executes code from a remote repository referenced in a model's configuration, even when explicit security measures are set to prevent it.

Report

This vulnerability is rated Important for Red Hat because vLLM, when deployed in a Red Hat environment, is susceptible to remote code execution. An attacker can craft a malicious model configuration that, when loaded, fetches and executes arbitrary Python code from a remote repository, even if trust_remote_code is explicitly set to False.

Mitigation

Red Hat has investigated whether a practical mitigation exists for this issue and has not been able to identify one. Please update the affected package as soon as possible.
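For deployments where vLLM is installed from PyPI rather than shipped as a Red Hat container image, updating means moving to 0.11.1 or later. A quick stdlib-only version check might look like the following sketch; the helper name and bare-tuple parsing are assumptions, and `packaging.version.parse` would be more robust in practice.

```python
# Sketch: report whether an installed vLLM version already contains the
# fix for CVE-2025-66448 (first fixed release: 0.11.1).
def is_fixed(version: str, fixed=(0, 11, 1)) -> bool:
    # Compare the first three numeric components against the fixed release.
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts >= fixed

print(is_fixed("0.11.0"))  # False -> vulnerable, upgrade required
print(is_fixed("0.11.1"))  # True  -> contains the fix
```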

Affected packages

Platform | Package | Status
Red Hat AI Inference Server | rhaiis/vllm-spyre-rhel9 | Will not fix
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-amd-rhel9 | Will not fix
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-aws-nvidia-rhel9 | Will not fix
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-azure-amd-rhel9 | Will not fix
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-azure-nvidia-rhel9 | Will not fix
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-gcp-nvidia-rhel9 | Will not fix
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-intel-rhel9 | Will not fix
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-nvidia-rhel9 | Will not fix
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/instructlab-amd-rhel9 | Will not fix
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/instructlab-intel-rhel9 | Will not fix


Additional information

Status:

Important
Flaw:
CWE-94
https://bugzilla.redhat.com/show_bug.cgi?id=2418152
vllm: vLLM: Remote Code Execution via malicious model configuration

EPSS

Percentile: 52%
Score: 0.00291 (Low)

CVSS3: 7.5 (High)

Related vulnerabilities

CVSS3: 7.1
nvd
4 months ago


CVSS3: 7.1
debian
4 months ago

vLLM is an inference and serving engine for large language models (LLM ...

CVSS3: 7.1
github
4 months ago

vLLM vulnerable to remote code execution via transformers_utils/get_config
