CVE-2025-66448
vLLM is an inference and serving engine for large language models (LLMs). Prior to 0.11.1, vllm has a critical remote code execution vector in a config class named Nemotron_Nano_VL_Config. When vllm loads a model config that contains an auto_map entry, the config class resolves that mapping with get_class_from_dynamic_module(...) and immediately instantiates the returned class. This fetches and executes Python from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly sets trust_remote_code=False in vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repo whose config.json points via auto_map to a separate malicious backend repo; loading the frontend will silently run the backend’s code on the victim host. This vulnerability is fixed in 0.11.1.
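The mechanism is easiest to see in code. Below is a minimal, hypothetical sketch of the vulnerable pattern (simplified, not vLLM's actual source): a nested config dict carrying an auto_map entry is resolved with transformers' get_class_from_dynamic_module and instantiated, which downloads and imports Python from whatever repository the auto_map string names, with no trust_remote_code gate on the path. The class and field names here are illustrative.

```python
# Hypothetical sketch of the vulnerable pattern; names are illustrative,
# not vLLM's actual source.
from transformers import PretrainedConfig
from transformers.dynamic_module_utils import get_class_from_dynamic_module

# An attacker-controlled frontend repo's config.json might contain:
#   "vision_config": {
#       "auto_map": {"AutoConfig": "attacker/backend--configuration_x.XConfig"}
#   }
# where "attacker/backend" is a separate repo hosting the malicious module.

class NanoVLConfigSketch(PretrainedConfig):
    def __init__(self, vision_config=None, **kwargs):
        super().__init__(**kwargs)
        if isinstance(vision_config, dict) and "auto_map" in vision_config:
            # e.g. "attacker/backend--configuration_x.XConfig"
            class_ref = vision_config["auto_map"]["AutoConfig"]
            # This call downloads configuration_x.py from the repo named in
            # class_ref and imports it: any module-level code in that file
            # executes on this host. Note there is no trust_remote_code
            # check anywhere on this path.
            cfg_cls = get_class_from_dynamic_module(
                class_ref, kwargs.get("_name_or_path", "")
            )
            vision_config = cfg_cls(**vision_config)
        self.vision_config = vision_config
```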
GHSA-8fr4-5q9j-m8gm
vLLM vulnerable to remote code execution via transformers_utils/get_config
| Vulnerability | CVSS | EPSS | Published |
|---|---|---|---|
| CVE-2025-66448 | CVSS3: 7.1 | 0% (Low) | 2 months ago |
| GHSA-8fr4-5q9j-m8gm | CVSS3: 7.1 | 0% (Low) | 2 months ago |
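The advisory's fix is to upgrade to vLLM 0.11.1 or later. For deployments that cannot upgrade immediately, one possible stopgap is to inspect an untrusted repository's config.json for auto_map entries before loading it, since those entries are what trigger the remote fetch. A minimal sketch, assuming the huggingface_hub client is available; the function name and refusal policy are my own, not a vLLM API:

```python
# A defensive pre-flight check (a sketch, not a vLLM or transformers API):
# refuse repos whose config.json carries an auto_map entry anywhere,
# since that is what triggers the remote code fetch even when
# trust_remote_code=False on vulnerable vLLM versions (< 0.11.1).
import json
from huggingface_hub import hf_hub_download

def has_auto_map(repo_id: str) -> bool:
    """Return True if any (possibly nested) config section has auto_map."""
    path = hf_hub_download(repo_id=repo_id, filename="config.json")
    with open(path) as f:
        config = json.load(f)

    def walk(node) -> bool:
        if isinstance(node, dict):
            return "auto_map" in node or any(walk(v) for v in node.values())
        if isinstance(node, list):
            return any(walk(v) for v in node)
        return False

    return walk(config)

# Usage (repo name is hypothetical):
# if has_auto_map("some-org/some-model"):
#     raise RuntimeError("config.json contains auto_map; refusing to load")
```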