CVE-2026-22807

Published: Jan 21, 2026
Source: debian

Description

vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, vLLM loads Hugging Face `auto_map` dynamic modules during model resolution without gating on `trust_remote_code`, allowing attacker-controlled Python code in a model repo/path to execute at server startup. An attacker who can influence the model repo/path (local directory or remote Hugging Face repo) can achieve arbitrary code execution on the vLLM host during model load. This happens before any request handling and does not require API access. Version 0.14.0 fixes the issue.
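The root cause is that model resolution imports Hugging Face `auto_map` dynamic modules unconditionally instead of gating that step on `trust_remote_code`. The sketch below illustrates the bug class using the standard `transformers` dynamic-module mechanism; the function name and structure are illustrative assumptions, not vLLM's actual internals.

```python
# Illustrative sketch only: load_model_config is a hypothetical helper, not
# vLLM code. It shows why the resolution step must be gated: resolving an
# `auto_map` entry imports and executes Python shipped inside the model repo.

from transformers import AutoConfig

def load_model_config(model_path: str, trust_remote_code: bool = False):
    # An attacker-controlled repo's config.json can contain, for example:
    #   "auto_map": {"AutoConfig": "malicious_module.EvilConfig"}
    # With trust_remote_code=True, transformers fetches and imports
    # malicious_module.py from that repo at load time, i.e. at server
    # startup, before any API request is handled.
    return AutoConfig.from_pretrained(
        model_path,
        trust_remote_code=trust_remote_code,  # the gate missing in 0.10.1 through 0.13.x
    )
```

Operators on affected versions can mitigate by upgrading to 0.14.0 and by loading models only from trusted repos or local paths they control.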

Packages

Package  Status  Fixed version  Release  Type
vllm     itp     —              —        package

Related vulnerabilities

CVSS3: 8.8
nvd
17 days ago

vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, vLLM loads Hugging Face `auto_map` dynamic modules during model resolution without gating on `trust_remote_code`, allowing attacker-controlled Python code in a model repo/path to execute at server startup. An attacker who can influence the model repo/path (local directory or remote Hugging Face repo) can achieve arbitrary code execution on the vLLM host during model load. This happens before any request handling and does not require API access. Version 0.14.0 fixes the issue.

CVSS3: 8.8
github
17 days ago

vLLM affected by RCE via auto_map dynamic module loading during model initialization