Description
vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
A flaw was found in vLLM, an inference and serving engine for large language models (LLMs). Two model implementation files hardcode `trust_remote_code=True` when loading sub-components. This bypasses the user's explicit `--trust-remote-code=False` security opt-out, allowing a remote attacker to achieve remote code execution through malicious model repositories.
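The flaw pattern described above can be sketched as follows. This is an illustrative reconstruction, not the actual vLLM source: the function names and return shape are hypothetical, and stand in for the sub-component loading paths in the two affected model implementation files.

```python
# Illustrative sketch of the flaw pattern (hypothetical names, not vLLM code).
# The engine is started with an explicit user setting, e.g.
# --trust-remote-code=False, which should propagate to every loader call.

def load_subcomponent_vulnerable(repo_id: str, user_trust_remote_code: bool) -> dict:
    # BUG pattern: the user's opt-out is silently discarded because the
    # model file hardcodes trust_remote_code=True for the sub-component.
    effective = True  # hardcoded, ignores user_trust_remote_code
    return {"repo": repo_id, "trust_remote_code": effective}

def load_subcomponent_patched(repo_id: str, user_trust_remote_code: bool) -> dict:
    # FIX pattern: propagate the user's explicit choice to the loader,
    # so a malicious repository cannot execute code when trust is disabled.
    return {"repo": repo_id, "trust_remote_code": user_trust_remote_code}

if __name__ == "__main__":
    # User explicitly opted out of remote code trust:
    vuln = load_subcomponent_vulnerable("some/model", user_trust_remote_code=False)
    fixed = load_subcomponent_patched("some/model", user_trust_remote_code=False)
    print(vuln["trust_remote_code"], fixed["trust_remote_code"])  # True False
```

In the vulnerable variant, code from the model repository would still be trusted and executed despite the opt-out; the patched variant honors the user's setting.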
Statement
This is an Important vulnerability in vLLM, as shipped in Red Hat AI Inference Server and Red Hat OpenShift AI. The flaw allows remote code execution due to vLLM hardcoding `trust_remote_code=True` when loading sub-components, which bypasses the user's explicit `--trust-remote-code=False` security opt-out. This can lead to exploitation through malicious model repositories.
Mitigation
A mitigation for this issue is either not available, or the currently available options do not meet the Red Hat Product Security criteria for ease of use and deployment, applicability to a widespread installation base, or stability.
Affected Packages
| Platform | Package | State | Recommendation | Release |
|---|---|---|---|---|
| Red Hat AI Inference Server | rhaiis-preview/vllm-cuda-rhel9 | Affected | | |
| Red Hat AI Inference Server | rhaiis/vllm-cpu-rhel9 | Affected | | |
| Red Hat AI Inference Server | rhaiis/vllm-cuda-rhel9 | Affected | | |
| Red Hat AI Inference Server | rhaiis/vllm-neuron-rhel9 | Affected | | |
| Red Hat AI Inference Server | rhaiis/vllm-rocm-rhel9 | Affected | | |
| Red Hat AI Inference Server | rhaiis/vllm-spyre-rhel9 | Affected | | |
| Red Hat AI Inference Server | rhaiis/vllm-tpu-rhel9 | Affected | | |
| Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-aws-cuda-rhel9 | Affected | | |
| Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-azure-cuda-rhel9 | Affected | | |
| Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-azure-rocm-rhel9 | Affected | | |
Additional Information
Status:
CVSS3 score: 8.8 (High)
Related Vulnerabilities
vLLM has Hardcoded Trust Override in Model Files Enables RCE Despite Explicit User Opt-Out (CVSS3: 8.8 High)