
CVE-2026-27893

Published: Mar 26, 2026
Source: redhat
CVSS3: 8.8

Description

vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode trust_remote_code=True when loading sub-components, bypassing the user's explicit --trust-remote-code=False security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
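The advisory gives the affected range as versions from 0.10.1 up to but not including 0.18.0. A minimal sketch of checking whether an installed version string falls in that range (simple tuple comparison; assumes plain dotted versions with no pre-release suffixes):

```python
# Check whether a vLLM version string falls in the affected range
# [0.10.1, 0.18.0) stated in the advisory.

def parse(version: str) -> tuple:
    # "0.17.2" -> (0, 17, 2); naive split, no suffix handling.
    return tuple(int(part) for part in version.split("."))

def is_affected(version: str) -> bool:
    # Affected: introduced in 0.10.1, patched in 0.18.0.
    return parse("0.10.1") <= parse(version) < parse("0.18.0")
```

For example, `is_affected("0.17.2")` is true, while `is_affected("0.18.0")` is false because 0.18.0 carries the fix.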

A flaw was found in vLLM, an inference and serving engine for large language models (LLMs). Two model implementation files hardcode trust_remote_code=True when loading sub-components. This bypasses the user's explicit --trust-remote-code=False security opt-out, allowing a remote attacker to achieve remote code execution through malicious model repositories.
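The override pattern described above can be sketched as follows. This is a hypothetical illustration, not the actual vLLM code: the stand-in loader simply reports whether remote code would be trusted, showing how a hardcoded `trust_remote_code=True` discards the user's explicit opt-out while the patched pattern propagates it.

```python
# Hypothetical sketch of the flaw class (not actual vLLM source).

def load_subcomponent_vulnerable(loader, user_trust_remote_code: bool):
    # BUG pattern: the hardcoded True silently overrides the user's
    # --trust-remote-code=False opt-out.
    return loader(trust_remote_code=True)

def load_subcomponent_patched(loader, user_trust_remote_code: bool):
    # FIX pattern: the user's explicit setting is propagated to the
    # sub-component loader.
    return loader(trust_remote_code=user_trust_remote_code)

def fake_loader(trust_remote_code: bool) -> bool:
    # Stand-in for a model-component loader; returns whether remote
    # code from the model repository would be executed.
    return trust_remote_code
```

With the user opting out (`user_trust_remote_code=False`), the vulnerable path still ends up trusting remote code, which is what enables RCE via a malicious model repository.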

Statement

This is an Important vulnerability in vLLM, as shipped in Red Hat AI Inference Server and Red Hat OpenShift AI. The flaw allows remote code execution due to vLLM hardcoding trust_remote_code=True when loading sub-components, which bypasses the user's explicit --trust-remote-code=False security opt-out. This can lead to exploitation through malicious model repositories.

Mitigation

Mitigation for this issue is either not available, or the currently available options do not meet Red Hat Product Security criteria: ease of use and deployment, applicability to a widespread installation base, or stability.

Affected Packages

Platform | Package | State
Red Hat AI Inference Server | rhaiis-preview/vllm-cuda-rhel9 | Affected
Red Hat AI Inference Server | rhaiis/vllm-cpu-rhel9 | Affected
Red Hat AI Inference Server | rhaiis/vllm-cuda-rhel9 | Affected
Red Hat AI Inference Server | rhaiis/vllm-neuron-rhel9 | Affected
Red Hat AI Inference Server | rhaiis/vllm-rocm-rhel9 | Affected
Red Hat AI Inference Server | rhaiis/vllm-spyre-rhel9 | Affected
Red Hat AI Inference Server | rhaiis/vllm-tpu-rhel9 | Affected
Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-aws-cuda-rhel9 | Affected
Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-azure-cuda-rhel9 | Affected
Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-azure-rocm-rhel9 | Affected

Additional Information

Status: Important
Weakness: CWE-501
https://bugzilla.redhat.com/show_bug.cgi?id=2452055
vllm: Remote code execution due to hardcoded trust_remote_code setting

Related Vulnerabilities

CVSS3: 8.8
nvd
5 days ago

vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.

CVSS3: 8.8
debian
5 days ago

vLLM is an inference and serving engine for large language models (LLM ...

CVSS3: 8.8
github
4 days ago

vLLM: Hardcoded Trust Override in Model Files Enables RCE Despite Explicit User Opt-Out
