exploitDog


CVE-2026-22778

Published: Feb 2, 2026
Source: redhat
CVSS3: 9.8
EPSS: Low

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.8.3 up to but not including 0.14.1, when an invalid image is sent to vLLM's multimodal endpoint, PIL throws an error, and vLLM returns that error text to the client, leaking a heap address. With this leak, an attacker reduces the ASLR search space from roughly 4 billion guesses to about 8. The leak can be chained with a heap overflow in the JPEG2000 decoder in OpenCV/FFmpeg to achieve remote code execution. This vulnerability is fixed in 0.14.1.

A flaw was found in vLLM, an inference and serving engine for large language models (LLMs). A remote attacker can exploit this vulnerability by sending a specially crafted video URL to vLLM's multimodal endpoint. This action causes vLLM to leak a heap memory address, significantly reducing the effectiveness of Address Space Layout Randomization (ASLR). This information disclosure can then be chained with a heap overflow vulnerability to achieve remote code execution.
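The CWE-209 pattern described above — a decoder exception whose message embeds an object repr, and therefore a heap address, being echoed back to the client — can be sketched in plain Python. This is a simplified illustration, not vLLM's actual code: a stdlib `BytesIO` repr stands in for PIL's `UnidentifiedImageError`, and the handler names are hypothetical.

```python
import io
import re

def handle_upload_unsafe(data: bytes) -> str:
    """Vulnerable pattern: echo the raw decoder error back to the client."""
    buf = io.BytesIO(data)
    try:
        # Stand-in for PIL's UnidentifiedImageError, whose message embeds
        # the repr of the input buffer: "<_io.BytesIO object at 0x7f...>"
        raise ValueError(f"cannot identify image file {buf!r}")
    except ValueError as exc:
        return f"error: {exc}"  # heap address leaks to the client

def handle_upload_safe(data: bytes) -> str:
    """Fixed pattern: generic client-facing message; details stay in server logs."""
    buf = io.BytesIO(data)
    try:
        raise ValueError(f"cannot identify image file {buf!r}")
    except ValueError:
        return "error: invalid image data"  # no internal state exposed

leak = handle_upload_unsafe(b"\x00not-an-image")
print(re.findall(r"0x[0-9a-f]+", leak))  # one live heap address
print(handle_upload_safe(b"\x00not-an-image"))
```

The fix in 0.14.1 follows the same principle as the safe variant: client-visible errors must not include interpreter object reprs or other internal state.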

Report

This vulnerability is rated Critical rather than Important because it allows unauthenticated remote code execution without user interaction, ultimately leading to full compromise of the affected system. An attacker can supply a malicious video URL to a vulnerable vLLM inference endpoint, causing the service to automatically retrieve and process attacker-controlled media. During decoding, a heap overflow is triggered in the underlying video-processing stack, enabling corruption of heap memory and potential overwriting of control structures to execute arbitrary commands on the host.

In addition, an information-disclosure condition can leak memory addresses, significantly weakening ASLR protections and making exploitation more reliable when combined with the heap overflow. Successful exploitation compromises the confidentiality, integrity, and availability of the system and can impact deployments such as Red Hat AI Inference Server, Red Hat Enterprise Linux AI, and Red Hat OpenShift AI, thereby meeting Red Hat's criteria for Critical rather than Important severity.

Exploitation depends on CVE-2025-9951: processing attacker-controlled media can trigger the JPEG2000 decoder heap overflow, which can then be exploited within vLLM's video-handling pipeline to cause memory corruption and potentially achieve remote code execution.
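The advisory's "4 billion guesses to ~8" claim is an entropy argument: one disclosed heap pointer pins the randomized heap base up to a small set of fixed offsets. A back-of-the-envelope sketch, using the figures from the advisory text (illustrative, not measured here):

```python
import math

# Numbers taken from the advisory text, not measured here.
blind_guesses  = 4_000_000_000   # ~2^32 candidate addresses with ASLR intact
leaked_guesses = 8               # candidate base offsets once one pointer is known

print(f"without leak: ~{math.log2(blind_guesses):.1f} bits of entropy")
print(f"with leak:    ~{math.log2(leaked_guesses):.1f} bits of entropy")
# ~32 bits collapse to ~3, so the follow-on heap-overflow exploit needs
# only a handful of tries instead of billions.
```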

Mitigation

Mitigation for this issue is either not available or the currently available options do not meet the Red Hat Product Security criteria comprising ease of use and deployment, applicability to widespread installation base, or stability.

Affected packages

Platform | Package | State
Red Hat AI Inference Server | rhaiis/vllm-spyre-rhel9 | Affected
Red Hat AI Inference Server | rhaiis/vllm-tpu-rhel9 | Affected
Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-aws-cuda-rhel9 | Affected
Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-azure-cuda-rhel9 | Affected
Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-cuda-rhel9 | Affected
Red Hat Enterprise Linux AI (RHEL AI) 3 | rhelai3/bootc-gcp-cuda-rhel9 | Affected
Red Hat OpenShift AI (RHOAI) | rhoai/odh-kserve-agent-rhel9 | Not affected
Red Hat OpenShift AI (RHOAI) | rhoai/odh-kserve-controller-rhel9 | Not affected
Red Hat OpenShift AI (RHOAI) | rhoai/odh-kserve-router-rhel9 | Not affected
Red Hat OpenShift AI (RHOAI) | rhoai/odh-kserve-storage-initializer-rhel9 | Not affected


Additional information

Status: Critical
Weakness: CWE-209
https://bugzilla.redhat.com/show_bug.cgi?id=2436113 (vLLM: Remote code execution via invalid image processing in the multimodal endpoint)

EPSS

Percentile: 24%
Score: 0.00084
Low

CVSS3: 9.8 Critical

Related vulnerabilities

CVSS3: 9.8
nvd
about 2 months ago

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.8.3 up to but not including 0.14.1, when an invalid image is sent to vLLM's multimodal endpoint, PIL throws an error, and vLLM returns that error text to the client, leaking a heap address. With this leak, an attacker reduces the ASLR search space from roughly 4 billion guesses to about 8. The leak can be chained with a heap overflow in the JPEG2000 decoder in OpenCV/FFmpeg to achieve remote code execution. This vulnerability is fixed in 0.14.1.

CVSS3: 9.8
debian
about 2 months ago

vLLM is an inference and serving engine for large language models (LLM ...

CVSS3: 9.8
github
about 2 months ago

vLLM has RCE In Video Processing
