Total entries: 4
CVE-2025-62164
vLLM is an inference and serving engine for large language models (LLMs). In versions 0.10.2 up to but not including 0.11.1, a memory corruption vulnerability that could lead to a crash (denial of service) and potentially remote code execution (RCE) exists in the Completions API endpoint. When processing user-supplied prompt embeddings, the endpoint loads serialized tensors using torch.load() without sufficient validation. Due to a change introduced in PyTorch 2.8.0, sparse tensor integrity checks are disabled by default. As a result, maliciously crafted tensors can bypass internal bounds checks and trigger an out-of-bounds memory write during the call to to_dense(). This memory corruption can crash vLLM and potentially lead to code execution on the server hosting vLLM. This issue has been patched in version 0.11.1.
CVE-2025-62164
vLLM is an inference and serving engine for large language models (LLM ...
GHSA-mrw7-hf4f-83pf
vLLM deserialization vulnerability leading to DoS and potential RCE
BDU:2025-14675
A vulnerability in the Completions API component of the vLLM large language model (LLM) library that allows an attacker to cause a denial of service and execute arbitrary code
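The mechanism described in CVE-2025-62164 above — an unvalidated torch.load() of user-controlled prompt embeddings followed by to_dense() — lends itself to a simple defensive pattern on the loading side. The sketch below is illustrative only and is not vLLM's actual 0.11.1 patch; the function name load_prompt_embeds, the base64 payload encoding, and the dtype/shape expectations are assumptions made for this example.

```python
# Minimal defensive-loading sketch for CVE-2025-62164 (illustrative, NOT vLLM's patch).
import base64
import io

import torch


def load_prompt_embeds(b64_payload: str,
                       expected_dtype: torch.dtype = torch.float32) -> torch.Tensor:
    """Deserialize user-supplied prompt embeddings without trusting the payload."""
    raw = base64.b64decode(b64_payload)

    # weights_only=True restricts unpickling to tensors and basic containers,
    # blocking arbitrary-object deserialization through the pickle machinery.
    tensor = torch.load(io.BytesIO(raw), weights_only=True, map_location="cpu")

    if not isinstance(tensor, torch.Tensor):
        raise ValueError("payload did not deserialize to a tensor")

    # The out-of-bounds write in CVE-2025-62164 is triggered by a malformed
    # *sparse* tensor whose invariants PyTorch 2.8.0 no longer validates on
    # load; refusing anything that is not a plain dense (strided) tensor
    # avoids the dangerous to_dense() path entirely.
    if tensor.layout != torch.strided:
        raise ValueError(f"unsupported tensor layout: {tensor.layout}")

    # Shape/dtype expectations are assumptions for this sketch.
    if tensor.dtype != expected_dtype or tensor.dim() != 2:
        raise ValueError("unexpected dtype or shape for prompt embeddings")

    return tensor
```

If sparse payloads must be accepted, another hardening option worth evaluating is torch.sparse.check_sparse_tensor_invariants.enable(), which turns PyTorch's sparse invariant checking on process-wide; upgrading vLLM to 0.11.1 remains the primary fix.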
| Vulnerability | CVSS | EPSS | Published |
|---|---|---|---|
| CVE-2025-62164 | CVSS3: 8.8 | 0% (Low) | 3 months ago |
| CVE-2025-62164 | CVSS3: 8.8 | 0% (Low) | 3 months ago |
| GHSA-mrw7-hf4f-83pf | CVSS3: 8.8 | 0% (Low) | 3 months ago |
| BDU:2025-14675 | CVSS3: 8.8 | 0% (Low) | 3 months ago |