Description
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.8.0 and prior to 0.8.5 are affected by a critical performance vulnerability in the input preprocessing logic of the multimodal tokenizer. The code dynamically replaces placeholder tokens (e.g., <|audio_|>, <|image_|>) with repeated tokens based on precomputed lengths. Due to inefficient list concatenation operations, the algorithm exhibits quadratic time complexity (O(n²)), allowing malicious actors to trigger resource exhaustion via specially crafted inputs. This issue has been patched in version 0.8.5.
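To illustrate the pattern the description refers to, the following is a minimal sketch of quadratic placeholder expansion via repeated list concatenation. It is not the actual phi4mm preprocessing code; the function and parameter names are illustrative.

```python
def expand_placeholders_quadratic(token_ids, placeholder_id, repeat_len):
    """Hypothetical sketch of the O(n^2) pattern, not the vLLM source."""
    expanded = []
    for tok in token_ids:
        if tok == placeholder_id:
            # `expanded + [...]` allocates a fresh list and copies every
            # element accumulated so far, so the k-th step costs O(k).
            expanded = expanded + [placeholder_id] * repeat_len
        else:
            expanded = expanded + [tok]
    # Summed over all steps, total work grows quadratically with the
    # output length -- a crafted input with many placeholders can make
    # preprocessing consume far more CPU than the prompt size suggests.
    return expanded
```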
Statement
Red Hat products do not include an affected version of the vllm component and are therefore not affected by this vulnerability.
Mitigation
To mitigate this vulnerability:
- Precompute all placeholder positions and expansion lengths upfront.
- Replace dynamic list concatenation with a single preallocated array, as sketched below.
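A minimal sketch of this mitigation, assuming the expansion length for each placeholder id is known before the loop; `expand_placeholders_linear` and `repeat_lens` are hypothetical names, not the actual 0.8.5 patch:

```python
def expand_placeholders_linear(token_ids, repeat_lens):
    """Hypothetical sketch: precompute lengths, write into one buffer."""
    # Pass 1: compute the final output length from the known expansion
    # sizes (non-placeholder tokens expand to themselves, length 1).
    total = sum(repeat_lens.get(tok, 1) for tok in token_ids)
    out = [0] * total  # single preallocated array
    i = 0
    # Pass 2: write each token (or its repeated expansion) in place.
    for tok in token_ids:
        n = repeat_lens.get(tok, 1)
        out[i:i + n] = [tok] * n
        i += n
    return out
```

Two passes keep total work proportional to the output length, so an input packed with placeholders can no longer force quadratic preprocessing time.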
Affected Packages
Platform | Package | State | Recommendation | Release |
---|---|---|---|---|
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-amd-rhel9 | Not affected | | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-aws-nvidia-rhel9 | Not affected | | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-azure-amd-rhel9 | Not affected | | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-azure-nvidia-rhel9 | Not affected | | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-gcp-nvidia-rhel9 | Not affected | | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-ibm-nvidia-rhel9 | Not affected | | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-intel-rhel9 | Not affected | | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-nvidia-rhel9 | Not affected | | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/instructlab-amd-rhel9 | Not affected | | |
Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/instructlab-intel-rhel9 | Not affected | | |
Additional Information

CVSS3: 6.5 Medium
Related Vulnerabilities
phi4mm: Quadratic Time Complexity in Input Token Processing leads to denial of service
CVSS3: 6.5 Medium