Description
vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before 0.11.1, users can crash the vLLM engine serving multimodal models by passing multimodal embedding inputs with the correct ndim but an incorrect shape (e.g., a wrong hidden dimension), regardless of whether the model is intended to support such inputs (as defined in the Supported Models page). This issue has been patched in version 0.11.1.
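The root cause is that an ndim check alone does not catch an embedding whose last dimension does not match the model's hidden size. As an illustration only, and not vLLM's actual patch, the following minimal Python sketch shows the kind of validation version 0.11.1 implies; `validate_mm_embedding` and `expected_hidden_size` are hypothetical names introduced here.

```python
import torch

def validate_mm_embedding(embedding: torch.Tensor, expected_hidden_size: int) -> None:
    """Reject multimodal embeddings whose shape cannot be consumed by the model.

    Illustrative sketch: checking ndim alone is not enough, because a
    (num_tokens, hidden) tensor with the wrong hidden dimension still
    crashes downstream projections.
    """
    if embedding.ndim != 2:
        raise ValueError(f"expected a 2-D embedding, got ndim={embedding.ndim}")
    if embedding.shape[-1] != expected_hidden_size:
        raise ValueError(
            f"embedding hidden size {embedding.shape[-1]} does not match "
            f"the model's hidden size {expected_hidden_size}"
        )
```

Rejecting such inputs at request validation time turns what would be an engine crash (denial of service for all users of the server) into an error returned to the single offending request.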
Packages
| Package | Status | Fixed version | Release | Type |
|---|---|---|---|---|
| vllm | itp | | | package |
Related vulnerabilities
vLLM vulnerable to DoS with incorrect shape of multimodal embedding inputs
Vulnerability in the vLLM library for working with large language models (LLMs), caused by unchecked array indexing, allowing an attacker to cause a denial of service