GHSA-pmqf-x6x8-p7qw

Published: Nov 20, 2025
Source: github
GitHub: Reviewed
CVSS4: 8.3
CVSS3: 6.5

Description

vLLM vulnerable to DoS with incorrect shape of multimodal embedding inputs

Summary

Users can crash the vLLM engine serving multimodal models by passing multimodal embedding inputs with the correct ndim but an incorrect shape (e.g., the hidden dimension is wrong), regardless of whether the model is intended to support such inputs (as defined in the Supported Models page).

The issue has existed ever since we added support for image embedding inputs, i.e., #6613 (released in v0.5.5).
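
A minimal sketch of the trigger, using vLLM's offline LLM API. The model name, prompt template, and tensor shapes below are illustrative assumptions, not taken from the advisory.

```python
# Illustrative sketch only: model name, prompt template, and shapes are assumptions.
import torch
from vllm import LLM

# Any multimodal model served by vLLM; a LLaVA checkpoint is used as an example.
llm = LLM(model="llava-hf/llava-1.5-7b-hf")

# A well-formed image-embedding input would have shape
# (num_image_features, hidden_size), e.g. (576, 4096) for this model.
# The tensor below has the right ndim (2) but the wrong hidden dimension,
# so it slips past the ndim-only validation in the input processor.
bad_image_embeds = torch.randn(576, 1024)

# The request is accepted, and the engine crashes later when the embeddings
# are merged into inputs_embeds, taking the whole serving process down.
llm.generate({
    "prompt": "USER: <image>\nDescribe the image. ASSISTANT:",
    "multi_modal_data": {"image": bad_image_embeds},
})
```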

Details

Using image embeddings as an example:

  • For models that support image embedding inputs, the engine crashes when scattering the embeddings to inputs_embeds (mismatched shape)
  • For models that don't support image embedding inputs, the engine crashes when validating the inputs inside get_input_embeddings (validation fails).

This happens because we only validate the ndim of the tensor, but not its full shape, in the input processor (via MultiModalDataParser).
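
The gap can be seen in isolation with plain PyTorch: an ndim-only check accepts the tensor, and the failure only surfaces later when the embeddings are scattered into inputs_embeds. The shapes below are illustrative, not taken from vLLM.

```python
# Standalone sketch (not vLLM code) of why checking ndim alone is insufficient.
import torch

hidden_size = 4096                       # model's embedding width (example value)
image_embeds = torch.randn(576, 1024)    # correct ndim (2), wrong hidden dimension

# Parser-style validation only looks at the number of dimensions, so it passes:
assert image_embeds.ndim == 2

# Scattering the embeddings into inputs_embeds happens much later, inside the
# engine, where the mismatched hidden dimension raises an unhandled RuntimeError:
inputs_embeds = torch.zeros(1024, hidden_size)        # (num_tokens, hidden_size)
placeholder_mask = torch.zeros(1024, dtype=torch.bool)
placeholder_mask[:576] = True                          # positions of image tokens
inputs_embeds[placeholder_mask] = image_embeds         # RuntimeError: shape mismatch
```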

Impact

  • Denial of service by crashing the engine

Mitigation

  • Use an API key to limit access to trusted users.
  • Set --limit-mm-per-prompt to 0 for all non-text modalities to block multimodal inputs, including multimodal embedding inputs. However, the model would then only accept text, defeating the purpose of serving a multimodal model (see the example after this list).
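
For example, an OpenAI-compatible deployment could apply both mitigations roughly like this; the exact flag syntax differs between vLLM versions, so treat it as a sketch rather than an exact invocation.

```bash
# Sketch only: flag syntax varies across vLLM versions.
vllm serve llava-hf/llava-1.5-7b-hf \
  --api-key "$VLLM_API_KEY" \
  --limit-mm-per-prompt '{"image": 0, "video": 0}'
```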

Resolution

The issue is patched in vLLM 0.11.1.

Packages

Name: vllm (pip)
Affected versions: >= 0.5.5, < 0.11.1
Fixed version: 0.11.1

EPSS

Percentile: 20%
Score: 0.00064 (Low)

CVSS4: 8.3 (High)
CVSS3: 6.5 (Medium)

Weaknesses

CWE-129

Related vulnerabilities

CVSS3: 6.5
nvd
3 months ago

vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before 0.11.1, users can crash the vLLM engine serving multimodal models by passing multimodal embedding inputs with correct ndim but incorrect shape (e.g. hidden dimension is wrong), regardless of whether the model is intended to support such inputs (as defined in the Supported Models page). This issue has been patched in version 0.11.1.

CVSS3: 6.5
debian
3 months ago

vLLM is an inference and serving engine for large language models (LLM ...

CVSS3: 7.7
fstec
3 months ago

A vulnerability in the vLLM library for working with large language models (LLMs), related to unchecked array indexing, allowing an attacker to cause a denial of service
