Description
vLLM is a library for LLM inference and serving. vllm/model_executor/weight_utils.py implements hf_model_weights_iterator to load model checkpoints downloaded from Hugging Face. It uses torch.load, whose weights_only parameter defaults to False; when torch.load deserializes malicious pickle data, arbitrary code is executed during unpickling. This can be exploited to run arbitrary code and OS commands on any victim machine that fetches the malicious pre-trained repo remotely. The vulnerability is fixed in v0.7.0.
Statement
By default, torch.load uses weights_only=False, which deserializes with Python's pickle module. This is inherently unsafe: pickle opcodes can direct the unpickler to import and call arbitrary functions, so loading an untrusted pickle file is equivalent to running untrusted code.
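As an illustration of why this is dangerous, here is a minimal, self-contained sketch (not taken from any real exploit, with a harmless payload): a pickled object can name any importable callable, and the unpickler will invoke it during deserialization.

```python
import pickle

class Payload:
    # __reduce__ tells pickle to serialize "call eval('6*7')".
    # The call happens inside pickle.loads, before any type checks;
    # eval here stands in for os.system or any other importable callable.
    def __reduce__(self):
        return (eval, ("6*7",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)   # attacker-chosen code runs here
print(result)                 # 42 -- proof the expression executed
```

A real attacker would embed such a payload in a checkpoint file; merely loading the file triggers execution.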
Mitigation
Calling torch.load with weights_only=True restricts the unpickler's stack machine so that modules and functions are imported and called only if they are allowlisted in safe_globals, mitigating the vulnerability when loading a checkpoint.
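The same allowlisting idea can be sketched with the standard library alone (the names below are hypothetical and only illustrate the mechanism; torch's weights_only mode uses its own restricted unpickler): override Unpickler.find_class so that only explicitly approved globals can be resolved.

```python
import io
import pickle

# Hypothetical allowlist, in the spirit of torch's safe_globals:
# only these (module, name) pairs may be imported during unpickling.
SAFE_GLOBALS = {("collections", "OrderedDict")}

class SafeUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Every GLOBAL opcode in the pickle stream goes through here,
        # so anything not on the allowlist is rejected before import.
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"blocked global during unpickling: {module}.{name}")

def safe_loads(data: bytes):
    return SafeUnpickler(io.BytesIO(data)).load()
```

A blob that tries to pull in eval or os.system now fails with UnpicklingError instead of executing, while data built only from allowlisted types still loads.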
Affected Packages
| Platform | Package | State | Errata | Release |
|---|---|---|---|---|
| Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-aws-nvidia-rhel9 | Will not fix | | |
| Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-azure-amd-rhel9 | Not affected | | |
| Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-azure-nvidia-rhel9 | Will not fix | | |
| Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-gcp-nvidia-rhel9 | Will not fix | | |
| Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-ibm-nvidia-rhel9 | Will not fix | | |
| Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-intel-rhel9 | Will not fix | | |
| Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-nvidia-rhel9 | Affected | | |
| Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/instructlab-amd-rhel9 | Not affected | | |
| Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/instructlab-intel-rhel9 | Will not fix | | |
| Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/instructlab-nvidia-rhel9 | Will not fix | | |
Additional information
CVSS3 score: 7.5 (High)
Related vulnerabilities
vllm: Malicious model to RCE by torch.load in hf_model_weights_iterator
CVSS3 score: 7.5 (High)