CVE-2025-24357
vLLM is a library for LLM inference and serving. vllm/model_executor/weight_utils.py implements hf_model_weights_iterator to load model checkpoints downloaded from Hugging Face. It uses the torch.load function, whose weights_only parameter defaults to False; when torch.load deserializes malicious pickle data, it executes arbitrary code during unpickling. This vulnerability is fixed in v0.7.0.
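For context, a minimal sketch of the unsafe pattern and the safer call, assuming a PyTorch version that supports the weights_only keyword. The load_checkpoint_safely helper below is hypothetical and is not the actual vLLM patch; it only illustrates the class of fix described above.

```python
# Illustrative sketch (not the actual vLLM code): with weights_only left at
# its pre-2.6 default of False, torch.load will unpickle arbitrary objects,
# so a crafted checkpoint can run code during deserialization.
import torch

def load_checkpoint_safely(path: str):
    # weights_only=True restricts unpickling to tensors and a small allowlist
    # of types, so a malicious pickle payload is rejected instead of executed.
    return torch.load(path, map_location="cpu", weights_only=True)

# Risky pattern corresponding to the vulnerability described above:
# state_dict = torch.load(path, map_location="cpu")  # weights_only defaults to False
```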
GHSA-rh4j-5rhw-hr54
vllm: Malicious model to RCE by torch.load in hf_model_weights_iterator
| Vulnerability | CVSS | EPSS | Published |
|---|---|---|---|
| CVE-2025-24357: vLLM malicious model to RCE via torch.load in hf_model_weights_iterator (fixed in v0.7.0) | CVSS3: 7.5 | 0% (Low) | 11 months ago |
| GHSA-rh4j-5rhw-hr54: vllm: Malicious model to RCE by torch.load in hf_model_weights_iterator | CVSS3: 7.5 | 0% (Low) | 11 months ago |