Description
PraisonAI Vulnerable to Implicit Execution of Arbitrary Code via Automatic tools.py Loading
PraisonAI automatically loads a file named tools.py from the current working directory to discover and register custom agent tools. This loading process uses importlib.util.spec_from_file_location and immediately executes module-level code via spec.loader.exec_module() without explicit user consent, validation, or sandboxing.
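The dynamic-import pattern described above can be sketched as follows. This is a minimal illustration of the vulnerable behavior, not the actual PraisonAI source; the function name and directory handling are assumptions.

```python
import importlib.util
from pathlib import Path

def load_local_tools(directory: str = "."):
    """Sketch of the vulnerable pattern: if a tools.py file exists in
    the given directory, import it. Importing a module this way runs
    all of its top-level code immediately, with no consent prompt,
    validation, or sandboxing."""
    tools_path = Path(directory) / "tools.py"
    if not tools_path.exists():
        return None
    spec = importlib.util.spec_from_file_location("tools", tools_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # module-level code executes here
    return module
```

Because the lookup defaults to the current working directory, whoever controls the contents of that directory controls what gets executed.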
The tools.py file is loaded implicitly, even when it is not referenced in configuration files or explicitly requested by the user. As a result, merely placing a file named tools.py in the working directory is sufficient to trigger code execution.
This behavior violates the expected security boundary between user-controlled project files (e.g., YAML configurations) and executable code, as untrusted content in the working directory is treated as trusted and executed automatically.
If an attacker can place a malicious tools.py file into a directory where a user or automated system (e.g., CI/CD pipeline) runs praisonai, arbitrary code execution occurs immediately upon startup, before any agent logic begins.
Vulnerable Code Location
src/praisonai/praisonai/tool_resolver.py → ToolResolver._load_local_tools
Reproducing the Attack
1. Create a malicious `tools.py` in the target directory.
2. Create any valid `agents.yaml`.
3. Run `praisonai` from that directory.
4. Observe:
   - `[PWNED]` is printed
   - `pwned.txt` is created
   - No warning or confirmation is shown
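The original proof-of-concept file was not preserved in this copy of the advisory, but based on the observed effects (`[PWNED]` printed, `pwned.txt` created), a benign illustrative `tools.py` might look like this. The payload shown is harmless; a real attacker could run arbitrary commands here.

```python
# tools.py -- placed in the directory where `praisonai` is run.
# All of this is module-level code, so it executes the moment
# PraisonAI imports the file, before any agent logic starts.

print("[PWNED] tools.py executed on import")

with open("pwned.txt", "w") as f:
    f.write("arbitrary code ran before any agent logic\n")
```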
Real-world Impact
This issue introduces a software supply chain risk. If an attacker introduces a malicious tools.py into a repository (e.g., via pull request, shared project, or downloaded template), any user or automated system running PraisonAI from that directory will execute the attacker’s code.
Affected scenarios include:
- CI/CD pipelines processing untrusted repositories
- Shared development environments
- AI workflow automation systems
- Public project templates or examples
Successful exploitation can lead to:
- Execution of arbitrary commands
- Exfiltration of environment variables and credentials
- Persistence mechanisms on developer or CI systems
Remediation Steps
1. Require explicit opt-in for loading `tools.py`
   - Introduce a CLI flag (e.g., `--load-tools`) or config option
   - Disable automatic loading by default
2. Add pre-execution user confirmation
   - Warn users before executing local `tools.py`
   - Allow users to decline execution
3. Restrict trusted paths
   - Only load tools from explicitly defined project directories
   - Avoid defaulting to the current working directory
4. Avoid executing module-level code during discovery
   - Use static analysis (e.g., AST parsing) to identify tool functions
   - Require explicit registration functions instead of import side effects
5. Optional hardening
   - Support sandboxed execution (subprocess / restricted environment)
   - Provide hash verification or signing for trusted tool files
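The static-analysis approach from the remediation steps can be sketched with Python's standard `ast` module: the file is parsed, never imported, so no module-level code runs during discovery. The function name below is illustrative, not part of PraisonAI's API.

```python
import ast
from pathlib import Path

def discover_tool_names(path: str) -> list[str]:
    """Parse a tools.py file and list its top-level function names
    without executing any of its code. ast.parse only builds a
    syntax tree; it never imports or runs the module."""
    tree = ast.parse(Path(path).read_text())
    return [
        node.name
        for node in tree.body
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
    ]
```

Actual execution of a discovered tool could then be deferred until the user explicitly opts in, addressing the implicit-execution boundary violation directly.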
Packages
praisonai
- Affected versions: < 4.5.128
- Patched version: 4.5.128
Related vulnerabilities
PraisonAI is a multi-agent teams system. Prior to 4.5.128, PraisonAI automatically loaded and executed a `tools.py` file from the current working directory without user consent, validation, or sandboxing, as detailed in the description above.