Description
PraisonAIAgents: Environment Variable Secret Exfiltration via os.path.expandvars() Bypassing shell=False in Shell Tool
Summary
The `execute_command` function in `shell_tools.py` calls `os.path.expandvars()` on every command argument at line 64, manually re-implementing shell-level environment variable expansion despite using `shell=False` (line 88) for security. This allows exfiltration of secrets stored in environment variables (database credentials, API keys, cloud access keys). The approval system displays the unexpanded `$VAR` references to human reviewers, creating a deceptive approval flow in which the displayed command differs from what actually executes.
Details
The vulnerable code is in `src/praisonai-agents/praisonaiagents/tools/shell_tools.py`:
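The snippet itself is not reproduced here; a minimal sketch of the pattern described in this advisory follows (the signature, helper structure, and comments are assumptions, not the library's verbatim source):

```python
import os
import shlex
import subprocess

def execute_command(command, cwd=None, timeout=60):
    """Sketch of the vulnerable pattern, not the exact shell_tools.py code."""
    args = shlex.split(command)
    # Line-64 equivalent: every argument is expanded against os.environ,
    # re-introducing shell-style $VAR substitution in Python
    args = [os.path.expandvars(os.path.expanduser(arg)) for arg in args]
    # Line-88 equivalent: shell=False stops the *subprocess* from expanding
    # variables, but the arguments above were already expanded
    return subprocess.run(
        args, cwd=cwd, capture_output=True, text=True,
        timeout=timeout, shell=False,
    )
```

Even though no shell is involved, `execute_command("echo $HOME")` prints the expanded home directory, because the expansion happened in Python before `subprocess.run` was called.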
The security problem is a disconnect between the approval display and actual execution:
- The LLM generates a tool call: `execute_command(command="cat $DATABASE_URL")`
- `_check_tool_approval_sync` in `tool_execution.py:558` passes `{"command": "cat $DATABASE_URL"}` to the approval backend
- `ConsoleBackend` (`backends.py:81-85`) displays `command: cat $DATABASE_URL`, i.e. the literal dollar-sign form
- The user approves, reasoning that `shell=False` prevents variable expansion
- Inside `execute_command`, `os.path.expandvars("$DATABASE_URL")` expands to `postgres://user:secretpass@prod-host:5432/mydb`
- The expanded secret appears in stdout, which is returned to the LLM
Line 69 has the same issue for the cwd parameter:
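Sketched in the same hedged style as above (the exact line-69 wording is an assumption):

```python
import os

def expand_cwd(cwd):
    # Line-69 equivalent (sketch): the working directory is expanded the
    # same way, so cwd="$SOME_VAR" resolves against the process environment
    if cwd:
        cwd = os.path.expandvars(os.path.expanduser(cwd))
    return cwd
```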
With `PRAISONAI_AUTO_APPROVE=true` (`registry.py:170-171`), `AutoApproveBackend`, YAML-approved tools, or `AgentApproval`, no human reviews the command at all. The env var auto-approve check is:
PoC
Verification without auto-approve (deceptive approval display):
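A standalone reproduction of the expansion path (this simulates the tool's argument handling directly rather than driving a full agent; `echo` stands in for any exfiltration command):

```python
import os
import shlex
import subprocess

# Simulate a secret living in the agent process environment
os.environ["DATABASE_URL"] = "postgres://user:secretpass@prod-host:5432/mydb"

# What the LLM requests, and what the approval backend displays
command = "echo $DATABASE_URL"
print(f"approval display -> command: {command}")  # reviewer sees the $VAR form

# What actually runs after os.path.expandvars() on each argument
args = [os.path.expandvars(arg) for arg in shlex.split(command)]
result = subprocess.run(args, capture_output=True, text=True, shell=False)
print("executed output  ->", result.stdout.strip())
# The secret value reaches stdout (and hence the LLM) despite shell=False
```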
Impact
- Secret exfiltration: all environment variables accessible to the process are exposed, including database credentials (`DATABASE_URL`), cloud keys (`AWS_SECRET_ACCESS_KEY`, `AWS_ACCESS_KEY_ID`), API tokens (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`), and any other secrets passed via the environment.
- Deceptive approval: the approval UI shows `$VAR` references while the system executes with expanded secrets, undermining the human-in-the-loop security control. Users familiar with `shell=False` semantics will expect no variable expansion.
- Automated environments at highest risk: CI/CD pipelines and production deployments using `PRAISONAI_AUTO_APPROVE=true`, `AutoApproveBackend`, or YAML tool pre-approval have no human review gate. These environments typically hold the most sensitive secrets in environment variables.
- Prompt injection amplifier: in agentic workflows processing untrusted content (documents, emails, web pages), a prompt injection can direct the LLM to call `execute_command` with `$VAR` references to exfiltrate specific secrets.
Recommended Fix
Remove `os.path.expandvars()` from command argument processing. Keep only `os.path.expanduser()` for tilde expansion, which is safe because it only rewrites a leading `~` to the home directory path:
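A sketch of the fixed argument processing (function name is illustrative; the real fix would edit line 64 in place):

```python
import os
import shlex

def process_args(command):
    # Tilde expansion only: os.path.expanduser rewrites a leading ~,
    # while $VAR references stay literal, matching shell=False semantics
    return [os.path.expanduser(arg) for arg in shlex.split(command)]
```

With this change, `process_args("cat $DATABASE_URL")` returns `["cat", "$DATABASE_URL"]` unchanged, so the command the reviewer approves is the command that runs.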
Similarly for the `cwd` parameter on line 69:
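The corresponding sketch for the working-directory path (again illustrative, not a verbatim patch):

```python
import os

def resolve_cwd_fixed(cwd):
    # Drop expandvars, keep expanduser: "$SOME_VAR" stays literal
    return os.path.expanduser(cwd) if cwd else cwd
```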
If environment variable expansion is needed for specific use cases, it should:
- Be opt-in via an explicit parameter (e.g., `expand_env=False` by default)
- Show the expanded command in the approval display so humans can see actual values
- Use an allowlist of safe variable names (e.g., `HOME`, `USER`, `PATH`) rather than expanding all variables
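If the opt-in path is kept, the allowlist idea from the last point could be sketched like this (the variable set and regex are illustrative, not a proposed patch):

```python
import os
import re

SAFE_VARS = {"HOME", "USER", "PATH"}  # example allowlist

def expand_safe(arg):
    """Substitute only allowlisted variables; leave everything else literal."""
    def repl(match):
        name = match.group(1) or match.group(2)
        if name in SAFE_VARS and name in os.environ:
            return os.environ[name]
        return match.group(0)  # unknown or disallowed: keep $NAME as-is
    # Matches both $NAME and ${NAME} forms
    return re.sub(r"\$(\w+)|\$\{(\w+)\}", repl, arg)
```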
Packages
praisonaiagents
Affected versions: < 1.5.128
Patched version: 1.5.128
Related vulnerabilities
PraisonAIAgents is a multi-agent teams system. Prior to 1.5.128, the `execute_command` function in `shell_tools.py` calls `os.path.expandvars()` on every command argument at line 64, manually re-implementing shell-level environment variable expansion despite using `shell=False` (line 88) for security. This allows exfiltration of secrets stored in environment variables (database credentials, API keys, cloud access keys). The approval system displays the unexpanded `$VAR` references to human reviewers, creating a deceptive approval flow in which the displayed command differs from what actually executes. This vulnerability is fixed in 1.5.128.