Security Concerns
Two separate code injection vulnerabilities found during security audit:
1. eval() on Marketplace-Sourced Data (Supply Chain Risk)
File: superagi/controllers/knowledges.py
The vector_ids field from marketplace-sourced knowledge is evaluated with eval() during knowledge uninstall operations. A malicious marketplace knowledge package could contain crafted vector_ids that execute arbitrary code when the knowledge is uninstalled.
Additionally, agent configurations (goal, instruction, constraints) are processed with eval():
superagi/helper/resource_helper.py
Fix: Replace eval() with json.loads() or ast.literal_eval().
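A minimal sketch of the suggested fix (the helper name and list check are illustrative, not SuperAGI's actual code): ast.literal_eval() accepts only Python literals, so a crafted vector_ids string raises instead of executing.

```python
import ast

def parse_vector_ids(raw: str) -> list:
    """Hypothetical helper: parse a marketplace-supplied vector_ids string.

    literal_eval only evaluates literals (lists, strings, numbers, ...);
    function calls like __import__ raise ValueError, so nothing runs.
    """
    try:
        value = ast.literal_eval(raw)
    except (ValueError, SyntaxError) as exc:
        raise ValueError(f"vector_ids is not a literal: {raw!r}") from exc
    if not isinstance(value, list):
        raise ValueError("vector_ids must be a list")
    return value

print(parse_vector_ids("[1, 2, 3]"))  # → [1, 2, 3]
# parse_vector_ids("__import__('os').system('id')")  # raises ValueError; no code executes
```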
2. eval() on LLM Output in Task Processing
Files:
- superagi/jobs/agent_executor.py
- output_handler.py:149,180 - eval(assistant_reply) on LLM response
- superagi/lib/queue_step_handler.py:79 - eval(assistant_reply) for the task queue
The JsonCleaner applied before eval() provides no real sanitization: it merely extracts the text between the first [ and the last ] bracket. An indirect prompt injection in processed data could therefore cause the LLM to emit malicious Python that gets executed.
Example payload: [__import__('os').system('id')]
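To illustrate the gap (the extraction function below is a simplified stand-in for the described JsonCleaner behavior, not its real code): the payload survives bracket extraction untouched, and while eval() would run it, json.loads() rejects it as invalid JSON.

```python
import json

def extract_array(text: str) -> str:
    """Stand-in for the described JsonCleaner: slice out whatever
    sits between the first '[' and the last ']'. No sanitization."""
    return text[text.find("["): text.rfind("]") + 1]

assistant_reply = "Here are your tasks: [__import__('os').system('id')]"
cleaned = extract_array(assistant_reply)
print(cleaned)  # payload passes through "cleaning" unchanged

# eval(cleaned) would execute os.system('id'); json.loads refuses instead
try:
    json.loads(cleaned)
except json.JSONDecodeError:
    print("rejected: not valid JSON")

# a legitimate JSON task list still parses fine
print(json.loads('["write tests", "refactor"]'))
```

Since LLM output is not guaranteed to be valid JSON, the json.loads() call should be wrapped in try/except json.JSONDecodeError with an error path, rather than assumed to succeed.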
Fix: Replace eval() with json.loads():
```python
import json

tasks = json.loads(assistant_reply)
```

Note:
I attempted to use GitHub's private vulnerability reporting but it appears to be disabled for this repository. Consider enabling it at Settings → Code security → Private vulnerability reporting.
Discovered during security audit by Lighthouse Research Project (https://lighthouse1212.com)