Security: eval() on marketplace data and LLM output enables code injection #1489

@lighthousekeeper1212

Description

Security Concerns

Two separate code injection vulnerabilities were found during a security audit:

1. eval() on Marketplace-Sourced Data (Supply Chain Risk)

File: superagi/controllers/knowledges.py

The vector_ids field from marketplace-sourced knowledge is evaluated with eval() during knowledge uninstall operations. A malicious marketplace knowledge package could contain crafted vector_ids that execute arbitrary code when the knowledge is uninstalled.

Additionally, agent configurations (goal, instruction, constraints) are processed with eval():

  • superagi/helper/resource_helper.py

Fix: Replace eval() with json.loads() or ast.literal_eval().
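A minimal sketch of the safer parse, assuming `vector_ids` is stored as a Python-literal list string (the exact storage format and the helper name are assumptions, not the project's code):

```python
import ast

def parse_vector_ids(raw: str) -> list:
    """Parse a stored vector_ids string without executing code.

    ast.literal_eval only accepts Python literals (lists, strings,
    numbers, ...); any expression such as a function call raises
    ValueError instead of being evaluated.
    """
    parsed = ast.literal_eval(raw)
    if not isinstance(parsed, list):
        raise ValueError("vector_ids must be a list")
    return parsed

# A benign stored value parses as expected.
parse_vector_ids("['id-1', 'id-2']")  # -> ['id-1', 'id-2']

# A crafted payload that eval() would execute is rejected instead.
try:
    parse_vector_ids("[__import__('os').system('id')]")
except ValueError:
    pass  # literal_eval refuses non-literal expressions
```

If the field is actually stored as JSON, `json.loads()` with the same `isinstance` check gives the same fail-closed behavior.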

2. eval() on LLM Output in Task Processing

Files:

  • superagi/jobs/agent_executor.py - output_handler.py:149,180 - eval(assistant_reply) on LLM response
  • superagi/lib/queue_step_handler.py:79 - eval(assistant_reply) for task queue

The JsonCleaner pass applied before eval() provides no real sanitization: it only extracts the text between [ and ] brackets and does not validate what is inside them. An indirect prompt injection in processed data could therefore cause the LLM to emit malicious Python that gets executed verbatim.

Example payload: [__import__('os').system('id')]

Fix: Replace eval() with json.loads():

```python
import json

tasks = json.loads(assistant_reply)
```
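json.loads() fails closed on the payload above rather than executing it. A quick check, using hypothetical `assistant_reply` values rather than real model output:

```python
import json

# A well-formed LLM reply parses into plain data.
tasks = json.loads('["task one", "task two"]')
assert tasks == ["task one", "task two"]

# The injection payload is not valid JSON, so parsing raises
# json.JSONDecodeError instead of running the embedded expression.
try:
    json.loads("[__import__('os').system('id')]")
except json.JSONDecodeError:
    pass
```

One caveat: if the model sometimes emits Python-style single-quoted lists rather than strict JSON, `json.loads()` will reject them too; `ast.literal_eval()` accepts that shape while still refusing to evaluate code.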

Note

I attempted to use GitHub's private vulnerability reporting but it appears to be disabled for this repository. Consider enabling it at Settings → Code security → Private vulnerability reporting.


Discovered during security audit by Lighthouse Research Project (https://lighthouse1212.com)
