Cerebrun

Suspicious

Audited by ClawScan on May 10, 2026.

Overview

Cerebrun appears to be a coherent memory-service client, but it gives the agent broad access to persistent personal context, identity/API-key data, and conversation history without clearly bounded controls.

Install only if you trust Cerebrun with personal memory and conversation history. Use a least-privilege API key if available, avoid storing raw secrets in readable context layers, and instruct the agent to ask before reading Layer 2/3 data, writing memories, or sending content through the LLM gateway.

Findings (3)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Sensitive context layer exposure

What this means

If the agent invokes sensitive context retrieval too broadly, personal identity details or stored API keys could be pulled into the agent conversation or reused unexpectedly.

Why it was flagged

The skill requires account-level API authentication and explicitly exposes a context layer that may contain identity data and API keys. The artifacts do not require field minimization or a separate approval gate for this layer.

Skill content
All requests require:
- `api_key`: Cerebrun API key (Bearer token)
...
**Layer 2** - Personal identity info, API and other keys
Recommendation

Use only with a trusted Cerebrun account, avoid storing raw API keys in broadly readable context layers, and require explicit user approval before accessing Layer 2 or any vault-related data.
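One way to enforce the approval step is to wrap sensitive retrievals behind a gate. The sketch below is an assumption, not part of the Cerebrun skill: the `fetch` callable stands in for whatever retrieval function the client actually exposes, and the layer numbering follows the excerpt above (Layer 2 holds identity data and keys).

```python
# Minimal sketch of an approval gate for sensitive context layers.
# `fetch` and the layer numbering are assumptions based on the audit
# excerpts; adapt to the real Cerebrun client calls.

SENSITIVE_LAYERS = {2, 3}  # Layer 2: personal identity info, API and other keys


def guarded_get_context(api_key, layer, fetch, approve):
    """Fetch a context layer, requiring explicit approval for sensitive ones.

    fetch:   callable(api_key, layer) -> dict  (the real client call)
    approve: callable(prompt) -> bool          (e.g. an interactive user prompt)
    """
    if layer in SENSITIVE_LAYERS:
        if not approve(f"Allow read of sensitive context layer {layer}?"):
            raise PermissionError(f"User denied access to layer {layer}")
    return fetch(api_key, layer)
```

Non-sensitive layers pass through untouched, so the gate adds friction only where the finding says it matters.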

Finding 2: Unreviewed persistent memory writes

What this means

Incorrect, sensitive, or prompt-injection-like content could be saved and later influence future agent sessions or searches.

Why it was flagged

The script can write arbitrary user-supplied data into persistent context and knowledge storage, but the artifacts do not describe safeguards such as confirmation, provenance labels, review, deletion, or rollback.

Skill content
def update_context(api_key: str, layer: int, data: dict):
    """Update user context for specified layer"""
    return make_request(api_key, "update_context", {"layer": layer, "data": data})
...
def push_knowledge(...):
    """Store knowledge in user's Knowledge Base"""
Recommendation

Require confirmation before persistent writes, record source/provenance for saved memories, and provide clear user controls for reviewing and deleting stored context.
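A confirmation-plus-provenance wrapper around the excerpted `update_context` call could look like the following. This is a hedged sketch: the `confirm` and `send` callables are hypothetical stand-ins, and the provenance field names are illustrative, not a Cerebrun schema.

```python
import time


def audited_update_context(api_key, layer, data, confirm, send):
    """Gate a persistent context write behind confirmation and attach provenance.

    confirm: callable(summary) -> bool  (e.g. an interactive user prompt)
    send:    the real write, e.g. update_context(api_key, layer, record)
    """
    if not confirm(f"Save {len(data)} field(s) to context layer {layer}?"):
        return None  # user declined; nothing is persisted
    record = {
        "data": data,
        "provenance": {
            "source": "agent-session",  # label where this memory came from
            "saved_at": time.time(),
        },
    }
    return send(api_key, layer, record)
```

Recording the source alongside the data is what later makes review, deletion, and rollback of agent-written memories tractable.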

Finding 3: Content routed through the LLM Gateway

What this means

Sensitive content included in gateway messages may be processed outside the local agent environment.

Why it was flagged

The LLM Gateway behavior is disclosed and purpose-aligned, but it routes user messages, and possibly contextual data, through Cerebrun to external LLM providers.

Skill content
### chat_with_llm
Send message to an LLM through the Gateway.

**Parameters:**
- `message` (required): Message to send
- `provider` (required): LLM provider
- `model` (required): Model name
Recommendation

Confirm the destination provider before use and avoid sending secrets or highly sensitive personal context through the gateway unless necessary.
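Both checks can be mechanized with a provider allow-list and a coarse secret scan before the gateway call. The allow-list contents, the secret patterns, and the `send` callable below are all assumptions for illustration; real deployments should tune the patterns to their own key formats.

```python
import re

# Hypothetical allow-list and secret heuristics; tune for your deployment.
ALLOWED_PROVIDERS = {"openai", "anthropic"}
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{16,}"),            # API-key-like tokens
    re.compile(r"(?i)bearer\s+[A-Za-z0-9._-]{16,}"),  # bearer tokens
]


def safe_chat_with_llm(message, provider, model, send):
    """Refuse gateway calls to unapproved providers or with secret-like content.

    send: the real gateway call, e.g. chat_with_llm(message, provider, model)
    """
    if provider not in ALLOWED_PROVIDERS:
        raise ValueError(f"Provider {provider!r} is not on the allow-list")
    for pattern in SECRET_PATTERNS:
        if pattern.search(message):
            raise ValueError("Message appears to contain a secret; refusing to send")
    return send(message, provider, model)
```

Pattern matching will miss novel secret formats, so this is a backstop for the recommendation above, not a substitute for keeping secrets out of gateway messages in the first place.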