Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Dual-Brain

Automatically generates and saves alternative perspectives from a secondary LLM for every user message to enhance reasoning and response quality.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
2 · 1.6k · 1 current installs · 1 all-time installs
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name and purpose (Dual-Brain: secondary-LLM perspectives) align with the code and SKILL.md: the daemon scans OpenClaw session files, calls secondary LLM providers, and writes 2–3 sentence perspectives to ~/.dual-brain/perspectives. However, the registry metadata claims no required credentials or environment variables, while the implementation expects provider API keys (stored in a config file); that mismatch should be noted. Reading OpenClaw session files and optionally posting to external LLM APIs is consistent with the stated purpose, but it requires higher privileges than a purely local helper.
Instruction Scope
SKILL.md instructs primary agents to 'Before responding to any user message, check for a dual-brain perspective' by reading a file under ~/.dual-brain/perspectives/{agent}-latest.md and to 'consider it alongside your own reasoning.' That is functionally coherent but is also a persistent instruction that influences agent behavior (a prompt-injection-like pattern). The daemon scans ~/.openclaw/agents/*/sessions/*.jsonl (i.e., other agents' session files) to detect user messages; scanning other agents' session files and then sending user text to external LLMs is a privacy-sensitive action. The instructions are specific (file paths and CLI commands) — not vague — but they give the skill direct, ongoing influence over agent outputs and require the agent to access the user's home filesystem.
Install Mechanism
There is no automated install spec in the registry (instruction-only), but the package includes an npm-global CLI and installer scripts (daemon/install.sh, systemd/launchd templates). Manual install commands in SKILL.md (npm install -g, dual-brain install-daemon) would install a daemon and optionally a systemd/LaunchAgent service. The package.json has no postinstall scripts, lowering automatic-install risk, but installing the service requires writing system files (sudo) and will create a persistent process. No remote download URLs are used; code is included in the skill bundle.
Credentials
The registry lists no required env vars, but the code expects API keys for the moonshot/openai/groq providers and stores them in ~/.dual-brain/config.json in plaintext. BUILD-SUMMARY and config.js note that API keys are saved unencrypted and that file permissions are lax (documented as 0644). The daemon reads OpenClaw session JSONL files (potentially sensitive user messages) and forwards content to external LLM providers — that data flow is needed for the feature, but it is sensitive and must be consented to. The skill does not request unrelated credentials (no AWS keys, etc.), so the set of credentials is proportionate to its function, but storage and permissions are insecure.
Persistence & Privilege
always:false (good). The skill supports installing a long-running user or system service (launchd/systemd templates) and writes PID/log files under ~/.dual-brain. Persistent background operation is required for the described functionality but increases blast radius (continuous scanning of session files and ability to call external services). The combination of persistent daemon + reading session files + plaintext-stored API keys increases risk — particularly if the daemon is installed as a system service with broad file access or run as root (installer suggests sudo for systemd installation).
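
The systemd/launchd templates themselves are not shown in this listing. As a rough illustration of what persistence entails, a hypothetical user-level systemd unit (unit name and paths are assumptions, not taken from the package) could look like:

```ini
# Hypothetical sketch only — the real template ships in the package's
# systemd/ directory and may differ. A user unit avoids sudo entirely.
# Save as ~/.config/systemd/user/dual-brain.service
[Unit]
Description=Dual-Brain perspective daemon

[Service]
ExecStart=/usr/bin/env dual-brain start
Restart=on-failure

[Install]
WantedBy=default.target
```

Enabling it with `systemctl --user enable --now dual-brain` keeps the daemon scoped to your user account rather than installed system-wide, which mitigates the elevated-privilege concern noted above.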
Scan Findings in Context
[system-prompt-override] expected: SKILL.md explicitly instructs agents to read and incorporate an external perspective file before responding, which is the core feature (so the presence of 'system-prompt-override'-style patterns is expected). However this pattern is a prompt-injection vector: it persistently changes agent behavior and could be abused if perspectives contain malicious or exfiltrative instructions.
What to consider before installing
This skill implements a background daemon that watches OpenClaw session files, sends user messages to a secondary LLM provider, and writes a short 'perspective' file that agents are told to read before replying. That design is coherent, but before installing you should consider:

  1. The daemon will read session JSONL files under ~/.openclaw and forward user text to external LLMs — ensure you are comfortable with that data leaving your machine/provider.
  2. API keys for some providers are stored in ~/.dual-brain/config.json in plaintext (documented); tighten file permissions (0600) or prefer local-only providers (Ollama) if you need privacy.
  3. SKILL.md asks agents to unconditionally read and use the perspective file before replying — this is a powerful directive (prompt-injection risk); verify and limit which agents/processes are allowed to use it.
  4. Installing as a system service requires elevated privileges and makes the daemon persistent; prefer running it as your user in the foreground for testing first.

If you want to proceed: use a local-only provider (Ollama) or run the daemon in a restricted environment, inspect the config and perspective files regularly, and do not install the service system-wide until you are comfortable with the file access and network behavior. If you need more assurance, ask the author for encrypted API-key support (OS keychain) and a strict permission model.

Like a lobster shell, security has layers — review code before you run it.

Current version: v0.1.1
Tags: cognitive-diversity · dual-brain · latest · multi-llm · second-opinion

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Dual-Brain Skill

What it does: Provides cognitive diversity by automatically generating perspectives from a secondary LLM (different from your primary model) for every user message.

For Agents

Before responding to any user message, check for a dual-brain perspective:

cat ~/.dual-brain/perspectives/{your-agent-id}-latest.md

If the file exists and contains a recent perspective (check timestamp in comment), consider it alongside your own reasoning. The perspective may highlight:

  • Angles you might miss
  • Alternative approaches
  • Important things to verify
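
The perspective file format is not shown in this listing. Given the described behavior (a timestamp in a comment plus a 2–3 sentence note), a generated file might look roughly like this — the comment syntax and fields are assumptions:

```markdown
<!-- generated: 2026-01-15T10:32:07Z, provider: ollama -->
Consider whether the failure could come from a stale cache rather than the
code path the user suspects. It may also be worth confirming the runtime
version, since the error signature can differ between releases.
```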

Integration:

  • Perspectives are generated automatically by a background daemon
  • No manual action required on your part
  • Simply read the file before forming your response
  • Synthesize both viewpoints (yours + the secondary LLM's)

Example workflow:

  1. User sends message
  2. Daemon intercepts, sends to secondary LLM (e.g., Kimi, GPT-4, Llama)
  3. Secondary LLM generates 2-3 sentence perspective
  4. Written to ~/.dual-brain/perspectives/{agent-id}-latest.md
  5. You (primary agent) read it before responding
  6. You synthesize both perspectives into your answer

For Humans

Setup:

npm install -g openclaw-dual-brain
dual-brain setup    # Interactive configuration
dual-brain start    # Start daemon

Providers:

  • ollama - Local models (zero cost, requires Ollama)
  • moonshot - Kimi/Moonshot API (Chinese LLM, fast)
  • openai - GPT-4o, GPT-4-turbo, etc.
  • groq - Fast inference with Llama models

Commands:

  • dual-brain setup - Configure provider, model, API key
  • dual-brain start - Run daemon (foreground)
  • dual-brain stop - Stop daemon
  • dual-brain status - Check running status
  • dual-brain logs - View recent activity
  • dual-brain install-daemon - Install as system service

Config location: ~/.dual-brain/config.json
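
The config schema is not documented in this listing. A plausible shape, inferred from the setup prompts (provider, model, API key) — every field name below is an assumption — might be:

```json
{
  "provider": "ollama",
  "model": "llama3",
  "apiKey": null
}
```

As the security review notes, this file is written in plaintext with 0644 permissions, so anything placed in an API-key field is readable by other local processes unless you tighten it to 0600.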

Perspectives location: ~/.dual-brain/perspectives/

Architecture

User Message → OpenClaw Session (JSONL)
                    ↓
            Dual-Brain Daemon (polling)
                    ↓
            Secondary LLM Provider
            (ollama/moonshot/openai/groq)
                    ↓
        Perspective Generated (2-3 sentences)
                    ↓
        ~/.dual-brain/perspectives/{agent}-latest.md
                    ↓
        Primary Agent reads & synthesizes
                    ↓
            Response to User

Benefits

  • Cognitive diversity - Two AI models = broader perspective
  • Bias mitigation - Different training data/approaches
  • Quality assurance - Second opinion catches issues
  • Zero agent overhead - Runs in background, <1s latency
  • Provider flexibility - Choose cost vs. quality tradeoff

Optional: Engram Integration

If Engram (semantic memory) is running on localhost:3400, perspectives are also stored as memories for long-term recall.


Source: https://github.com/yourusername/openclaw-dual-brain

Files

16 total
