suspicious.dangerous_exec
- Location
- src/cli.js:212
- Finding
- Shell command execution detected (child_process).
AdvisoryAudited by Static analysis on May 10, 2026.
Detected: suspicious.dangerous_exec
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
A secondary model's text may influence the agent's answer even though that text may be incomplete, stale, or prompt-injected by the original user message.
The skill intentionally changes the agent's response workflow by adding generated secondary-model text into the agent's context.
Before responding to any user message, check for a dual-brain perspective... Synthesize both viewpoints (yours + the secondary LLM's)
Treat the perspective file as advisory only; do not let it override the user's request, safety rules, or tool-use approvals.
Private or unrelated agent-session messages may be read and processed by the daemon without a per-conversation selection step.
The daemon scans broad local agent session directories, including an additional .moltbot path, and the default empty ownerIds setting allows processing broad/main sessions rather than only explicitly selected conversations.
path.join(os.homedir(), '.openclaw'), path.join(os.homedir(), '.moltbot') ... const isOwner = cfg.ownerIds.length === 0 || ... || file.name.includes('main');Restrict the watched paths and ownerIds before use, and avoid running the daemon on machines or profiles containing sensitive sessions.
If a remote provider such as Groq, OpenAI, or Moonshot is selected, user prompts and potentially sensitive session content can leave the local machine.
For a remote provider, the code sends user-message content to an external LLM API using the configured provider credential.
{ role: 'user', content: `Human to ${agentId}: "${userMessage.slice(0, 1000)}"` } ... 'Authorization': `Bearer ${this.apiKey}`Use the local Ollama provider for sensitive work, or require explicit user consent and clear data-retention expectations before forwarding messages to remote LLMs.
Agents may reuse stale or poisoned secondary-model content, and optional semantic memory storage can make that content persist across future tasks.
Generated secondary-LLM output is written into a reusable perspective file and can also be posted to Engram memory, but the perspective file write shown here does not include timestamp/source metadata.
fs.writeFileSync(file, `\n${perspective}\n`); ... content: `[Dual-Brain for ${agentId}] ${perspective}` ... path: '/api/memories'Add timestamp, provider, source-message ID, and freshness checks to stored perspectives; disable Engram unless long-term storage is clearly wanted.
Once installed as a service, the watcher may continue running and processing new sessions until explicitly stopped or uninstalled.
The service installer is designed to keep the daemon running automatically after login/boot.
<key>RunAtLoad</key><true/> ... <key>KeepAlive</key><true/> ... Restart=always
Install the daemon service only if you want continuous monitoring; verify how to stop, disable, and remove it before enabling auto-start.
Other local users or processes may be able to read provider API keys if filesystem permissions are not tightened.
Provider credentials are expected for remote LLM integrations, but the artifacts explicitly state they are stored unencrypted with overly broad file permissions.
API Keys in Plaintext - Stored in `~/.dual-brain/config.json` - Not encrypted ... File permissions: 0644 (should be 0600)
Store keys in an OS keychain or set the config file to mode 0600, and rotate any key exposed on a shared system.
The installer can modify local service configuration and run system service commands when the user invokes install-daemon.
The CLI runs local shell commands during the user-invoked service installation path.
const nodePath = execSync('which node').toString().trim(); ... execSync(`launchctl load ${plistPath}`);Inspect service files before loading them, keep install-daemon user-initiated, and quote or sanitize paths in future revisions.
Users have less provenance information for verifying that the installed package matches the reviewed artifacts.
The registry metadata does not provide a verified source/homepage or install spec, while the documentation instructs users to install a global npm package.
Source: unknown; Homepage: none ... No install spec — this is an instruction-only skill.
Verify the npm package publisher and source repository before installing globally, and prefer a pinned, auditable release.