Intrusive Thoughts

Suspicious

Audited by ClawScan on May 10, 2026.

Overview

This skill is not obviously malicious, but it deliberately gives an agent persistent autonomous schedules, long-lived memory, and broad tool-use prompts that let it act while you are away.

Install it only if you deliberately want a persistent autonomous agent. Before enabling it, review thoughts.json and config.json, keep integrations off, create only minimal cron jobs, and require human approval for writes, posts, messages, installs, and deletions. Monitor OpenClaw cron entries, and keep the dashboard and its data directory trusted.

Findings (6)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Concern (High Confidence)
ASI10: Rogue Agents
What this means

The agent may keep waking up and taking actions while you are asleep or away.

Why it was flagged

The skill instructs the agent to perform recurring overnight activity that continues after setup and uses the agent's normal tools, rather than limiting itself to a bounded, read-only task.

Skill content
Night Workshop ... Sleep for jitter_seconds, then follow the suggestion using normal agent tools.
Recommendation

Only enable the cron jobs if you intentionally want this behavior; start with read-only suggestions and require explicit approval for writes, posts, messages, installs, deletions, or account actions.

What this means

If configured too broadly, the agent could decide to message, post, install tools, change systems, or delete data under learned trust rules.

Why it was flagged

The trust model explicitly covers high-impact actions and says they are 'almost always' escalated, not strictly forbidden or always user-approved.

Skill content
High: External messaging, system changes, deletions ... Critical: Public posts, financial operations → almost always escalate
Recommendation

Set a hard policy that autonomous runs may not perform external messaging, public posting, installs, deletions, financial actions, or system changes without a fresh human confirmation.
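One possible shape for such a hard policy is the deny-by-default gate sketched below. All names here are hypothetical, not part of the skill: every high-impact category is refused unless a fresh, per-run human confirmation for that exact category is on record.

```python
# Hypothetical hard policy: high-impact categories are denied unless this
# run has a fresh human confirmation for that specific category.
BLOCKED_WITHOUT_CONFIRMATION = {
    "external_message", "public_post", "install",
    "delete", "financial", "system_change",
}

def is_action_allowed(category: str, confirmations: set) -> bool:
    """Allow low-impact actions; high-impact ones need a per-run confirmation."""
    if category not in BLOCKED_WITHOUT_CONFIRMATION:
        return True
    return category in confirmations

# An overnight run starts with an empty confirmation set, so every
# high-impact action is blocked until a human approves it.
assert is_action_allowed("read_web", set())
assert not is_action_allowed("public_post", set())
```

The key design choice is that confirmations are per run and per category, so trust "learned" in earlier sessions cannot carry an approval forward.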

Concern (High Confidence)
ASI05: Unexpected Code Execution
What this means

Opening the dashboard could execute a Python file outside the intended skill code path if the data directory is changed or replaced.

Why it was flagged

The dashboard executes an analyze.py file from the configurable data directory, so changing or poisoning that directory could cause the dashboard to run unexpected local Python code.

Skill content
subprocess.run(['python3', str(get_data_dir() / 'analyze.py'), '--json'], capture_output=True, text=True, timeout=10)
Recommendation

Run the dashboard only with a trusted config/data_dir, and prefer changing the code to execute analyze.py from the fixed skill directory rather than the runtime data directory.
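A safer pattern, sketched here under the assumption that analyze.py ships inside the installed skill directory, builds the command from that fixed location and refuses any path that resolves outside it (for example via a symlinked analyze.py):

```python
import subprocess
from pathlib import Path

# Hedged sketch, not the skill's actual code: anchor the analyzer script
# to the fixed skill directory instead of the configurable data directory,
# so swapping data_dir cannot change which script runs.
def analysis_command(skill_dir: Path) -> list:
    script = (skill_dir / "analyze.py").resolve()
    # Reject anything that resolved outside the skill directory.
    if skill_dir.resolve() not in script.parents:
        raise ValueError(f"{script} is outside {skill_dir}")
    return ["python3", str(script), "--json"]

def run_analysis(skill_dir: Path) -> subprocess.CompletedProcess:
    return subprocess.run(
        analysis_command(skill_dir),
        capture_output=True, text=True, timeout=10,
    )
```

The resolve-then-check step matters: checking the unresolved path would let a symlink inside the skill directory point the dashboard at arbitrary local Python.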

What this means

Old activity logs, memories, or poisoned entries could affect future decisions and autonomy levels.

Why it was flagged

Persistent memory and trust state are designed to be reused across future actions, which can preserve sensitive context or let bad logs/prompts influence later autonomy.

Skill content
Multi-Store Memory — episodic, semantic, procedural memory with decay & consolidation ... Trust & Escalation — learns when to ask vs act autonomously, grows trust over time
Recommendation

Review and periodically clear the memory/trust stores, keep them in a private directory, and do not let untrusted content be written into files that guide future behavior.
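A periodic cleanup can be as simple as the sketch below. The JSON layout and the timestamp field are assumptions for illustration, not the skill's documented memory format:

```python
import json
import time
from pathlib import Path

# Hedged sketch: drop memory entries older than a retention window
# before stale or poisoned context can steer future autonomy decisions.
RETENTION_DAYS = 30

def prune_memory(store_path: Path, now=None) -> int:
    """Rewrite the store keeping only recent entries; return the count removed."""
    now = time.time() if now is None else now
    cutoff = now - RETENTION_DAYS * 86400
    entries = json.loads(store_path.read_text())
    kept = [e for e in entries if e.get("timestamp", 0) >= cutoff]
    store_path.write_text(json.dumps(kept, indent=2))
    return len(entries) - len(kept)
```

Run it from a trusted cron entry you control, not from the skill's own schedules, so the cleanup itself cannot be disabled by the memory it prunes.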

What this means

If you add account tokens or enable integrations, the agent may be able to act through those accounts.

Why it was flagged

Optional account integrations are disclosed and disabled by default, but enabling them can give the autonomous system messaging or posting authority.

Skill content
"moltbook": { "enabled": false, "api_key_file": "", "username": "" }, "telegram": { "enabled": false
Recommendation

Keep integrations disabled unless needed, use least-privilege tokens, and require manual approval before any account posting or messaging.
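The disabled-by-default config above lends itself to a load-time gate. In the sketch below (config keys assumed to match the snippet, the approval flag is hypothetical), a token is released only when the integration is explicitly enabled and a human has approved this session:

```python
import json
from pathlib import Path

# Hedged sketch: never hand the agent an account token unless the
# integration is enabled in config AND a human approved this session.
def load_integration_token(config_path: Path, name: str,
                           human_approved: bool):
    cfg = json.loads(config_path.read_text()).get(name, {})
    if not cfg.get("enabled") or not human_approved:
        return None
    key_file = cfg.get("api_key_file", "")
    return Path(key_file).read_text().strip() if key_file else None
```

Because the token never enters the agent's context unless both conditions hold, a misbehaving autonomous run simply has nothing to post or message with.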

What this means

A user may trust the skill as low-risk code while missing the higher-risk autonomous actions it asks the agent to perform.

Why it was flagged

The low-risk/read-only wording may understate the broader agent-mediated behavior described elsewhere, such as autonomous posts, messaging, tool use, and scheduled actions.

Skill content
Security Assessment ... Risk Level: LOW ... Network: Read-only access to public APIs only
Recommendation

Treat the security audit as code-level only; separately review and restrict the agent behaviors, cron jobs, integrations, and approval rules.