Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
proactive-agent-3.1.0
v1.0.0
Transform AI agents from task-followers into proactive partners that anticipate needs and continuously improve. Now with WAL Protocol, Working Buffer, Autono...
⭐ 0 · 235 · 0 current · 0 all-time
MIT-0
Security Scan
OpenClaw
Suspicious · medium confidence
Purpose & Capability
The name and description (proactive, WAL, working buffer, heartbeats) align with the included docs and scripts: the skill expects to read and write workspace files (SESSION-STATE.md, memory/*, USER.md, etc.) and to run a local security-audit script. No unexpected external credentials or binaries are requested. One minor inconsistency: some files (AGENTS.md, SOUL.md) contain statements like "Don't ask permission. Just do it.", which conflict with other guardrails in the same bundle that require human approval for external or destructive actions.
Instruction Scope
SKILL.md and supporting docs instruct the agent to scan messages, write to SESSION-STATE.md and working-buffer files, run ./scripts/security-audit.sh, search memory, and 'try 5–10 approaches', including spawning agents and using tools. These behaviours are consistent with a proactive-agent purpose, but several directives are ambiguous or contradictory: some passages explicitly forbid external actions without approval, while others encourage acting without asking. That ambiguity grants the agent broad discretionary power at runtime if the host does not enforce explicit action gating.
Install Mechanism
No install spec is present (instruction-only plus one audit script). No remote downloads or package installs. The included shell script is local and readable; it performs benign checks (file permissions, grep/scan) and does not download or execute remote code.
Credentials
The skill declares no required environment variables or credentials. The docs reference local config paths (e.g., .credentials/, $HOME/.clawdbot/clawdbot.json) and expect the agent to check them, which is plausible for a stateful agent. Because the skill doesn't request cloud credentials, there's no apparent overreach — but the agent will be instructed to read/write/work with local files that may contain secrets, so ensure those files are protected.
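As a precaution for that last point, local secret files can be locked down before the skill is allowed into the workspace. A minimal POSIX-shell sketch; the demo-workspace layout is illustrative, standing in for the repo-local .credentials/ directory the docs reference:

```shell
# Illustrative workspace; the real skill reads paths like
# .credentials/ and $HOME/.clawdbot/clawdbot.json.
mkdir -p demo-workspace/.credentials
printf 'token=example\n' > demo-workspace/.credentials/api.token

# Owner-only access: directory 700, files 600.
chmod 700 demo-workspace/.credentials
chmod 600 demo-workspace/.credentials/api.token

# Keep secrets out of version control.
touch demo-workspace/.gitignore
grep -qx '.credentials/' demo-workspace/.gitignore || \
    echo '.credentials/' >> demo-workspace/.gitignore
```

Note that an agent running as the same user can still read 600-mode files, so file permissions complement, rather than replace, platform-level action gating.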
Persistence & Privilege
The skill is not marked always:true (good) and uses the normal autonomous invocation model. However, multiple places in the content encourage autonomous action without explicit user approval (e.g., 'Don't ask permission. Just do it.' and 'Ask forgiveness, not permission'), which increases risk when combined with default autonomous invocation. Platform-level gating for outbound actions and approvals is recommended before enabling this skill.
Scan Findings in Context
[prompt-injection-pattern:ignore-previous-instructions] expected: The phrase appears in the docs as an example of injection patterns to detect (references/security-patterns.md, HEARTBEAT.md). This is expected for a security-focused skill. Presence in docs is not itself malicious, but if such text appeared in runtime prompts as instructions to the agent, it would be dangerous.
[prompt-injection-pattern:you-are-now] expected: Included in the skill's security-patterns as an injection example. Expected in context of detection rules; not evidence of exploitation in the files themselves.
[prompt-injection-pattern:system-prompt-override] expected: The SKILL.md and references include 'system prompt' override examples to teach detection and defense. This explains the pre-scan detection; still, the evaluator should ensure the agent framework isolates system prompts and does not execute text from untrusted sources as instructions.
What to consider before installing
This bundle appears to implement a plausible proactive-agent architecture and includes a harmless local security-audit script, but it contains contradictory directives about acting without permission and several places that instruct broad tool use (spawn agents, run searches, write files). Before installing or enabling autonomous invocation:
1) Inspect and remove or reword lines that say "Don't ask permission" or "Ask forgiveness, not permission" if you want strict gating.
2) Ensure your agent runtime enforces human approval for outbound actions (sending messages, deleting files, external network access).
3) Run the provided ./scripts/security-audit.sh in an isolated workspace to see what local secrets/configs it flags.
4) If you plan to let the skill write to local memory files, verify that .credentials and other sensitive files are properly protected and gitignored.
5) Prefer enabling this skill in a sandboxed environment first and monitor any autonomous actions.
If you want higher assurance, ask the maintainer to clarify the ambiguous permission rules and to provide an explicit list of actions that require user approval.
assets/HEARTBEAT.md:11
Prompt-injection style instruction pattern detected.
references/security-patterns.md:9
Prompt-injection style instruction pattern detected.
SKILL-v2.3-backup.md:179
Prompt-injection style instruction pattern detected.
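Running the bundled audit in isolation, as the checklist above suggests, is straightforward: copy the bundle into a throwaway directory and execute the script there, so anything it scans or flags stays inside the sandbox. A sketch using a stand-in script, since the real ./scripts/security-audit.sh ships with the bundle:

```shell
# Create a disposable sandbox standing in for an unpacked copy of the bundle.
sandbox=$(mktemp -d)
mkdir -p "$sandbox/scripts"

# Stand-in for the bundle's ./scripts/security-audit.sh; per the report,
# the real script performs permission checks and grep-based scans.
cat > "$sandbox/scripts/security-audit.sh" <<'EOF'
#!/bin/sh
echo "audit: scanning $(pwd) for world-readable files and secret-like strings"
EOF
chmod +x "$sandbox/scripts/security-audit.sh"

# Run the audit from inside the sandbox so relative paths resolve there.
audit_output=$( cd "$sandbox" && ./scripts/security-audit.sh )
echo "$audit_output"
rm -rf "$sandbox"
```

Reviewing the captured output before enabling the skill shows exactly which local files the audit would touch in a real workspace.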
About static analysis
These patterns were detected by automated regex scanning. They may be normal for skills that integrate with external APIs. Check the VirusTotal and OpenClaw results above for context-aware analysis.
Like a lobster shell, security has layers — review code before you run it.
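The regex scanning described here can be approximated in a few lines. A hedged sketch: the patterns below are illustrative guesses at the kind of rules involved, not the scanner's actual ruleset, and the sample file mimics how this bundle quotes injection phrases as detection examples:

```shell
# Sample file reproducing the kind of text the scanner flags; here the
# phrase appears as a documented detection example, as in this bundle.
workdir=$(mktemp -d)
cat > "$workdir/security-patterns.md" <<'EOF'
Detection rule: flag text such as "ignore previous instructions"
or "you are now" when it appears in untrusted input.
EOF

# Case-insensitive regex scan over the files; -l lists matching files.
hits=$(grep -rEil \
    -e 'ignore (all )?previous instructions' \
    -e 'you are now' \
    -e 'system prompt' \
    "$workdir")
[ -n "$hits" ] && echo "flagged for human review: $hits"
rm -rf "$workdir"
```

A match only says the phrase is present; distinguishing documentation of an attack from an actual attack still takes human review, which is exactly the "expected in context" judgment this report applies.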
latest · vk974z171tb0bqr6ncav30e28vx82kfnm
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
