Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Smart Agent Workflow
v1.2.0 · AI Agent workflow-methodology Skill, focused on the "3 Highs" (high quality, high efficiency, high savings). Provides task-type classification, WBS decomposition, P0/P1 tiered reporting, security checks, and context management. Channel-agnostic: works with Claude Code, Cursor, Codex, OpenClaw, or any other AI agent. The only Skill offering a complete working methodology.
by Mark (@whhaijun)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan (OpenClaw)
Verdict: Suspicious (medium confidence)
Purpose & Capability
The name and description (Smart Agent Workflow) align with the contents: extensive process docs, WBS/reporting templates, context-management scripts, and a memory_manager module. The code and scripts (compress, archive, safe write, metrics, memory manager) are coherent with a workflow/memory-management skill.
Instruction Scope
SKILL.md explicitly instructs operators to add '读取 ~/smart-agent/AGENTS.md 并遵守所有规范' ("read ~/smart-agent/AGENTS.md and follow all its rules") to the agent's system prompt (e.g., CLAUDE.md / Rules for AI). The repo also contains scripts that read and write local files, plus a memory_manager that saves history/memory and calls an ai_client to compress memory. These behaviors fall within the declared purpose, but they give the skill the ability to (a) inject long-running directives into an agent's system prompt (prompt influence), (b) read and persist user context and preferences locally (which may include sensitive information if present), and (c) run background compression through an ai_client. SKILL.md is not malicious on its face, but modifying a system prompt is a high-impact action that can be abused; verify what you allow the agent to load and persist.
Install Mechanism
There is no install spec (the skill is instruction-only), which keeps risk lower. The included docs show optional external install commands (e.g., curl | sh for Ollama) and recommend cloning a GitHub repo. These are user-run actions, not automatic ones, but executing external install scripts carries general supply-chain risk and should be audited before running.
Credentials
The skill declares no required env vars or credentials, which matches its documentation. Some docs show optional environment variables (TELEGRAM_BOT_TOKEN, OLLAMA_*) for optional integrations, but those are not required by the skill itself. The memory manager will persist local files under the storage directory it is given; no cloud credentials are requested by the package, so the requested privileges are proportional to the stated functionality. Still, review whether stored memory could contain secrets.
Persistence & Privilege
always:false is set and no special platform-wide privileges are requested. The skill expects to persist and manage local files (memory/hot.md, logs, metrics, reports) and to run helper scripts (safe_write.sh, compress_hot.sh). Persisting agent-policy material into an agent's system prompt (by asking you to add AGENTS.md to CLAUDE.md) is powerful: it gives the skill long-term influence over agent behavior. This is consistent with its purpose but widens the blast radius if the content contains unsafe instructions or if the memory stores sensitive data.
Scan Findings in Context
[system-prompt-override] expected: The SKILL.md and README explicitly instruct operators to add the AGENTS.md rules to the agent's system prompt (e.g., "读取 ~/smart-agent/AGENTS.md 并遵守所有规范", i.e., "read ~/smart-agent/AGENTS.md and follow all its rules"). That is expected for a workflow/policy skill (it needs to enforce its rules), but it is also a prompt-injection vector: it grants persistent directive power over the agent. Treat this as high-impact and review AGENTS.md before applying.
What to consider before installing
This skill appears to implement a coherent "agent workflow + memory" system, but it has two things you should check before installing or enabling it:
1) Prompt / policy injection: The SKILL.md recommends adding ~/smart-agent/AGENTS.md into an agent's system prompt (CLAUDE.md / Rules for AI). That gives the repository persistent control over agent behavior. Do NOT add it blindly — first read the AGENTS.md and related docs and decide if you trust these rules. Prefer adding rules to a sandboxed agent or enabling them only per-session rather than globally.
2) Local persistence & privacy: integrations/memory_manager.py writes history and memory files and will call an ai_client to compress/summarize conversations. Confirm (a) where those files will be stored, (b) whether they may contain secrets or PII, and (c) which AI client/endpoints will be used for compression (local-only vs remote). If you keep sensitive data in your agent environment, consider disabling automated compression or ensuring the ai_client is local and cannot send data to external services.
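Before enabling automated compression, it can help to check whether the persisted memory already contains secret-like strings. A minimal sketch, assuming the memory files live under ~/smart-agent/memory (the actual path depends on the storage directory you configure, and the pattern list is illustrative, not exhaustive):

```shell
# Scan the skill's memory directory for secret-like strings before
# allowing automated compression to summarize (and possibly transmit) them.
# MEMORY_DIR is an assumption: substitute the storage path you configured.
MEMORY_DIR="${MEMORY_DIR:-$HOME/smart-agent/memory}"
PATTERN='(api[_-]?key|secret|token|password|BEGIN [A-Z]+ PRIVATE KEY)'

if [ -d "$MEMORY_DIR" ]; then
    # -r: recurse, -i: case-insensitive, -E: extended regex, -l: filenames only
    matches=$(grep -rilE "$PATTERN" "$MEMORY_DIR" || true)
    if [ -n "$matches" ]; then
        echo "Potential secrets found in:"
        echo "$matches"
    else
        echo "No obvious secrets found in $MEMORY_DIR"
    fi
else
    echo "Memory directory $MEMORY_DIR does not exist yet"
fi
```

A hit does not prove a leak, but any flagged file deserves a manual read before you let an ai_client summarize it, especially if the client is remote.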
Additional practical steps:
- Inspect scripts (safe_write.sh, compress_hot.sh, archive_logs.sh, generate_report.sh) before running them. They operate on the local filesystem and are used by the skill.
- Avoid running curl | sh install commands from documentation unless you audit them (docs suggest running Ollama installer).
- If you want to limit risk, use the skill only as user-invoked (do not allow autonomous invocation to modify system prompts or write persistent files), or run it in an isolated/sandboxed agent account with no access to production secrets.
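The script-inspection step above can be partially automated. A hedged sketch, assuming the skill is cloned to ~/smart-agent (SKILL_DIR and the flagged-command list are assumptions; adjust both to your setup):

```shell
# Pre-install audit: list network and privilege-sensitive commands in the
# skill's helper scripts before running any of them.
# SKILL_DIR is an assumption: point it at your local clone of the skill.
SKILL_DIR="${SKILL_DIR:-$HOME/smart-agent}"

for script in safe_write.sh compress_hot.sh archive_logs.sh generate_report.sh; do
    path="$SKILL_DIR/$script"
    [ -f "$path" ] || continue
    echo "== $script =="
    # Flag lines that reach the network, escalate privileges, or pipe a
    # download straight into a shell (the curl | sh pattern).
    grep -nE 'curl|wget|nc |ssh |sudo|\| *(ba)?sh' "$path" || echo "  (no flagged lines)"
done
```

An empty report is not a guarantee of safety (scripts can obfuscate), but any flagged line is a concrete place to start a manual review.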
If you want, I can list the exact files/lines that cause the highest-risk behaviors (the AGENTS.md prompt-injection instruction, memory_manager's compress routine, and the docs' external-install commands) so you can review them line-by-line.
