Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Memory Strategy
v0.1.0 · Use this skill when the user needs to manage conversation memory. Trigger conditions include:
- The user says "note this down", "remember this", "don't forget", "save this permanently", "this is a key point"
- The user queries historical information: "how was this done before", "find the record about...", "recall..."
- The session ends and needs automatic organization and archival
- Information importance needs to be scored to decide where it is stored
- Memory needs updating...
by @hugoiku
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Verdict: Suspicious (high confidence)
Purpose & Capability
The name and description (manage conversation memory, short/long-term storage, decay, scoring, silent archival) are consistent with the SKILL.md content: creating a .memory tree, scoring entries, and performing retrieval. However, the manifest declares no code or env requirements while the instructions expect helper scripts and an external Kimi API. The design also explicitly includes a contacts.md described as '相关API密钥' (related API keys), which is unexpected for a general memory manager and suggests storing secrets in plain memory files; this does not align cleanly with a benign memory-management purpose.
Instruction Scope
SKILL.md instructs agents to (a) create .memory in project root and write logs/indexes, (b) run scripts (evaluate_importance.py, silent_agent.py, retrieve_memory.py, update_index.py) to extract and auto-write conversation contents, and (c) use an external Kimi API if available. Yet the skill bundle contains no scripts or code and no explicit boundaries for what is extracted/written. It explicitly lists contacts.md as holding API keys. The instructions therefore ask the agent to read, extract, and persist potentially sensitive conversation data and credentials, with vague controls and no provided tooling — scope creep and privacy risk.
Install Mechanism
Instruction-only skill with no install spec and no code files; nothing will be automatically downloaded or executed by the installer. This is the lowest-risk install mechanism. The remaining risk comes from the runtime instructions the agent would follow.
Credentials
The SKILL.md references an external KIMI_API_KEY and recommends using the Kimi API for automatic scoring, but the skill manifest declares no required environment variables. There is also an explicit recommendation to store '相关API密钥' (related API keys) in the long-term contacts.md. Requesting or encouraging storage of API keys and credentials in plaintext project files is disproportionate and insecure for a general memory-management skill.
Persistence & Privilege
always:false (good). The skill's Silent Agent concept instructs automatic extraction and writing of session content at session end or after timeout, which gives it potential persistent presence on disk (creates .memory and writes logs/indexes). While not requesting platform-level persistence privileges, this behavior has privacy implications because it writes conversation content and may persist secrets unless the user explicitly configures otherwise.
What to consider before installing
This skill is suspicious for several concrete reasons you should resolve before installing/using it:
- Missing artifacts: The SKILL.md references multiple helper scripts (evaluate_importance.py, silent_agent.py, retrieve_memory.py, update_index.py) and an assets/references directory, but the published skill contains no code files. Ask the publisher to provide the scripts or explain how the agent is expected to run them. Running arbitrary, unspecified scripts is risky.
- Undeclared external credential: The docs recommend using a Kimi API (KIMI_API_KEY) for scoring but the skill manifest does not declare any required env vars. If you intend to supply such a key, require confirmation about where/how it will be used and stored.
- Secrets storage risk: The design explicitly lists contacts.md in long-term memory and calls it '相关API密钥' (related API keys). That suggests storing API keys/credentials in plaintext within .memory/long-term — this is insecure. Do not store secrets in plain files. Prefer a secure secret/vault, or at minimum require guidance to encrypt or restrict permissions and add .memory to .gitignore.
- Automatic write behavior: Silent Agent will extract and write conversation content at session end or after a timeout. Decide whether you want the agent to auto-write and where files are stored. If you keep it, restrict directory location, set strict file permissions, and avoid saving sensitive conversations or credentials.
- Next steps to reduce risk before use:
1) Request the missing scripts and review their code (or ask for a manifest that lists the exact operations the agent will run).
2) Ask the author to declare required env vars (e.g., KIMI_API_KEY) in the manifest and justify them.
3) Remove any guidance to store API keys in contacts.md; implement secret-storage best practices (vault, encrypted files, or OS keyring) instead.
4) Configure .memory to be outside source control (add to .gitignore), set restrictive file permissions, and audit contents regularly.
5) If you cannot validate the missing scripts and secret-handling behavior, avoid enabling automatic Silent Agent writes and prefer manual review of any extracted content.
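If you do decide to let the skill write a .memory directory, steps 3 and 4 above can be enforced programmatically. The sketch below is illustrative, not part of the published skill: it assumes a POSIX filesystem, and the helper names (`harden_memory_dir`, `get_kimi_key`) are hypothetical. It creates .memory with owner-only permissions, keeps it out of version control, and reads the scoring key from the environment rather than from a plaintext contacts.md.

```python
import os
import stat
from pathlib import Path


def harden_memory_dir(project_root: str) -> Path:
    """Create .memory with owner-only permissions and keep it out of git."""
    root = Path(project_root)
    memory_dir = root / ".memory"
    memory_dir.mkdir(mode=0o700, exist_ok=True)
    # mkdir's mode argument is masked by the process umask,
    # so enforce owner-only access explicitly.
    os.chmod(memory_dir, stat.S_IRWXU)

    # Add .memory/ to .gitignore exactly once.
    gitignore = root / ".gitignore"
    entry = ".memory/"
    existing = gitignore.read_text().splitlines() if gitignore.exists() else []
    if entry not in existing:
        with gitignore.open("a") as fh:
            fh.write(entry + "\n")
    return memory_dir


def get_kimi_key() -> str:
    """Read the scoring key from the environment, never from a plaintext file."""
    key = os.environ.get("KIMI_API_KEY")
    if not key:
        raise RuntimeError(
            "KIMI_API_KEY is not set; refusing to fall back to "
            "plaintext storage such as contacts.md"
        )
    return key
```

An OS keyring or a dedicated secrets vault would be stronger still; the environment-variable approach is the minimum that keeps credentials out of the .memory tree.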
Because these inconsistencies combine privacy and credential-handling risks, treat this skill as suspicious until the publisher provides the missing code and clear, secure handling of credentials and persistent storage.
Like a lobster shell, security has layers — review code before you run it.
