Skill v2.0.0

ClawScan security

starmemo · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Suspicious · Mar 10, 2026, 6:15 AM
Verdict
suspicious
Confidence
medium
Model
gpt-5-mini
Summary
The skill generally matches its stated local memory/knowledge purpose, but multiple inconsistencies and surprising behaviors (unconditional autosave on every user input, runtime pip installs, config flags that are not consistently enforced, and mixed storage paths) mean you should review and configure it carefully before installing.
Guidance
Key things to consider before installing:

- Autosave behavior: by default, the skill's on_user_input hook saves every user message to disk (memory_data/daily). This may differ from the README/SKILL.md description of selective/heuristic saving. If you expect selective saving, review or modify the on_user_input logic or configuration before enabling it in production.
- Network calls and API keys: the skill calls LLM endpoints if you enable AI optimization and provide an API key. API keys can be persisted to a local .skill_config file; do not enable persist_key if you don't want keys written to disk. Also confirm that setting 'web' or 'allow_web_fetch' actually prevents network access in your deployment; the code does not consistently check this flag.
- Runtime pip install: the modules auto-install the 'requests' package via pip at import time (using subprocess). If you run in constrained or audited environments, pre-install dependencies or inspect that behavior first.
- Storage inconsistency: the repository contains two different storage paths (memory_data/ used by the top-level starmemo.py, and memory/ used by v2/storage). This can scatter data across directories; search your filesystem after testing to find where data is actually stored.
- Recommended mitigations: run the skill in an isolated or dev environment first; set enable_ai=false and persist_key=false before hooking it into production; inspect and, if needed, edit on_user_input to enforce selective saving triggers; review .skill_config after the first run; and avoid sending sensitive secrets to the skill until you have confirmed its behavior.
- If you want more assurance: ask the author to clarify the autosave policy, the intended use of allow_web_fetch, and why two storage paths are used; or request a small patch that respects a single 'save' policy and makes network access explicit and gated.
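The mitigations above can be captured in a conservative first-run configuration. This is a hypothetical sketch, not the skill's actual config schema; the flag names (enable_ai, persist_key, allow_web_fetch, save) are taken from the scan, and the merge helper is an illustration:

```python
# Hypothetical conservative defaults for a first run, based on the flag
# names mentioned in the scan. Every flag here is an assumption about the
# skill's config keys, not a documented schema.
SAFE_CONFIG = {
    "enable_ai": False,        # no LLM calls until behavior is verified
    "persist_key": False,      # never write API keys to .skill_config
    "allow_web_fetch": False,  # intent only: the code may not enforce this
    "save": False,             # disable unconditional autosave if honored
}

def apply_safe_defaults(user_config=None):
    """Overlay the user's settings on the conservative defaults."""
    merged = dict(SAFE_CONFIG)
    merged.update(user_config or {})
    return merged
```

Only flip individual flags to true after observing what the skill actually does with them in an isolated environment.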

Review Dimensions

Purpose & Capability
concern · The name/description (智能记忆/知识库, "intelligent memory / knowledge base") align with the code's features (local memory, knowledge extraction, LLM calls). However, there are inconsistencies: the top-level starmemo.py saves raw inputs into memory_data/daily unconditionally (global autosave), while v2/storage.py uses a different memory/ directory structure. The README and SKILL.md describe selective/heuristic saving, but the top-level on_user_input implements unconditional saving for all platforms by default; this deviates from the stated behavior and is disproportionate to a 'selective memory' expectation.
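The flagged pattern can be sketched as follows. This is an illustrative reconstruction based only on the scan's description (the hook name, the save=True fallback, and the memory_data/daily path); the skill's real implementation may differ:

```python
import datetime
import os

# Illustrative reconstruction of the unconditional-autosave pattern the
# scan flags: the config lookup falls back to saving, so every user input
# is appended to a daily file under memory_data/daily by default.
def on_user_input(text, ctx):
    cfg = ctx.get("config", {})
    if not cfg.get("save", True):  # defaults to True: saves everything
        return None
    day = datetime.date.today().isoformat()
    path = os.path.join("memory_data", "daily", f"{day}.md")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "a", encoding="utf-8") as f:
        f.write(text + "\n")
    return path
```

A selective design would instead default `save` to False, or gate the write on explicit trigger phrases, which is what the README's heuristic-saving description implies.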
Instruction Scope
concern · SKILL.md and the README describe selective triggers and a '可控联网' ("controllable network access") option to disable network fetches. In practice, the skill registers an on_user_input hook that saves every user input by default (ctx.get('config', {}) falls back to save=True). Network LLM calls are gated by the api_key and enable_ai flags, but config flags like allow_web_fetch exist yet are not consistently checked before making LLM requests (LLMClient checks enable_ai and api_key, not allow_web_fetch). That means disabling 'web' in the config may not reliably prevent network calls if enable_ai and api_key are present. The instructions and the runtime behavior therefore disagree on when and what gets saved and when network access occurs.
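The gating the scan recommends can be sketched as a client that refuses network access unless allow_web_fetch is explicitly true, in addition to the existing checks. The class name GatedLLMClient and the flag names are assumptions drawn from the scan, not the skill's actual code:

```python
# Sketch of explicit, gated network access: in addition to the enable_ai /
# api_key checks the scan says LLMClient performs, allow_web_fetch must
# also be true before any request is attempted.
class GatedLLMClient:
    def __init__(self, config):
        self.config = config

    def can_call(self):
        c = self.config
        return bool(
            c.get("enable_ai")
            and c.get("api_key")
            and c.get("allow_web_fetch")  # the check the scan found missing
        )

    def complete(self, prompt):
        if not self.can_call():
            raise PermissionError("network access disabled by config")
        # ... the real HTTP request to the LLM endpoint would go here
        return None
```

With this shape, setting allow_web_fetch=false reliably prevents network calls even when enable_ai and api_key are present.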
Install Mechanism
note · There is no install spec in the registry, but several modules (starmemo.py and v2/ai_processor.py) call an auto_install() that uses subprocess to pip-install requests at import time. That performs a network install and runs pip via subprocess during skill load; not ideal, but limited to a single well-known package (requests). This is a moderate surprise for users expecting instruction-only behavior or no runtime installs.
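For audited environments, the import-time install pattern can be replaced with one that fails loudly unless runtime installs are explicitly allowed. This is a sketch of that alternative, not the skill's actual auto_install(); the allow_install parameter is an addition for illustration:

```python
import importlib
import subprocess
import sys

# Safer variant of the auto_install() pattern the scan describes: by
# default, a missing dependency raises instead of being pip-installed
# via subprocess at import time.
def auto_install(package="requests", allow_install=False):
    try:
        return importlib.import_module(package)
    except ImportError:
        if not allow_install:
            raise RuntimeError(
                f"{package} is not installed; pre-install it "
                f"(pip install {package}) rather than allowing runtime installs"
            )
        subprocess.check_call([sys.executable, "-m", "pip", "install", package])
        return importlib.import_module(package)
```

Pre-installing dependencies keeps skill load deterministic and keeps pip invocations out of audited runtime environments.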
Credentials
ok · The skill does not request platform environment variables or external credentials at install time. LLM API keys are stored in a local .skill_config file (optionally persisted) rather than via declared env vars. That is proportionate for an LLM-backed memory tool, but note that API keys will be written to disk if 'persist_key' is enabled.
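Key handling that honors persist_key can be sketched like this. It assumes a JSON .skill_config file and the key names 'api_key' and 'persist_key' from the scan; the skill's actual file format is not documented here:

```python
import json

# Sketch of config persistence that respects persist_key: the API key is
# only written to disk when persist_key is explicitly true; otherwise it
# stays in memory and is stripped before the file is written.
def save_config(config, path=".skill_config"):
    to_write = dict(config)
    if not to_write.get("persist_key"):
        to_write.pop("api_key", None)  # keep the secret out of the file
    with open(path, "w", encoding="utf-8") as f:
        json.dump(to_write, f)
    return to_write
```

After the first run, inspect .skill_config to confirm no key material was persisted unintentionally.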
Persistence & Privilege
concern · The skill is not marked always:true, but _meta.json registers an on_user_input hook, and that hook automatically saves all user inputs by default. That gives the skill broad write access to local storage (it writes memory files and a .skill_config). Because it is invoked on every user message, its effective persistence and data-collection surface are larger than SKILL.md's selective-trigger description, which increases privacy risk.