Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw verdict: Suspicious (medium confidence)

Purpose & Capability
The name and description match the implemented functionality: summarization and project review. The code and SKILL.md legitimately request optional LLM API keys (zhipu/openai/anthropic/etc.) for enhanced behavior. However, the packaged code imports UniversalLLMClient from a module (llm_client) that is not included, and the install/usage text references ../llm_config.py, indicating the skill expects external components that are not bundled with it. That dependency mismatch is unexpected and reduces the package's coherence.
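Before running anything, you can confirm whether the missing llm_client dependency actually resolves on your system. A minimal sketch (the module name comes from the scan above; everything else is standard library):

```python
import importlib.util

def dependency_present(module_name: str) -> bool:
    """True if the named module can be resolved on the current sys.path."""
    return importlib.util.find_spec(module_name) is not None

# The packaged code imports UniversalLLMClient from `llm_client`, which is
# not bundled with the skill, so expect False unless you have obtained and
# installed that module separately.
print(dependency_present("llm_client"))
```

If this prints False, the skill cannot run as shipped, which confirms the dependency mismatch noted above.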
Instruction Scope
SKILL.md and the code instruct the user to create ~/.ai_llm_config.json and possibly to run ../llm_config.py; summary_review_llm.py writes a sqlite database to the user's home directory (~/.ai_summary.db). The code does not touch unrelated system files, but because runtime behavior (network endpoints, telemetry, how API keys are used) is delegated to the external UniversalLLMClient, whose implementation is not included, the skill's runtime scope is underspecified and could permit unreviewed network transmission of data or credentials.
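Even without the client implementation, a quick static pass over whatever source you do have can show which network-capable modules the code imports. A sketch using the stdlib ast module (the skill directory path is hypothetical; the module list is a starting point, not exhaustive):

```python
import ast
from pathlib import Path

# Top-level modules that can open network connections; extend as needed.
NETWORK_MODULES = {"socket", "http", "urllib", "requests", "httpx",
                   "aiohttp", "openai", "anthropic"}

def network_imports(source: str) -> set:
    """Return the network-capable top-level modules imported by `source`."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            found.update(a.name.split(".")[0] for a in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found & NETWORK_MODULES

# Hypothetical: scan every .py file shipped with the skill.
for path in Path("skill_dir").glob("**/*.py"):
    print(path, network_imports(path.read_text()))
```

This won't catch dynamic imports or behavior hidden inside llm_client, but it does make the declared network surface of the bundled files explicit.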
Install Mechanism
No remote downloads or obscure installers. install.sh uses pip to install public packages (openai, optionally anthropic) and creates a run.sh wrapper. requirements.txt lists public packages. This is a low-risk install approach, but it does install network-capable SDKs which will use any API keys provided.
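Since install.sh just wraps pip, one way to keep those network-capable SDKs out of your global site-packages is a throwaway virtual environment. A sketch using the stdlib venv module (the sandbox directory name is arbitrary):

```python
import os
import venv
from pathlib import Path

def make_sandbox(env_dir: str) -> Path:
    """Create a disposable virtualenv so the skill's `pip install openai`
    (what install.sh effectively runs) never touches the global interpreter.
    Returns the path to the sandbox's own pip executable."""
    venv.EnvBuilder(with_pip=True, clear=True).create(env_dir)
    bin_dir = Path(env_dir) / ("Scripts" if os.name == "nt" else "bin")
    return bin_dir / ("pip.exe" if os.name == "nt" else "pip")

# Hypothetical sandbox directory; point the skill's install at this pip.
pip_path = make_sandbox("skill-sandbox")
print(pip_path)
```

Deleting the sandbox directory afterwards removes everything the install pulled in.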
Credentials
The skill declares no required env vars and lists several optional API keys (ZHIPU/OpenAI/ANTHROPIC/DEEPSEEK/DASHSCOPE/MOONSHOT) which are consistent with multi-LLM support. Requesting multiple model API keys is reasonable here, but those are sensitive secrets; the package will attempt to use whatever provider keys are present, and because the UniversalLLMClient implementation is not included, it's unclear exactly how keys are stored/used/transmitted.
Persistence & Privilege
The skill is not always-enabled, is user-invocable, and doesn't request elevated privileges. It does create and use a local sqlite DB at ~/.ai_summary.db and a config file at ~/.ai_llm_config.json — expected for a summarization tool but worth noting as persistent data written to the user's home directory.
What to consider before installing
This skill appears to implement summarization and project review as advertised, but it depends on external components that are not included in the package: llm_client (UniversalLLMClient) and the referenced ../llm_config.py. Before installing or running it, obtain and inspect the implementations of llm_client and llm_config.py to verify where requests go and how API keys are handled. Treat any API keys you provide as sensitive: prefer limited-scope or test keys, run the skill in an isolated environment, and monitor its network activity. If you cannot review the external client, treat the unresolved dependency as a risk and avoid providing real credentials or running the skill against sensitive data.
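If you do trial the skill without having reviewed the external client, you can at least strip real provider keys from the child process environment. A sketch (the run.sh wrapper name comes from the install notes above; treating every *_API_KEY variable as sensitive is an assumption):

```python
import os
import subprocess

SENSITIVE_SUFFIX = "_API_KEY"

def sanitized_env() -> dict:
    """Copy of the current environment with all *_API_KEY values removed,
    so an unreviewed client has no real credentials to transmit."""
    return {k: v for k, v in os.environ.items()
            if not k.endswith(SENSITIVE_SUFFIX)}

# Hypothetical invocation of the skill's wrapper with no real credentials:
# subprocess.run(["./run.sh"], env=sanitized_env(), check=True)
print(sorted(k for k in os.environ if k.endswith(SENSITIVE_SUFFIX)))
```

Combined with network monitoring, this limits the blast radius to the project files you feed the tool.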
Latest version: vk975e0wq8yfn5hyfh55d5wm7hd82zkr3
