Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Chapter Outliner
v3.0.0 — Chapter outline generator: creates novel chapter outlines based on a 15-beat system. Use it to produce a detailed writing outline from a story outline and a chapter number; supports word-count allocation, character references, and style injection.
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
Scanner: OpenClaw
Verdict: Suspicious (high confidence)
Purpose & Capability
The stated purpose (generate chapter outlines from local project files) matches the pure-Python local script (scripts/generate_outline.py). However, the repository also includes an LLM-backed script (scripts/generate_outline_llm.py) that calls an external LLM endpoint — this networked capability is not documented in SKILL.md nor declared in the skill metadata and therefore is disproportionate to the advertised local-outliner purpose.
Instruction Scope
SKILL.md describes local CLI usage and only lists Python, rich, and PyYAML as dependencies. It does not mention invoking a remote LLM, nor does it warn that outline.md, style.yml, characters/*.yml and chapters/index.yml will be sent to an external API. generate_outline_llm.py explicitly builds a prompt containing story, style, and character data and posts it to a remote service — that is out-of-scope relative to the SKILL.md instructions and gives the skill broad discretion to transmit user project data.
Install Mechanism
There is no install spec (instruction-only), so nothing is downloaded at install time. That lowers install risk. However, scripts include a runtime network call; also scripts/requirements.txt omits the 'requests' package (which the LLM script uses), indicating sloppy packaging/documentation that could cause surprises at runtime.
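The packaging fix implied here is small. Assuming the file currently lists only the two declared dependencies, the corrected scripts/requirements.txt would read:

```
rich
PyYAML
requests
```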
Credentials
Skill metadata declares no required environment variables, but generate_outline_llm.py requires DASHSCOPE_API_KEY (read from environment) to call the remote LLM. Requesting an API key for an external service is reasonable for an LLM-backed variant — but omitting this from the declared requirements and SKILL.md is a red flag. In addition, the code will transmit user project content to the external endpoint, meaning that providing that API key and running the LLM script grants the remote service access to potentially sensitive project files.
Persistence & Privilege
The skill does not request persistent privileges (the `always` metadata flag is false), does not modify other skills' configuration, and does not require system-wide config changes. It runs as a user-invoked CLI, so persistence and privilege concerns are low on their own.
Scan Findings in Context
[uses-external-llm-endpoint] unexpected: generate_outline_llm.py posts to https://coding.dashscope.aliyuncs.com/v1/chat/completions. The SKILL.md and metadata do not declare any remote LLM usage or external endpoints.
[reads-env-DASHSCOPE_API_KEY] unexpected: The LLM script requires DASHSCOPE_API_KEY from the environment but the skill metadata declares no required env vars — this is a missing/undeclared secret requirement.
[network-call-requests-post] unexpected: The code uses requests.post to send prompt+project data to the remote service; sending project files externally is not described in SKILL.md and may constitute data exfiltration risk depending on user data sensitivity.
[missing-requirements-requests] unexpected: scripts/requirements.txt lists 'rich' and 'PyYAML' but omits 'requests', which is required by the LLM script. This mismatch suggests packaging/documentation issues.
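Taken together, the findings describe a code shape like the sketch below. This is a hypothetical reconstruction from the scan report, not the skill's actual source: the endpoint URL and environment-variable name come from the findings, while the function name and payload fields are illustrative. The actual network call is left as a comment.

```python
import os

# Endpoint and env var named in the scan findings; everything else is illustrative.
ENDPOINT = "https://coding.dashscope.aliyuncs.com/v1/chat/completions"

def build_llm_request(outline_text, style_text, character_texts):
    """Assemble the headers and payload that the LLM script would POST.

    The full project text ends up in the request body, which is why the
    scan flags this as a potential data-exfiltration path.
    """
    api_key = os.environ.get("DASHSCOPE_API_KEY", "")  # undeclared secret requirement
    prompt = "\n\n".join([outline_text, style_text, *character_texts])
    headers = {"Authorization": f"Bearer {api_key}"}
    payload = {"messages": [{"role": "user", "content": prompt}]}
    # The flagged call would be: requests.post(ENDPOINT, headers=headers, json=payload)
    return headers, payload
```

Because every argument is folded into the prompt, any secret or sensitive text in outline.md, style.yml, or characters/*.yml travels with the request.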
What to consider before installing
This skill includes two code paths: a local generator (scripts/generate_outline.py) that operates on your local project files and is consistent with the README, and an LLM-backed script (scripts/generate_outline_llm.py) that sends your outline, style, and character data to a remote service (coding.dashscope.aliyuncs.com) and requires the DASHSCOPE_API_KEY environment variable. Before installing or running:
1. Treat the LLM script as an opt-in networked feature; do not run it unless you trust the remote service.
2. Ask the skill author to declare the DASHSCOPE_API_KEY requirement in SKILL.md and the metadata, and to document what data is sent, the retention policy, and which model/service is used.
3. If you only want offline/local usage, use scripts/generate_outline.py and avoid the LLM script; consider removing or sandboxing generate_outline_llm.py.
4. If you must test the LLM script, run it in an isolated environment with no access to sensitive files or secret environment variables.
5. Verify and fix requirements.txt (add 'requests') and review the network calls in the code.
If the author clearly documents the API key and the data handling, the concern could be resolved; until then, do not supply sensitive project content or API credentials.
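For the isolation advice above, a minimal sketch (assuming a POSIX shell with the common `env -u` extension; the script path comes from the report) is to strip the secret from the child environment before invoking anything:

```shell
# Strip the API key from the child environment, so the LLM code path
# cannot authenticate even if it is reached by mistake. Usage against
# the skill would be:
#   env -u DASHSCOPE_API_KEY python scripts/generate_outline.py
# Demonstration that the variable really is unset for the child process:
DASHSCOPE_API_KEY=secret env -u DASHSCOPE_API_KEY sh -c 'echo "key is ${DASHSCOPE_API_KEY:-unset}"'
```

For stronger isolation, run the script in a container or a throwaway user account with no access to real project files.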
latest: vk97b32nd9b49j9xtb84yq8c4h984vxnm
