Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Wechat Super Power
v0.1.2. End-to-end content assistant: topic → related-article list → knowledge base → viral-hook analysis → framework → writing. Intended for WeChat official-account / WeChat article scenarios: starting from a topic, it covers source scraping, knowledge capture, viewpoint distillation, framework organization, and writing. Trigger keywords: official account, WeChat article, post, topic selection, knowledge base, viral-hook analysis, article framework, writing. Should not be triggered by plain blog, email, PPT, or short-video scri...
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Verdict: Suspicious (medium confidence)

Purpose & Capability
The name and description (WeChat article pipeline) align with the code and instructions: the scripts implement searching, resolving Sogou redirects, fetching mp.weixin.qq.com articles, converting them to Markdown, and building a local knowledge base. The SKILL.md runtime workflow matches the scripts (Steps 1-3 scripted, Steps 4-6 prompt-driven).
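Resolving a Sogou redirect usually cannot be done by following a plain HTTP redirect: weixin.sogou.com link pages assemble the destination URL in inline JavaScript. A minimal sketch of that kind of resolver follows; the function name and parsing details are assumptions for illustration, not the skill's actual code.

```javascript
// Hypothetical sketch (not the skill's actual code): Sogou link pages build
// the real mp.weixin.qq.com URL from inline fragments like `url += 'xx';`.
// Collecting the fragments avoids executing the page's script.
function extractWeixinUrl(html) {
  const parts = [];
  const fragment = /url\s*\+=\s*'([^']*)'/g;
  let match;
  while ((match = fragment.exec(html)) !== null) {
    parts.push(match[1]);
  }
  if (parts.length === 0) return null; // no redirect script found
  // Sogou is commonly observed to pad the URL with '@' placeholders;
  // stripping them here is an assumption based on that behavior.
  return parts.join('').replace(/@/g, '');
}
```

A real resolver would first fetch the Sogou link page (with appropriate headers) and then pass the response body to a function like this.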
Instruction Scope
SKILL.md directs the agent to run local CLI scripts that perform network requests and to read/write a local ./knowledge-base/<topic>/ directory. That scope is appropriate for the stated purpose. However, the included references/writing-guide.md contains detailed '反检测' (anti-detection) rules and explicit techniques for making generated text evade automated AI detectors; this is ethically sensitive and can be used to deliberately deceive detection systems. The scripts will also fetch arbitrary URLs supplied by the user (save_web_articles supports generic web pages), so they make outbound network requests and write fetched content to disk.
Install Mechanism
Instruction-only skill with no install spec. Code is shipped in the skill bundle (JS scripts). There is no external download/install step or third-party package install that would introduce elevated supply-chain risk.
Credentials
No environment variables, no declared credentials, and no special config paths are requested. The scripts perform network I/O and filesystem writes to a local ./knowledge-base (expected and proportional to purpose).
Persistence & Privilege
The skill's "always" flag is false, and the skill does not request permanent system-wide privileges. It writes files only into topic-specific knowledge-base directories and manifest files within the skill working tree, as expected for a scraping/KB tool. The skill can be invoked autonomously by the agent (the default), which is normal; combined with the network-fetch capability and the anti-detection instructions, this increases the potential for misuse.
Scan Findings in Context
[no_findings] expected: Static pre-scan reported no suspicious regex hits. The code contains network fetch, redirect resolution, user-agent rotation, and file write operations — these are expected for a web-scraping skill.
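The user-agent rotation noted above is typically just a uniform random pick from a small pool per request. A minimal sketch, where the pool contents and names are made up for the example rather than taken from the skill's scripts:

```javascript
// Illustrative user-agent rotation: each outbound request picks one UA
// string at random from a fixed pool. The pool below is an example only.
const USER_AGENTS = [
  'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
  'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36',
  'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36',
];

function pickUserAgent(pool = USER_AGENTS) {
  // Uniform random choice; a request would send this as its User-Agent header.
  return pool[Math.floor(Math.random() * pool.length)];
}
```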
What to consider before installing
This skill appears to do what it says: search WeChat/Sogou and save articles into a local knowledge-base, then help the agent produce article frameworks and drafts. Things to consider before installing:
- Network and legal risk: running the scripts will make outbound HTTP(S) requests (including to mp.weixin.qq.com and weixin.sogou.com) and will try to work around redirect and anti-spider behavior. Ensure this complies with the target sites' terms of service and local law.
- Ethical concern: references/writing-guide.md contains detailed anti-detection techniques intended to make AI-generated articles look human. If you have policies against evading detection tools (company, platform, or legal), do not use those parts or remove them.
- Data exposure: scripts fetch and save web content to disk under ./knowledge-base/<topic>/. Do not point the skill at internal or sensitive URLs. Review the code if you plan to run it in an environment with access to sensitive networks or data.
- Operational safety: if you will allow autonomous invocation, remember the agent could run the fetch/build scripts and perform many outbound requests and file writes without further prompts. If that is not acceptable, restrict invocation or require explicit user confirmation before running scripts.
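One way to act on the data-exposure and operational-safety points above is a pre-fetch guard that refuses obviously internal targets before any script-driven request. A rough sketch, where the function name and checks are my own additions; it inspects literal hosts only and does not resolve DNS, so it is not a complete SSRF defense:

```javascript
// Rough guard sketch: reject user-supplied URLs that point at loopback,
// RFC 1918 private ranges, or internal-looking hostnames. Literal-host
// checks only; no DNS resolution, so this is not a full SSRF defense.
function isLikelyInternal(rawUrl) {
  let url;
  try {
    url = new URL(rawUrl);
  } catch {
    return true; // unparseable input: refuse rather than guess
  }
  const host = url.hostname;
  if (host === 'localhost' || host === '[::1]') return true;
  if (host.endsWith('.local') || host.endsWith('.internal')) return true;
  if (/^127\./.test(host)) return true;                      // loopback
  if (/^10\./.test(host)) return true;                       // 10.0.0.0/8
  if (/^192\.168\./.test(host)) return true;                 // 192.168.0.0/16
  if (/^172\.(1[6-9]|2\d|3[01])\./.test(host)) return true;  // 172.16.0.0/12
  return false;
}
```

A wrapper around the fetch scripts could call this and require explicit user confirmation (or simply abort) when it returns true.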
Recommended actions: review and, if necessary, remove or edit the anti-detection sections in references/writing-guide.md; run the scripts in an isolated environment (sandbox/container) the first time; avoid supplying sensitive or internal URLs; and confirm compliance with the target sites' scraping policies.

Like a lobster shell, security has layers: review code before you run it.
Latest version: vk974ngwjnvmb6gmkpmv5qz6kc584bc5n
