Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Competitor Radar
v1.0.0 · Competitor activity monitoring radar. Automatically fetches competitor blog RSS feeds, GitHub Releases, and HackerNews discussions, scores them with AI to surface the important updates, and generates a structured report. Use it when you need to track competitors' latest moves or monitor industry changes.
⭐ 0 · 265 · 0 current · 1 all-time
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
high confidence

Purpose & Capability
The stated purpose (monitor blogs, GitHub, and HackerNews and produce reports) matches the code's fetching and reporting behavior, and requiring python3 is reasonable. However, the code embeds a hard-coded LLM API key and a local LLM endpoint (http://127.0.0.1:18790) directly in the scripts instead of using a declared or optional environment variable. Embedding a key in the code is disproportionate to the stated purpose and is not documented in SKILL.md or requires.env.
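A safer pattern is to resolve the endpoint and key from the environment at runtime. A minimal sketch, assuming hypothetical variable names (RADAR_LLM_ENDPOINT, RADAR_LLM_API_KEY — these are illustrative, not names the skill actually uses):

```python
import os

def llm_config(env=os.environ):
    """Hypothetical helper: pull LLM settings from the environment.

    The skill itself hard-codes these values; this sketch shows the
    environment-configured alternative the scan recommends.
    """
    endpoint = env.get("RADAR_LLM_ENDPOINT", "http://127.0.0.1:18790")
    api_key = env.get("RADAR_LLM_API_KEY")  # None => caller should use --no-ai
    return endpoint, api_key
```

With this shape, a missing key degrades gracefully (the caller can fall back to --no-ai) instead of shipping a secret inside the distributed code.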
Instruction Scope
SKILL.md only instructs running radar.py with an optional config and --no-ai, but the runtime code will call external services (GitHub API, hn.algolia, blogs) and a local LLM endpoint using a hard-coded API key. The instructions do not mention the LLM endpoint, the embedded API key, or optional env vars (e.g., GITHUB_TOKEN), so the runtime behavior is under-documented and gives the skill more network capability than the instructions disclose.
Install Mechanism
No install spec: the skill is instruction-and-code-only and requires only python3 on PATH. Install risk is low because nothing is downloaded at install time.
Credentials
The declared metadata lists no required environment variables, but the code optionally reads GITHUB_TOKEN and unambiguously contains a hard-coded LLM API key and endpoint in both radar.py and _write_radar.py. Requiring or shipping credentials in-code is not proportional: credentials should be optional and provided via environment variables or config, and any required tokens should be declared in the skill metadata.
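An optional token handled this way typically only attaches credentials when they are present. A minimal sketch of that pattern, assuming a hypothetical helper (the exact code in radar.py may differ):

```python
import os

def github_headers(env=os.environ):
    """Build GitHub API request headers, attaching a token only if provided.

    Illustrates the optional-GITHUB_TOKEN pattern the scan describes;
    unauthenticated requests still work, just with lower rate limits.
    """
    headers = {"Accept": "application/vnd.github+json"}
    token = env.get("GITHUB_TOKEN")
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return headers
```

Declaring GITHUB_TOKEN in the skill metadata would make this capability visible to users before install.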
Persistence & Privilege
The always flag is false, and there are no install hooks or modifications to other skills or system-wide settings. The skill can be invoked autonomously by the agent (the default), which is expected for skills and not a concern on its own.
What to consider before installing
Do not install or run this skill without review. The code contains a hard-coded LLM API key and local LLM endpoint (an embedded secret) and also reads an optional GITHUB_TOKEN environment variable that is not documented. This is suspicious because secrets should not be hard-coded in distributed code. Before using:
(1) inspect radar.py and _write_radar.py yourself (or with a developer) and remove any embedded API keys, replacing them with environment-configured values;
(2) supply your own LLM endpoint/key via environment variables or local config and confirm the endpoint is trusted;
(3) be aware the script will make network requests to RSS feeds, api.github.com, hn.algolia.com, and the configured LLM endpoint;
(4) if you do not control or recognize the embedded key, treat it as potentially compromised and do not expose sensitive data through the skill;
(5) prefer running it in an isolated environment (non-privileged user, network-restricted) until you have sanitized the code.
If you want, provide the full untruncated radar.py/_write_radar.py and I can point to the exact lines to change.
Like a lobster shell, security has layers: review code before you run it.
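For step (1), a quick mechanical pass can surface likely embedded secrets before a full read. A minimal sketch with illustrative patterns (not an exhaustive secret scanner, and not code from the skill):

```python
import re

# Hypothetical pre-install check: flag lines that look like embedded secrets
# or hard-coded local endpoints. Patterns are illustrative only.
SECRET_PATTERNS = [
    re.compile(r"api[_-]?key\s*=\s*['\"][A-Za-z0-9_\-]{16,}['\"]", re.I),
    re.compile(r"https?://127\.0\.0\.1:\d+"),  # hard-coded local endpoints
]

def find_suspect_lines(source: str):
    """Return (line_number, line) pairs matching any suspect pattern."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits
```

Run it over the contents of radar.py and _write_radar.py; any hit is a line to rewrite to read from the environment instead.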
latest · vk978ve6pv01drfdc1q6209a5vn82mq1a
Runtime requirements
🎯 Clawdis
Bins: python3
