Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Opportunity Scout
v1.0.0 · Hunt for real, expressed user pain points and unmet demand across Reddit, HN, and configurable sources. Finds demand signals like frustration posts, feature...
by New Age Investments (@newageinvestments)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan (OpenClaw): Suspicious, medium confidence

Purpose & Capability
The name/description, config example, references, and the provided scripts (configure.py, digest.py, history.py) are coherent: they implement niche configuration, query generation/ingestion, scoring, history tracking, and digest generation for opportunity scanning across Reddit/HN/GitHub. No environment variables or binaries are required, which is consistent with an instruction-first, script-driven scanner that expects the host to provide a web_search tool.
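A rough sketch of the niche-to-query step the scan describes. The real configure.py was not fully shown, so every name and template here is hypothetical:

```python
# Hypothetical sketch of the query-generation step; the shipped
# configure.py may structure this differently.
def build_queries(niches: list[str]) -> list[str]:
    # Demand-signal templates targeting the sources named in the listing.
    templates = [
        'site:reddit.com "{n}" "is there a tool"',
        'site:news.ycombinator.com "{n}" frustrated',
    ]
    return [t.format(n=n) for n in niches for t in templates]

print(build_queries(["note-taking"]))
```

Whatever the real implementation looks like, the queries it emits are a good first thing to eyeball: they show exactly which sources the skill intends to hit.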
Instruction Scope
SKILL.md keeps the scope focused on scanning, scoring, and producing digests. It instructs the operator/agent to run the included scripts and to call a 'web_search' tool to execute queries and collect results. That implies network requests for query results (expected for this skill), and the skill writes config.json, history.json, and a findings/ directory inside its own directory. These behaviors are reasonable for the stated purpose, but any tool that performs web queries and persists scraped text should be inspected to confirm that it only queries intended sources and does not transmit collected findings to unexpected endpoints.
Install Mechanism
No install spec is provided (instruction-only with shipped scripts). There is no download or remote install step, which reduces supply-chain risk. The skill consists of local Python scripts and static reference files.
Credentials
The skill declares no required environment variables, no primary credential, and no special config paths. That is proportionate for a tool that generates queries and consumes search results. The only external dependency implied by the docs is a 'web_search' tool that the agent calls to fetch results; that tool should be a trusted capability of the host/agent.
Persistence & Privilege
always: false (default), so the skill does not request forced persistent inclusion. It writes its own config.json, history.json, and findings/ inside the skill directory, which is expected for a scanner. That said, writing scraped posts to disk may expose sensitive user content if the output directory is synced to cloud storage or otherwise shared; consider choosing an isolated output path.
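One way to act on that advice is to point the skill's output at a directory outside any synced folder. A minimal sketch, assuming a conventional per-user data location (the path layout is an assumption, not something the skill mandates):

```python
from pathlib import Path

def isolated_output_dir(base=None):
    # Hypothetical layout: keep scan findings under the user's local
    # data directory rather than inside the (possibly synced) skill dir.
    base = Path(base) if base else Path.home() / ".local" / "share"
    out = base / "opportunity-scout" / "findings"
    out.mkdir(parents=True, exist_ok=True)
    return out
```

The key property is simply that the chosen path is not under Dropbox, iCloud Drive, a synced Obsidian vault, or similar.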
What to consider before installing
This skill appears to do what it says: configure niches, generate queries, ingest search results (via a host-provided web_search tool), score signals, and write a markdown digest and history files. Before installing or running it:

1) Inspect scripts/scan_sources.py and scripts/score_signals.py (and any other truncated/omitted files) for hard-coded network endpoints, HTTP POST calls, or telemetry. Those scripts perform the core network/processing work and were not fully shown.
2) Confirm what the agent's web_search tool does and which service it queries. Ensure it is trusted and that queries and results will not be forwarded to untrusted third parties.
3) Choose an output directory that is not automatically synced to cloud backups (e.g., avoid saving digests to a synced Obsidian vault if you care about scraped-content privacy).
4) If you plan to run scheduled scans (cron), run the initial scans manually and review the generated findings to ensure the tool only collects the intended public posts.

If you can share the full contents of scan_sources.py and score_signals.py, the scan can be re-evaluated with higher confidence.
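Step 1 above can be partly automated. The sketch below greps the shipped scripts for patterns worth a closer look (hard-coded URLs, outbound HTTP, sockets, subprocesses); the skill directory name and the pattern list are assumptions, and a hit is a prompt for manual review, not proof of malice:

```python
import re
from pathlib import Path

# Patterns that warrant manual review in a skill that should only
# generate queries and consume search results.
SUSPECT = re.compile(
    r"https?://|requests\.(post|put)|urllib\.request|socket\.|subprocess\."
)

def audit(skill_dir: str) -> list[tuple[str, int, str]]:
    """Return (filename, line number, line) for each suspicious line."""
    hits = []
    for script in Path(skill_dir).rglob("*.py"):
        text = script.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            if SUSPECT.search(line):
                hits.append((script.name, lineno, line.strip()))
    return hits

# Assumes the skill was unpacked to ./opportunity-scout
for name, lineno, line in audit("opportunity-scout"):
    print(f"{name}:{lineno}: {line}")
```

An empty report does not clear the skill (obfuscated code defeats simple pattern matching), but any hit tells you exactly where to start reading.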
latest: vk97c2fr5cx7x6hdzd6ay4nf34983mfme
