Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results below before using it.

Opportunity Scout

v1.0.0

Hunt for real, expressed user pain points and unmet demand across Reddit, HN, and configurable sources. Finds demand signals like frustration posts, feature...

by New Age Investments (@newageinvestments25-byte)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for newageinvestments25-byte/nai-opportunity-scout.

Prompt preview (Install & Setup):
Install the skill "Opportunity Scout" (newageinvestments25-byte/nai-opportunity-scout) from ClawHub.
Skill page: https://clawhub.ai/newageinvestments25-byte/nai-opportunity-scout
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install nai-opportunity-scout

ClawHub CLI


npx clawhub@latest install nai-opportunity-scout
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name/description, config example, references, and the provided scripts (configure.py, digest.py, history.py) are coherent: they implement niche configuration, query generation/ingestion, scoring, history tracking, and digest generation for opportunity scanning across Reddit/HN/GitHub. No required environment variables or binaries are declared, which is consistent with an instruction-first, script-driven scanner that expects the host to provide a web_search tool.
Instruction Scope
SKILL.md keeps scope focused on scanning, scoring, and producing digests. It instructs the operator/agent to run the included scripts and to call a 'web_search' tool to execute queries and collect results. That implies network requests for query results (expected for this skill), and the skill writes config.json, history.json, and findings/ into its directory. Those behaviors are reasonable for the stated purpose, but any tool that performs web queries and persists scraped text should be inspected to confirm it only queries intended sources and does not transmit collected findings to unexpected endpoints.
Install Mechanism
No install spec is provided (the skill is instruction-only, with shipped scripts). There is no download or remote-install step, which reduces supply-chain risk. The skill consists of local Python scripts and static reference files.
Credentials
The skill declares no required environment variables, no primary credential, and no special config paths. That is proportionate for a tool that generates queries and consumes search results. The only external dependency implied by the docs is a 'web_search' tool the agent should call to fetch results — that should be a trusted capability of the host/agent.
Persistence & Privilege
always:false (default) — the skill does not request forced persistent inclusion. It writes its own config.json, history.json, and findings/ inside the skill directory, which is expected for a scanner. That said, writing scraped posts to disk may surface sensitive user content if the output directory is synced to cloud storage or otherwise exposed; consider choosing an isolated output path.
What to consider before installing
This skill appears to do what it says: configure niches, generate queries, ingest search results (via a host-provided web_search tool), score signals, and write a markdown digest and history files. Before installing or running it:

  1. Inspect scripts/scan_sources.py and scripts/score_signals.py (and any other truncated or omitted files) for hard-coded network endpoints, HTTP POST calls, or telemetry. These scripts perform the core network and processing work and were not fully shown.
  2. Confirm what the agent's web_search tool does and which service it queries. Ensure it is trusted and that queries and results won't be forwarded to untrusted third parties.
  3. Choose an output directory that isn't automatically synced to cloud backups (e.g., avoid saving digests to a synced Obsidian vault if you care about the privacy of scraped content).
  4. If you plan to run scheduled scans (cron), run the first scans manually and review the generated findings to confirm the tool only collects intended public posts.

If you can share the full contents of scan_sources.py and score_signals.py, I can re-evaluate with higher confidence.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97c2fr5cx7x6hdzd6ay4nf34983mfme
91 downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

Opportunity Scout

Hunt for real demand signals — not news, not trends, but people expressing pain, frustration, and unmet needs that represent building opportunities.

Skill Directory

All paths below are relative to this skill's directory.

  • scripts/configure.py — manage niches, keywords, sources, schedule
  • scripts/scan_sources.py — generate search queries and process results
  • scripts/score_signals.py — score and rank findings
  • scripts/digest.py — generate prioritized markdown digest
  • scripts/history.py — track signals over time, detect trends
  • references/signal-types.md — what counts as a demand signal (read when scoring)
  • references/source-guide.md — how to configure sources effectively
  • assets/config.example.json — example niche configurations

Data Files

All state lives in the skill directory:

  • config.json — active configuration (created by configure.py)
  • history.json — signal history log (created by history.py)
  • findings/ — raw and scored finding files per scan

Workflow

First-Time Setup

  1. Run configure.py --init to create config.json from the example, or build it up with individual commands (a guessed result is sketched after this list):
    • configure.py --add-niche "AI tools for small business" --keywords "wish,need,looking for,alternative to,frustrated"
    • configure.py --add-source reddit:r/SaaS,reddit:r/smallbusiness,hackernews
    • configure.py --set-schedule daily
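
Either way, the exact schema of config.json is defined by configure.py and assets/config.example.json, neither of which is reproduced here. As a rough guess from the flags above, the resulting file might resemble:

{
  "niches": [
    {
      "name": "AI tools for small business",
      "keywords": ["wish", "need", "looking for", "alternative to", "frustrated"]
    }
  ],
  "sources": ["reddit:r/SaaS", "reddit:r/smallbusiness", "hackernews"],
  "schedule": "daily"
}

Treat the field names as assumptions; check the shipped example before editing the file by hand.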

Running a Scan

Execute these steps in order (a glue-code sketch follows the list):

  1. Generate queries: Run scan_sources.py --generate-queries to get optimized search queries. It prints JSON with query strings.

  2. Execute searches: For each query, call the web_search tool. Collect all results into a JSON array and save to a temp file.

  3. Ingest results: Run scan_sources.py --ingest <results.json> to parse raw search results into standardized findings. Outputs findings JSON.

  4. Score findings: Run score_signals.py <findings.json> to score each finding on signal strength, engagement, freshness, competition, and recurrence. Outputs scored JSON.

  5. Update history: Run history.py --update <scored.json> to log findings and detect trend patterns (persistent, emerging, fading).

  6. Generate digest: Run digest.py <scored.json> to produce the markdown report. Use --output <path> to save to a specific location (e.g., Obsidian vault). Use --max-results 20 to limit output.
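
A minimal glue sketch of these six steps, assuming the flags documented above, that each script prints its JSON to stdout, and that the host agent supplies the web_search tool. The query-output shape (a top-level "queries" list) is a guess; verify it against the real --generate-queries output.

import json
import subprocess

def run(args):
    # Run a skill script and return its stdout (assumes scripts print JSON/markdown there).
    return subprocess.run(["python"] + args, capture_output=True, text=True, check=True).stdout

def web_search(query):
    # Placeholder: in practice the host agent's web_search tool executes the query.
    raise NotImplementedError("provided by the host agent")

# 1. Generate queries (assumed output shape: {"queries": ["...", ...]}).
queries = json.loads(run(["scripts/scan_sources.py", "--generate-queries"]))["queries"]

# 2. Execute searches and collect every hit into one array.
results = [hit for q in queries for hit in web_search(q)]
with open("results.json", "w") as f:
    json.dump(results, f)

# 3. Ingest raw results into standardized findings.
with open("findings.json", "w") as f:
    f.write(run(["scripts/scan_sources.py", "--ingest", "results.json"]))

# 4. Score each finding.
with open("scored.json", "w") as f:
    f.write(run(["scripts/score_signals.py", "findings.json"]))

# 5. Log findings and detect trend patterns.
run(["scripts/history.py", "--update", "scored.json"])

# 6. Write the markdown digest (default output location per the cron notes below).
run(["scripts/digest.py", "scored.json", "--output", "findings/digest.md", "--max-results", "20"])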

Quick Scan (Single Command Summary)

For a rapid scan of a single niche without full config:

  1. Run scan_sources.py --quick "developer tools for AI agents" to get queries
  2. Execute web_search for each query
  3. Pipe the results through score_signals.py and digest.py (see the command sketch below)
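
One plausible end-to-end sequence, assuming the scripts print JSON to stdout as described above (the shell redirections are not from the skill docs):

scripts/scan_sources.py --quick "developer tools for AI agents"
# run web_search for each printed query; save all hits to results.json
scripts/scan_sources.py --ingest results.json > findings.json
scripts/score_signals.py findings.json > scored.json
scripts/digest.py scored.json --max-results 20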

Reading References

  • Before scoring or evaluating signals manually, read references/signal-types.md for the taxonomy of demand signals and how to distinguish real demand from noise.
  • When helping users configure sources, read references/source-guide.md.

Cron Integration

Set schedule in config.json via configure.py --set-schedule daily|weekly. When triggered by cron, run the full scan workflow above. Save digest to the user's preferred output location (default: skill directory findings/).
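
As a minimal sketch, assuming a hypothetical wrapper script scan.sh that drives the full workflow above (not shipped with the skill), a daily crontab entry might look like:

# daily scan at 07:00; scan.sh is a hypothetical wrapper you write yourself
0 7 * * * cd /path/to/skill && ./scan.sh >> findings/cron.log 2>&1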

Key Design Principles

  • Demand, not news: Every finding should express unmet need, frustration, or a gap. Filter aggressively — 10 strong signals beat 100 weak ones.
  • Batch queries: Combine niche + keywords into fewer, broader queries rather than one query per keyword. Respect rate limits.
  • Track over time: Signals that persist across scans are more valuable than one-offs. Use history.py to surface persistent demand and fading trends.
  • Score honestly: high engagement + low competition + recurring = strong opportunity. Don't inflate scores; the user needs signal, not noise. A toy version of this heuristic is sketched below.
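
To make the last principle concrete, here is a toy composite score. The field names and weights are assumptions for illustration; the real logic lives in score_signals.py, which is not reproduced here.

def opportunity_score(finding):
    # All field names below are hypothetical, not taken from score_signals.py.
    engagement = min(finding.get("upvotes", 0) + finding.get("comments", 0), 100) / 100
    freshness = 1.0 if finding.get("age_days", 999) <= 30 else 0.5
    openness = 1.0 - finding.get("competition", 0.5)  # 0.0 = crowded, 1.0 = open field
    recurrence = 1.0 if finding.get("recurring") else 0.6
    # High engagement + low competition + recurring => strong opportunity.
    return round(100 * engagement * freshness * openness * recurrence, 1)

print(opportunity_score({"upvotes": 80, "comments": 40, "age_days": 5,
                         "competition": 0.2, "recurring": True}))  # 80.0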
