Lead Hunter

v1.0.0

Autonomous lead generation skill. Finds freshly-funded companies matching your ideal customer profile, researches them, and delivers qualified leads with personalized outreach drafts.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt below, then paste it into OpenClaw to install zich-agent/agitech-lead-hunter.

Prompt preview: Install & Setup
Install the skill "Lead Hunter" (zich-agent/agitech-lead-hunter) from ClawHub.
Skill page: https://clawhub.ai/zich-agent/agitech-lead-hunter
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install agitech-lead-hunter

ClawHub CLI

Package manager

npx clawhub@latest install agitech-lead-hunter
Security Scan
VirusTotal: Pending
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name/description (autonomous lead generation) matches the SKILL.md and included files. The skill scrapes funding/news sources, filters leads by ICP, researches companies, scores them, and writes outputs to markdown/CSV/Asana as documented — the single Python scraper file and config.json are directly relevant.
Instruction Scope
Instructions explicitly read/write only skill-local files (scripts/config.json, scripts/seen.json, leads/, memory/). The agent is instructed to use platform tools (web_fetch, web_search, managed browser) and the local scraper as a fallback. Onboarding offers to create a cron job automatically — this is outside the skill directory and should be confirmed with the user before creation. The skill also references an Asana helper command (node skills/asana-pat/...), which relies on another skill or user-supplied Asana PAT if Asana output is chosen.
Install Mechanism
No formal install spec in registry, but scripts/scrape.py will auto-create a Python venv and run pip to install crawl4ai and Playwright (and will download Chromium). This is standard for a scraper but means the skill will download and install third‑party packages and a browser binary into the skill folder on first run — review/approve this action before executing.
Credentials
The skill does not request environment variables or credentials in the registry metadata. Asana integration is optional and would require the user to supply an Asana PAT externally; nothing in the skill silently exfiltrates secrets. The amount and type of access requested (local file reads/writes, network for scraping) are proportional to the stated function.
Persistence & Privilege
The skill is not forced always-on and does not declare elevated platform privileges, but onboarding allows creating a cron job for scheduled runs and the scraper will write a venv and state files under the skill directory. Creating system cron entries is meaningful persistence and should be user-approved.
Assessment
This skill appears coherent with its stated purpose, but take these precautions before installing or running it:

  1. Inspect scripts/scrape.py and scripts/config.json yourself (or run in a sandbox); the scraper will create a .venv, pip-install crawl4ai and Playwright, and download Chromium.
  2. Start with output type=markdown so no external service credentials are needed; only supply an Asana PAT if you trust the code and want Asana output.
  3. Be cautious when allowing the agent to create a cron job; confirm scheduling actions explicitly.
  4. Review and limit the source list (config.sources) to trusted sites to avoid excessive scraping or potential TOS issues (LinkedIn scraping may violate some sites' terms).
  5. Run python3 scripts/scrape.py --check first to see what will be installed, and consider running the skill in an isolated environment if you have security concerns.

Like a lobster shell, security has layers — review code before you run it.

latest: vk9786tn7a7vp4rkwepmgzws4ss83b6yg
144 downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

Lead Hunter

Autonomous lead generation that finds, researches, and qualifies prospects daily.

First Run (Onboarding)

If skills/lead-hunter/scripts/config.json has "configured": false, run the onboarding interview before anything else. See references/onboarding.md for the full interview flow.

After onboarding, the config is written and the skill switches to hunt mode.

Hunt Mode (Daily Run)

Step 1: Load Config

Read skills/lead-hunter/scripts/config.json for:

  • company - who you are and what you sell
  • ideal_customer - size, stage, geography, signals
  • sources - where to find leads (industry-specific)
  • output - where to put leads (asana, notion, csv, markdown)
  • outreach - DM template and personalization rules
  • filters - what to skip
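
A minimal sketch of this step, assuming the documented top-level keys (load_config is illustrative, not a helper the skill ships):

import json
from pathlib import Path

CONFIG_PATH = Path("skills/lead-hunter/scripts/config.json")

def load_config() -> dict:
    # Gate on the onboarding flag described in "First Run (Onboarding)".
    config = json.loads(CONFIG_PATH.read_text())
    if not config.get("configured", False):
        raise RuntimeError("Run the onboarding interview first (references/onboarding.md)")
    # Expected top-level keys: company, ideal_customer, sources,
    # output, outreach, filters.
    return config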

Step 2: Scrape Sources

For each source in config.sources:

  1. Try web_fetch first (fastest, no deps)
  2. If blocked (403/Cloudflare): fall back to scripts/scrape.py which uses Crawl4AI with stealth mode
  3. If still blocked: use OpenClaw's managed browser via the browser tool
  4. Last resort: use web_search with site:<domain> + freshness filter

Extract from each source:

  • Company name
  • Funding amount and round type
  • Location
  • What they do (1-2 sentences)
  • Investors (if available)
  • Article/announcement URL
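
For illustration, each extraction could land in a record shaped like this (a sketch; the field names are assumptions, not a schema the skill defines):

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Lead:
    # One funding announcement extracted from a source.
    company: str
    round_type: str               # e.g. "pre-seed", "seed"
    amount_usd: Optional[int]     # parsed funding amount, when stated
    location: str
    summary: str                  # what they do, in 1-2 sentences
    investors: list[str] = field(default_factory=list)
    url: str = ""                 # article/announcement URL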

Step 3: Filter

Apply config.filters and config.ideal_customer to keep only matching leads:

  • Round type matches (e.g., pre-seed, seed)
  • Amount in range (e.g., $500K-$10M)
  • Geography matches
  • Industry/vertical matches
  • Not in config.filters.skip_industries

Also deduplicate against scripts/seen.json (persisted list of previously found companies).
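
A sketch of the filter-and-dedup pass; the key names inside ideal_customer and filters (round_types, amount_range, geographies, skip_industries) are assumptions for illustration:

import json
from pathlib import Path

def passes_filters(lead: dict, cfg: dict) -> bool:
    # Keep only leads matching config.ideal_customer and config.filters.
    icp, filters = cfg["ideal_customer"], cfg["filters"]
    if lead["round_type"] not in icp["round_types"]:
        return False
    low, high = icp["amount_range"]
    amount = lead.get("amount_usd")
    if amount is not None and not low <= amount <= high:
        return False
    if icp.get("geographies") and lead["location"] not in icp["geographies"]:
        return False
    if lead.get("industry") in filters.get("skip_industries", []):
        return False
    return True

def dedupe(leads: list[dict], seen_file: str = "scripts/seen.json") -> list[dict]:
    # Drop companies already recorded by a previous run.
    path = Path(seen_file)
    seen = set(json.loads(path.read_text())) if path.exists() else set()
    return [lead for lead in leads if lead["company"] not in seen]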

Step 4: Research Each Lead

For each qualifying company (max 5 per run to stay fast):

  1. Website: web_fetch their site - check team page, product, tech stack
  2. Team size: web_search for LinkedIn company page - estimate headcount
  3. Key person: web_search for founder/CEO LinkedIn - get name, background, LinkedIn URL
  4. Opportunity signals: Flag if no CTO, small team, early product, tech stack match

Step 5: Score & Rank

Score each lead 1-10 based on:

  • Team size match (smaller = higher for services, bigger = higher for SaaS)
  • Funding stage match
  • Tech stack alignment
  • Opportunity signals (no CTO, hiring, etc.)
  • Recency of funding announcement
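
The skill does not specify weights; one plausible scoring function, sketched with assumed thresholds and key names:

def score_lead(lead: dict, cfg: dict) -> int:
    score = 5  # neutral baseline
    # Team size match (this sketch takes the services view: smaller = higher).
    max_team = cfg["ideal_customer"].get("max_team_size", 20)
    if lead.get("team_size") and lead["team_size"] <= max_team:
        score += 2
    if lead["round_type"] in cfg["ideal_customer"]["round_types"]:
        score += 1  # funding stage match
    if set(lead.get("tech_stack", [])) & set(cfg["company"].get("tech_stack", [])):
        score += 1  # tech stack alignment
    score += min(len(lead.get("signals", [])), 2)  # no CTO, hiring, etc.
    if lead.get("days_since_announcement", 99) <= 7:
        score += 1  # recent funding announcement
    return max(1, min(score, 10))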

Step 6: Generate Outreach

For each lead scoring 6+, generate a personalized DM draft using config.outreach.template with:

  • Founder's first name
  • Specific observation about their product/company
  • How you can help (from config.company.value_prop)
  • Soft CTA
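
As a sketch, filling such a template could be a plain format call; the placeholder names ({first_name}, {observation}, {value_prop}) are assumptions, and the soft CTA is assumed to live in the template text itself:

def draft_dm(lead: dict, cfg: dict) -> str:
    # Drafts only; per the Rules section, DMs are never sent automatically.
    return cfg["outreach"]["template"].format(
        first_name=lead["key_person"].split()[0],
        observation=lead["observation"],          # specific note on their product
        value_prop=cfg["company"]["value_prop"],  # how you can help
    )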

Step 7: Output

Depending on config.output.type:

asana:

node skills/asana-pat/scripts/asana.mjs create-task \
  --workspace <workspace_id> \
  --parent <parent_task_id> \
  --assignee me \
  --name "Lead: <Company> - <Round> <Amount>" \
  --notes "<full research + DM draft>"

markdown: Append to leads/YYYY-MM-DD.md with full details per lead.

csv: Append row to leads/leads.csv with: date, company, round, amount, location, url, key_person, linkedin, score, dm_draft
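
A sketch of the CSV append using the column order above, writing a header only when the file is first created:

import csv
from pathlib import Path

COLUMNS = ["date", "company", "round", "amount", "location", "url",
           "key_person", "linkedin", "score", "dm_draft"]

def append_lead(row: dict, path: str = "leads/leads.csv") -> None:
    target = Path(path)
    target.parent.mkdir(exist_ok=True)
    is_new = not target.exists()
    with target.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        if is_new:
            writer.writeheader()
        writer.writerow({col: row.get(col, "") for col in COLUMNS})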

notion: (future - document API integration needed)

Step 8: Update State

  • Add found companies to scripts/seen.json for dedup
  • Log summary to memory/YYYY-MM-DD.md
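
A sketch of the state update, following the file layout above (the summary string is whatever Step 9 produces):

import datetime
import json
from pathlib import Path

def update_state(found_companies: list[str], summary: str) -> None:
    # Extend the dedup list in scripts/seen.json.
    seen_path = Path("scripts/seen.json")
    seen = set(json.loads(seen_path.read_text())) if seen_path.exists() else set()
    seen_path.write_text(json.dumps(sorted(seen | set(found_companies)), indent=2))
    # Append the run summary to daily memory.
    memory_dir = Path("memory")
    memory_dir.mkdir(exist_ok=True)
    (memory_dir / f"{datetime.date.today().isoformat()}.md").write_text(summary)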

Step 9: Report

Output structured summary:

## Lead Hunter Report - YYYY-MM-DD
- Sources scraped: X
- Articles found: X
- After filtering: X leads
- Researched: X
- Qualified (score 6+): X

### Top Leads
1. **Company** - Round $Amount | Score: X/10
   Key person: Name (LinkedIn)
   Signal: [why they're a fit]

Scraping Fallback Chain

The skill uses a tiered approach to handle anti-bot protection:

  1. web_fetch - default, fastest
  2. scripts/scrape.py - Crawl4AI with stealth (handles most Cloudflare)
  3. Browser tool - OpenClaw's managed browser (handles everything but is slow)
  4. web_search site: query - last resort, gets snippets not full pages
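
The chain amounts to an escalation loop. A sketch, with each tier passed in as a callable, since web_fetch, the scrape script, the managed browser, and web_search are platform tools rather than Python functions:

class Blocked(Exception):
    # Raised by a tier on a 403 or Cloudflare challenge.
    pass

def fetch_with_fallback(url: str, tiers: list) -> str:
    # tiers: ordered callables standing in for web_fetch, scrape.py,
    # the managed browser, and web_search snippets.
    for fetch in tiers:
        try:
            page = fetch(url)
            if page:
                return page
        except Blocked:
            continue  # escalate to the next, slower tier
    return ""  # every tier failed; note the source as blocked in the report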

The scrape script auto-manages a venv at scripts/.venv/. First run:

python3 skills/lead-hunter/scripts/scrape.py --check

This creates the venv, installs crawl4ai and Playwright, and downloads the Chromium browser. Subsequent runs are instant.
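
The bootstrap that --check performs is roughly the following (a sketch, not the actual scrape.py code):

import subprocess
import sys
from pathlib import Path

VENV = Path("skills/lead-hunter/scripts/.venv")

def ensure_venv() -> Path:
    # Create the venv and install dependencies on first run only.
    python = VENV / "bin" / "python"
    if not python.exists():
        subprocess.run([sys.executable, "-m", "venv", str(VENV)], check=True)
        subprocess.run([str(python), "-m", "pip", "install", "crawl4ai", "playwright"], check=True)
        subprocess.run([str(python), "-m", "playwright", "install", "chromium"], check=True)
    return python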

Source Discovery

When the user picks an industry during onboarding, the skill suggests relevant lead sources. See references/sources.md for the industry-to-source mapping.

Users can add custom sources at any time by editing config.sources in config.json.

Rules

  • Never send DMs automatically - only draft them
  • Max 5 fully-researched leads per run (quality > quantity)
  • Always deduplicate against seen.json
  • Log every run to daily memory
  • If a source is consistently blocked, note it in the report so the user can adjust
