Skill v1.1.0
ClawScan security
Lead Enrichment · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Suspicious · Feb 11, 2026, 8:42 PM
- Verdict: suspicious
- Confidence: medium
- Model: gpt-5-mini
- Summary: The skill broadly matches a lead-enrichment purpose, but there are unexplained gaps (undocumented secret handling and LLM/premium integrations) and minor mismatches you should verify before installing.
- Guidance: What to check before installing:
  - Inspect ~/.config/lead-enrichment/config.json after running setup; the skill will create that directory and files in your home directory.
  - The skill will optionally look for premium API keys in ~/.clawdbot/secrets.env (Hunter, Clearbit, Apollo) and uses an LLM (Claude) for talking points. These credentials are not declared in the registry metadata; only store keys you trust and expect to use, and store them securely (not world-readable).
  - The included scripts are mock/demonstration stubs: they do not perform actual scraping, but the SKILL.md implies the real implementation would use a browser/web fetcher and an LLM. If you enable a production implementation, review which network endpoints it calls and whether it respects robots.txt and rate limits.
  - Batch mode will read user-provided input files and call the enrich script; be mindful of the data you feed it (PII) and where outputs are stored or exported (exports can be piped to arbitrary webhooks/CRMs).
  - If you plan to enable premium sources or automatic CRM posting, test in an isolated environment and audit outbound network calls and logs first.
  - If the provenance/source of this skill is unknown, prefer caution: either request a verifiable source or run it locally in a controlled environment before granting any secret keys or enabling automatic/autonomous pipelines.
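The first two checks above (inspect the generated config; make the secrets file non-world-readable) can be sketched as a shell snippet. Paths are the ones named in this report; the snippet simulates them under a temporary directory so it is safe to run as-is rather than touching your real home directory:

```shell
set -eu

# Stand-in for $HOME so the sketch does not touch real files.
HOME_DIR="$(mktemp -d)"
CONFIG_DIR="$HOME_DIR/.config/lead-enrichment"
SECRETS="$HOME_DIR/.clawdbot/secrets.env"

# Simulate what setup.sh would leave behind (contents are illustrative).
mkdir -p "$CONFIG_DIR" "$HOME_DIR/.clawdbot"
printf '{"sources": ["public"]}\n' > "$CONFIG_DIR/config.json"
printf 'HUNTER_API_KEY=example\n'  > "$SECRETS"

# 1. Review the generated config before trusting it.
cat "$CONFIG_DIR/config.json"

# 2. Ensure the secrets file is readable only by its owner.
chmod 600 "$SECRETS"
ls -l "$SECRETS"
```

Against a real install, drop the simulation block and point `CONFIG_DIR`/`SECRETS` at `$HOME` instead.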
Review Dimensions
- Purpose & Capability
  - note: The name/description match the delivered artifacts: scripts, config, and instructions for scraping public sources and producing CRM-ready outputs. The SKILL.md also declares a dependency on the browser skill (reasonable for web scraping). However, some capabilities (the Claude LLM for talking points; premium data sources like Hunter/Clearbit/Apollo) are referenced in the config but not declared as required environment credentials in the registry metadata, which is a mild mismatch.
- Instruction Scope
  - note: Runtime instructions stay within the stated purpose (load config, search public sources, aggregate profiles, generate talking points). The included scripts are mostly mock/demo implementations, but they do read from and write to ~/.config/lead-enrichment and optionally check ~/.clawdbot/secrets.env for premium API keys. There are no hidden external endpoints in the delivered scripts, but the SKILL.md and config imply web-fetch/browser activity and an LLM (Claude) that would perform network access if implemented.
- Install Mechanism
  - ok: No install spec is provided (instruction-only), so no remote code downloads occur during install. The bundle includes local scripts; setup.sh will create ~/.config/lead-enrichment and copy config.example.json there. This is expected for a CLI-style skill and not disproportionate.
- Credentials
  - concern: The registry lists no required env vars, but config.example.json and setup.sh reference premium API keys in ~/.clawdbot/secrets.env (HUNTER_API_KEY, CLEARBIT_API_KEY, APOLLO_API_KEY), and the talking_points feature notes it "requires Claude." Those credentials are optional, but the skill expects them to be present in a home-directory secrets file rather than declaring them as platform-provided env inputs. This is a transparency gap you should understand before enabling premium features.
- Persistence & Privilege
  - ok: The skill does not request always:true, does not modify other skills, and only writes its own config/data under ~/.config/lead-enrichment. Writing local config/cache is normal for this type of tool.
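One way to spot-check the persistence claim (writes confined to ~/.config/lead-enrichment) is a marker-file scan with POSIX `find -newer`: record a timestamp, run the skill, then list any file outside its config directory modified since the marker. A minimal sketch, using a temporary tree as a stand-in for $HOME and a simulated skill run:

```shell
set -eu

# Stand-in for $HOME; replace with your real home dir for an actual audit.
HOME_DIR="$(mktemp -d)"
SKILL_DIR="$HOME_DIR/.config/lead-enrichment"
mkdir -p "$SKILL_DIR"

# Timestamp marker: anything modified after this is attributable to the run.
MARKER="$(mktemp)"
sleep 1   # ensure later writes have a strictly newer mtime

# Simulated skill run: a well-behaved run writes only under its config dir.
echo 'cache' > "$SKILL_DIR/cache.json"

# Files outside the skill's directory touched since the marker are suspect.
STRAY="$(find "$HOME_DIR" -type f -newer "$MARKER" ! -path "$SKILL_DIR/*")"
if [ -z "$STRAY" ]; then
    echo "clean: writes confined to $SKILL_DIR"
else
    echo "suspect files:"
    echo "$STRAY"
fi
```

For a real audit, substitute the actual skill invocation for the simulated write; a filesystem scan only catches writes, so outbound network calls still need a separate check.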
