Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Prospect Enrichment

v1.0.0

Enriches prospect and company profiles by scraping their website and searching for additional context to build comprehensive profiles. Use when the user wants...

0 stars · 205 downloads · 0 current · 0 all-time
by Mario Karras (@mariokarras)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for mariokarras/abm-prospect-enrichment.

Prompt Preview: Install & Setup
Install the skill "Prospect Enrichment" (mariokarras/abm-prospect-enrichment) from ClawHub.
Skill page: https://clawhub.ai/mariokarras/abm-prospect-enrichment
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install abm-prospect-enrichment

ClawHub CLI


npx clawhub@latest install abm-prospect-enrichment
Security Scan

  • VirusTotal: Benign
  • OpenClaw: Suspicious (medium confidence)

Purpose & Capability
The described goal (scrape a target site and run web searches to build a profile) is consistent with the SKILL.md workflow. However, the instructions rely on local CLIs (node tools/clis/firecrawl.js and node tools/clis/exa.js) and reference alternative skills, yet the skill declares no required binaries, no install steps, and no primary credential. This is an inconsistency: if those CLIs are required, they should be declared or bundled.
Instruction Scope
Instructions explicitly direct the agent to run local scripts (node tools/clis/firecrawl.js and node tools/clis/exa.js) and to read product-marketing-context files if present (.agents/product-marketing-context.md or .claude/product-marketing-context.md). Running local node scripts can execute arbitrary code from the workspace; reading local context files means the skill will access repository or agent-local files beyond the target website. Both behaviors expand the skill's scope beyond pure remote scraping/search and deserve caution.
Install Mechanism
There is no install spec (instruction-only), so nothing will be written to disk by the skill itself. That lowers supply-chain risk, but also means the skill assumes certain local tooling exists without declaring it.
Credentials
The skill requests no environment variables, credentials, or config paths. That is proportional to the stated purpose. Note: the instruction to read product-marketing-context files accesses local files, which is allowed by the skill but not declared in a 'requires.config' field.
Persistence & Privilege
always is false and the skill is user-invocable with normal autonomous invocation allowed. There is no evidence it requests persistent system-wide privileges or attempts to modify other skills or agent configs.
What to consider before installing
This skill's goal (enrich prospect profiles by scraping a site and performing searches) fits its description, but pay attention to two practical risks before installing:

  1. Missing declared tooling: The runtime instructions call local node CLIs (node tools/clis/firecrawl.js and node tools/clis/exa.js) but the skill declares no required binaries or install steps. Verify those CLIs actually exist in your agent environment and inspect them before allowing execution. If they are absent, the skill will fail; if they exist, executing them will run whatever code is in your workspace.
  2. Local file access and code execution: The skill tells the agent to read .agents/product-marketing-context.md or .claude/product-marketing-context.md if present. That can expose repository-local context. More importantly, invoking local node scripts means the agent could execute arbitrary code located in your project. Only enable this skill in environments where you trust the workspace contents, or modify the instructions to call vetted tools or remote services instead.

Other considerations: ensure you are legally permitted to scrape the target sites and that scraped data handling complies with privacy rules and terms of service.

If you want to proceed, ask the skill author to (a) declare required binaries (e.g., node, firecrawl, exa) or provide an install spec, and (b) explicitly document what local files it will read and why. If you cannot verify the local CLIs, treat this as higher-risk and avoid enabling autonomous execution.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97fvq0sqex3363g2tksz579fx834rgr
205 downloads · 0 stars · 1 version
Updated 9h ago
v1.0.0 · MIT-0

Prospect Enrichment

You are an expert at building comprehensive prospect profiles. Your goal is to scrape a prospect's website for primary data, then search for supplementary context to create an enriched profile that gives sales and marketing teams everything they need to engage effectively.

Before Starting

Check for product marketing context first: If .agents/product-marketing-context.md exists (or .claude/product-marketing-context.md in older setups), read it before asking questions. Use that context and only ask for information not already covered or specific to this task.
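
The context check above can be sketched in shell. The two paths come from the skill text; the helper name is hypothetical:

```shell
# Print the first product-marketing context file that exists, newer path first.
find_ctx() {
  for f in .agents/product-marketing-context.md .claude/product-marketing-context.md; do
    [ -f "$f" ] && { printf '%s\n' "$f"; return 0; }
  done
  return 1
}

find_ctx || echo "no context file found; ask the user directly"
```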

Understand the situation (ask if not provided):

  1. Which company? -- Company name and website URL
  2. What do you want to learn? -- Specific areas of interest (tech stack, funding, team, pain points), or use defaults to cover all areas
  3. Why are you researching them? -- Outreach, partnership, competitive analysis, deal prep
  4. What do you already know? -- Any existing context about this prospect to avoid redundant research

Work with whatever the user gives you. A company name and URL is enough to start. If they only have a name, search for the URL first.


Workflow

Step 1: Gather Context

Review product-marketing-context if available. Ask the user for the company name and URL if not provided. Clarify research goals or default to a full enrichment profile.

Step 2: Scrape Prospect Site with Firecrawl

Start with the prospect's own website. This is your primary data source -- what the company says about itself.

Scrape the homepage:

node tools/clis/firecrawl.js scrape [prospect-url]

Scrape key pages for deeper context:

node tools/clis/firecrawl.js scrape [prospect-url]/about
node tools/clis/firecrawl.js scrape [prospect-url]/pricing
node tools/clis/firecrawl.js scrape [prospect-url]/team

Try alternate paths if the above return 404:

  • /about-us, /about/team, /leadership, /our-team
  • /plans, /pricing-plans
  • /careers, /jobs (useful for inferring growth and priorities)
  • /customers, /case-studies (useful for understanding their market)
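
The fallback paths above can be tried in a loop. A minimal sketch; the firecrawl CLI is taken from the skill instructions and is not bundled, and the helper name is hypothetical:

```shell
# Build candidate URLs for one site section (trailing slash on the base is stripped).
candidates() {
  base="${1%/}"; shift
  for p in "$@"; do printf '%s%s\n' "$base" "$p"; done
}

candidates "https://example.com" /team /about-us /leadership /our-team
# Each printed URL would then be tried in order, stopping at the first success:
#   node tools/clis/firecrawl.js scrape "$url" && break
```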

Optionally discover page structure:

node tools/clis/firecrawl.js map [prospect-url]

Use the sitemap to identify pages worth scraping that you might have missed (blog, integrations, docs, changelog).
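
Filtering the mapped URLs down to those sections can be done with a simple grep. This assumes the map command emits one URL per line, which is not documented by the skill:

```shell
# Keep only mapped URLs whose path points at a section worth scraping.
filter_pages() {
  grep -E '/(blog|integrations|docs|changelog)(/|$)'
}

# Stand-in for `node tools/clis/firecrawl.js map [prospect-url]` output:
printf '%s\n' \
  "https://example.com/blog/launch" \
  "https://example.com/contact" \
  "https://example.com/docs/api" | filter_pages
```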

What to extract from site scraping:

  • Company description and positioning
  • Product/service overview
  • Pricing tiers and model
  • Team members and leadership
  • Customer logos and case studies
  • Technology indicators (job postings, integrations page)
  • Company size signals (office locations, team page)

Step 3: Search for Company Context with Exa

After scraping the site, search for external context that the company doesn't publish on their own site.

Funding and financial context:

node tools/clis/exa.js search "[company name] funding" --num-results 5

Recent news and developments:

node tools/clis/exa.js search "[company name] news 2025 2026" --num-results 5

Technology stack and infrastructure:

node tools/clis/exa.js search "[company name] technology stack" --num-results 5

Reviews and reputation:

node tools/clis/exa.js search "[company name] reviews" --num-results 5

Optional targeted searches based on research goals:

node tools/clis/exa.js search "[company name] hiring engineering" --num-results 5
node tools/clis/exa.js search "[company name] partnerships integrations" --num-results 5
node tools/clis/exa.js search "[company name] CEO interview" --num-results 5
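
The standard search set above can be generated in one loop. A sketch; the exa CLI is assumed from the skill instructions, and the helper name is hypothetical:

```shell
# Emit the standard enrichment queries for one company; each line would be passed to
#   node tools/clis/exa.js search "<query>" --num-results 5
queries() {
  company="$1"
  for q in "funding" "news 2025 2026" "technology stack" "reviews"; do
    printf '%s %s\n' "$company" "$q"
  done
}

queries "Acme Corp"
```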

Step 4: Synthesize Enriched Prospect Profile

Combine data from site scraping (primary) and web search (supplementary) into the output format below. Clearly distinguish between confirmed facts (from the company's own site) and inferred information (from external sources).


Output Format

Enriched Prospect Profile: [Company Name]

Company Overview

| Field | Value |
| --- | --- |
| Company Name | [Full legal/brand name] |
| Website | [URL] |
| One-Line Description | [What they do in one sentence] |
| Founded | [Year, if available] |
| Headquarters | [City, State/Country] |
| Employee Count | [Estimate with source: site, LinkedIn, news] |
| Industry | [Primary industry/vertical] |

What They Do

2-3 paragraph summary of the company's product/service, target market, and positioning. Based primarily on their own site content.

Tech Stack

| Category | Technologies |
| --- | --- |
| Frontend | [Inferred from site, job postings, or search] |
| Backend | [Inferred from job postings, integrations, or search] |
| Infrastructure | [Cloud provider, CDN, etc.] |
| Key Integrations | [From integrations page or search] |

Note what is confirmed (from their site/job postings) vs inferred (from external sources).

Team

| Person | Role | Notable |
| --- | --- | --- |
| [Name] | [Title] | [Background, previous companies, relevant experience] |

Include: CEO/Founder, CTO, VP Sales/Marketing, and other key leadership. Note team size if available.

Funding

| Round | Date | Amount | Investors |
| --- | --- | --- | --- |
| [Series X] | [Date] | [Amount] | [Lead investor, others] |

Total raised: [Amount]
Last valuation: [If available, note if confirmed or estimated]

Pain Points

Based on the company's positioning, reviews, job postings, and content themes, these are likely pain points:

  1. [Pain Point] -- [Evidence: where this was inferred from]
  2. [Pain Point] -- [Evidence]
  3. [Pain Point] -- [Evidence]

Mark each as Confirmed (from reviews, direct statements) or Inferred (from positioning, hiring patterns, content themes).

Research Confidence

| Section | Confidence | Source |
| --- | --- | --- |
| Company Overview | High/Medium/Low | [Primary source] |
| Tech Stack | High/Medium/Low | [Primary source] |
| Team | High/Medium/Low | [Primary source] |
| Funding | High/Medium/Low | [Primary source] |
| Pain Points | High/Medium/Low | [Primary source] |

Tips

  • One prospect at a time. This workflow is designed for deep single-company research. For batch prospecting, see exa-lead-generation.
  • Scrape first, search second. The company's own site is the most reliable source. Use external search to fill gaps and add context the company doesn't publish.
  • Mark confirmed vs inferred. Sales teams need to know what's verified and what's a hypothesis. Always note your confidence level.
  • Check the careers page. Job postings reveal tech stack, growth priorities, team gaps, and organizational challenges better than almost any other source.
  • Look at the pricing page carefully. Pricing model and tiers reveal target market, average deal size, and competitive positioning.
  • Don't over-scrape. 4-6 key pages plus targeted Exa searches is usually enough. Diminishing returns set in quickly.

Related Skills

  • cold-email -- Use enriched profiles to write personalized outreach
  • exa-company-research -- Raw company search when you don't need full enrichment
  • exa-lead-generation -- Build lead lists across multiple companies
  • firecrawl-cli -- Raw site scraping commands and options
  • competitive-intelligence -- Multi-competitor broad analysis (vs single-prospect deep dive)
