Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

LinkedIn Company Scout

v1.0.1

Collect company intelligence for sourcing or research by automating Google Chrome against LinkedIn, company websites, and Google Maps. Use when Codex needs t...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for 549800894/linkedin-company-scout-m1.

Prompt preview (Install & Setup):
Install the skill "LinkedIn Company Scout" (549800894/linkedin-company-scout-m1) from ClawHub.
Skill page: https://clawhub.ai/549800894/linkedin-company-scout-m1
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install linkedin-company-scout-m1

ClawHub CLI


npx clawhub@latest install linkedin-company-scout-m1
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's stated purpose (collect company profiles and enrich emails) matches the included scraping code. However, the full-pipeline script invokes external, user-local scripts (EMAIL_PUSH_SCRIPT and DASHBOARD_REFRESH_SCRIPT) located under /Users/m1/Documents/Playground, so running the full pipeline will execute code outside the skill bundle. The skill metadata declares no credentials or external services, yet the pipeline clearly includes an email-sending phase and references external project scripts, which is disproportionate and environment-specific.
Instruction Scope
SKILL.md and the scripts instruct the agent to attach Selenium to the user's real Chrome session (opening a remote debugging port), reuse or create a persistent Chrome profile under the user's home directory, enable OpenClaw heartbeat via the 'openclaw' CLI, crawl LinkedIn, visit external websites and Google Maps, and optionally run an email campaign and refresh a dashboard. The instructions explicitly reference absolute local paths and an external email-push script, which expands scope beyond simple scraping and grants the skill the ability to execute arbitrary local scripts and send email.
Install Mechanism
There is no declared install spec (instruction-only + included scripts). At runtime the scripts use webdriver_manager.chrome to download and install ChromeDriver if needed — an implicit network download. While webdriver_manager is common for Selenium, this implicit fetch is not declared in the metadata and will write to disk. No other external downloads were found in the bundle.
Credentials
The registry metadata declares no required environment variables or credentials, but the full pipeline expects an SMTP password (passed as --smtp-password) to run the email phase. The skill also uses the 'openclaw' CLI if present and creates or uses local files (Chrome profile dir, sqlite DB). Asking for an SMTP password at runtime and passing it on the command line is sensitive and not represented in the manifest, which is an inconsistency and a risk of secret exposure.
Persistence & Privilege
always:false (normal). The scripts create a persistent Chrome profile under the user's home (~/.linkedin-company-scout/chrome-profile) and write a sqlite DB and output files in user-specified locations. They do not appear to modify other skills or system-wide configs. Creating a persistent browser profile and opening a remote-debugging port are elevated actions the user should be aware of, but are coherent with the skill's automation goal.
What to consider before installing
Before running or installing this skill, consider the following:

  • The full pipeline executes external scripts outside the skill folder (e.g., /Users/m1/Documents/Playground/email-ops/push_design_services_campaign.py). Inspect those scripts first; they run with your user privileges.
  • The skill declares no required credentials, but the email phase expects an SMTP password passed on the command line. Passing secrets via CLI is insecure (shell-history exposure). Only provide credentials you are willing to expose, or use a throwaway/test account.
  • The scraping code attaches to your real Google Chrome via a remote debugging port and creates a persistent profile under your home directory. This opens a local TCP port and modifies browser profile data; run it only if you trust the code and understand the implications.
  • The scripts use webdriver_manager to download ChromeDriver automatically at runtime (network fetch plus disk write). If your environment restricts downloads, be aware the driver may be fetched on first run.
  • If you want only data collection (no email sending or dashboard refresh), run run_linkedin_company_scout.py by itself and avoid run_full_pipeline.py. Consider using --no-heartbeat and running with test keywords and a restricted output directory first.

Recommended precautions:

  • Inspect the external EMAIL_PUSH_SCRIPT and dashboard scripts referenced by run_full_pipeline.py before executing the pipeline.
  • Run the skill in an isolated account, VM, or sandbox if possible.
  • Never provide your primary SMTP credentials; use a test mailbox and the --recipient-override flag to confirm behavior before any real sends.
  • If you cannot review the external scripts, do not run the full pipeline; limit execution to the contained scout script and review its outputs manually.
Given these inconsistencies between the skill manifest and its runtime behavior, proceed carefully — the skill is coherent for scraping, but the extra implicit behaviors make it suspicious until audited.
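The CLI-secret concern above can be reduced (not eliminated) with an interactive prompt. A minimal sketch, assuming the documented --smtp-password flag; the script path is illustrative, and note the password stays out of shell history but still appears in the child process's argv:

```python
import getpass
import subprocess

def build_pipeline_cmd(password: str) -> list[str]:
    # Flags mirror this page's documented CLI; the script path is a
    # placeholder to adjust for your install.
    return [
        "python3", "scripts/run_full_pipeline.py",
        "--send-email",
        "--smtp-password", password,
    ]

# Interactive use (not executed here): getpass keeps the secret out of
# shell history, though it is still visible in the child's argv.
# pw = getpass.getpass("SMTP password (test mailbox only): ")
# subprocess.run(build_pipeline_cmd(pw), check=True)
```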

Like a lobster shell, security has layers — review code before you run it.

latest vk97dx5v2wa0n6em710129s0ee984cqhz
86 downloads
0 stars
2 versions
Updated 2w ago
v1.0.1
MIT-0

LinkedIn Company Scout

Collect company profiles in a repeatable way when the user wants lead discovery or market scanning from LinkedIn first, then enrichment from official websites and Google Maps.

Read references/output-schema.md before collecting. Read references/heartbeat-and-browser.md before opening the browser or starting a long run.

Run run_linkedin_company_scout.py when the task is a straight collection run and the environment is macOS with Google Chrome plus Selenium available.

Run run_full_pipeline.py when the user wants the full automation chain:

  • data mining
  • email sending (skip already successful sends)
  • dashboard data refresh

Workflow

  1. Normalize the request.
  2. Prepare heartbeat and browser session.
  3. Collect candidate companies from LinkedIn.
  4. Enrich each company from its official website.
  5. Fallback to Google Maps for email only when the official site does not expose one.
  6. Produce a structured result with source attribution and gap notes.

Normalize The Request

  • Use the user's keywords exactly unless they ask for expansion.
  • Treat each keyword as an independent collection bucket.
  • Default target count to 5 companies per keyword unless the user specifies another number.
  • Exclude companies whose operating location is in China, whose LinkedIn location is China, or whose website clearly shows China-only presence.
  • Prefer one row per unique company. Deduplicate across keywords, but keep the matched keyword on each retained row.
  • If one company matches multiple keywords, keep separate output rows only when the user explicitly wants overlap preserved. Otherwise keep the strongest keyword match and note the dropped keyword matches.

Prepare Heartbeat And Browser Session

  • Use the normal installed Google Chrome application, not an embedded browser and not a freshly created separate automation window.
  • Prefer the current Chrome window after attachment. The bundled script launches Chrome with a remote debugging port only when needed, then continues work in tabs under that real Chrome session.
  • The bundled script stops only the automation driver on exit. It does not intentionally close the user's Chrome session.
  • Before starting collection, enable OpenClaw heartbeat:
openclaw system heartbeat enable
  • If heartbeat commands fail, continue the collection task but mention that monitoring could not be enabled.
  • If LinkedIn is not already logged in inside the automation profile, allow a manual login pause, then resume the scripted run.
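Before attaching, it may help to check whether the default debugging port 9222 is already in use; the skill's --debug-port flag exists for exactly that case. A stdlib-only sketch:

```python
import socket

def debug_port_open(host: str = "127.0.0.1", port: int = 9222) -> bool:
    """Return True if something is already listening on Chrome's debug port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0
```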

Run The Script

Use this command for the common case:

python3 /Users/m1/.codex/skills/linkedin-company-scout/scripts/run_linkedin_company_scout.py \
  --keywords "industrial design,hardware design,smart wearable" \
  --count 5 \
  --output-dir /Users/m1/Documents/Playground/linkedin-company-scout-output

Use this command when the user wants deep collection such as 100 companies per keyword with pagination:

python3 /Users/m1/.codex/skills/linkedin-company-scout/scripts/run_linkedin_company_scout.py \
  --keywords "industrial design,hardware design,smart wearable" \
  --count 100 \
  --max-search-pages 20 \
  --output-dir /Users/m1/Documents/Playground/linkedin-company-scout-output-100

Useful flags:

  • --no-heartbeat: skip OpenClaw heartbeat enablement
  • --chrome-profile-dir <path>: keep a dedicated Chrome profile with a persistent LinkedIn login
  • --debug-port <port>: change the Chrome debugging port if 9222 is occupied
  • --linkedin-wait-seconds <n>: allow more time for manual login
  • --max-search-pages <n>: scan additional LinkedIn result pages when one page is not enough

Full Pipeline (Mining + Email + Dashboard)

Use this single command when the user asks for the complete execution flow:

python3 /Users/m1/.codex/skills/linkedin-company-scout/scripts/run_full_pipeline.py \
  --keywords "industrial design" \
  --count 200 \
  --output-dir /Users/m1/Documents/Playground/linkedin-company-scout-output-industrial-200-verified \
  --db-path /Users/m1/Documents/Playground/linkedin-company-scout-output-industrial-200-verified/results.db \
  --max-search-pages 400 \
  --no-heartbeat \
  --send-email \
  --send-backend imap-smtp-email \
  --smtp-password "<SMTP_PASSWORD>" \
  --refresh-dashboard

Key behavior:

  • Email phase defaults to not passing --allow-resend, so records with status='sent' in prior campaigns are skipped.
  • Email template is persisted as 通用模版 ("generic template") in the template DB.
  • Push logs include send timestamp + content snapshot in campaign DB and source DB history table.
  • Dashboard refresh regenerates linkedin-dashboard/dashboard-data.js.

Expected outputs:

  • linkedin_company_scout_results.json
  • linkedin_company_scout_results.csv
  • run_metadata.json

Collect From LinkedIn

  • Search LinkedIn for companies related to the current keyword.
  • Prefer actual company pages over posts, people, jobs, schools, or groups.
  • Accept a company only after confirming all required profile fields can be filled or marked as unavailable with a reason.
  • Capture these fields from LinkedIn whenever available:
    • company name
    • company website
    • company summary or about text
    • industry
    • location
    • LinkedIn URL
    • matched keyword
  • Keep the company only if the location is outside China.
  • If LinkedIn does not expose the website but the company page clearly references an official domain elsewhere, use the official domain and note that the website came from a nonstandard LinkedIn surface.

Enrich From Official Website

  • Visit the official website for each accepted company.
  • Search obvious contact surfaces first: Contact, About, Footer, Legal, Imprint, Support, Team.
  • Capture one or more email addresses when present.
  • If multiple emails exist, prefer the most general business contact such as hello@, info@, contact@, support@, or a departmental email clearly relevant to external inquiries.
  • Record the email source as official_website.
  • If no email is visible, note the checked page or section so the absence is auditable.

Fallback To Google Maps For Email

  • Only use Google Maps when the official website did not yield any email address.
  • Search by exact company name plus location to reduce mismatches.
  • Confirm the listing is the same business before taking contact details from it.
  • Capture an email only when it is explicitly displayed on the listing or clearly reachable through the listing's surfaced contact details.
  • Record the email source as google_maps.
  • If Google Maps also has no usable email, leave the email field blank and set the email source to not_found.

Output Rules

  • Return results in a structured table or JSON using the schema in references/output-schema.md.
  • Every accepted row must include:
    • keyword
    • company_name
    • company_website
    • company_intro
    • industry
    • location
    • linkedin_url
    • email
    • email_source
  • Do not omit required fields. If a field cannot be found, leave it empty and add a brief note in notes.
  • Maintain the target count of accepted companies per keyword (default 5) when possible. If fewer valid non-China companies are found, report the shortfall clearly.
  • Keep source attribution concise and explicit. Valid values are official_website, google_maps, or not_found.
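The required-field rule can be enforced with a small validator. Field names are taken verbatim from the list above; emitting an empty string for unknowns matches the "do not omit" rule, and the helper name is illustrative:

```python
REQUIRED_FIELDS = (
    "keyword", "company_name", "company_website", "company_intro",
    "industry", "location", "linkedin_url", "email", "email_source",
)
VALID_SOURCES = {"official_website", "google_maps", "not_found"}

def normalize_row(row: dict) -> dict:
    """Emit every required field, empty when unknown, plus any notes."""
    out = {field: row.get(field, "") for field in REQUIRED_FIELDS}
    if out["email_source"] and out["email_source"] not in VALID_SOURCES:
        raise ValueError(f"bad email_source: {out['email_source']}")
    if row.get("notes"):
        out["notes"] = row["notes"]
    return out
```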

Quality Bar

  • Prefer companies with enough public information to fill the profile cleanly.
  • Avoid agencies, stealth entities, duplicate subsidiaries, and irrelevant firms unless the keyword space is sparse.
  • Do not infer an email address from a pattern such as firstname@domain.com unless the exact address is shown publicly.
  • Do not fabricate industries, locations, or descriptions. Use the public wording when available, otherwise summarize faithfully.
  • When summarizing a company intro, keep it short and factual.
  • Respect site friction. If LinkedIn presents a login wall, checkpoint, or anti-bot screen, slow down and continue only after the user session is valid.

Deliverable Format

  • Provide one result block per keyword followed by a combined summary.
  • For each keyword, state:
    • accepted company count
    • rejected or skipped count if material
    • any shortfall against the target of 5
  • Include a final note on whether OpenClaw heartbeat monitoring was enabled successfully.
