Skill v1.0.1

ClawScan security

Job Lead Radar · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Benign · Apr 7, 2026, 7:55 PM
Verdict
benign
Confidence
high
Model
gpt-5-mini
Summary
The skill's code and instructions align with its stated purpose (scraping public job boards and saving results locally); it requests no credentials, doesn't phone home to unexpected endpoints, and contains no obvious exfiltration or privilege-escalation behavior.
Guidance
This skill appears to be what it claims: a local scraper that writes job_leads.json. Before installing or running it:

1. Review the pip package 'scrapling' (source, recent releases) before installing, to avoid pulling a malicious package.
2. Consider running the script in an isolated environment (virtualenv or sandbox) and inspect its network activity if concerned.
3. Be aware that scraping sites like Indeed, ZipRecruiter, and LinkedIn may trigger anti-bot protections and may violate those sites' terms of service; use responsibly.
4. The skill writes files to ~/.openclaw/skills/job-lead-radar and logs via the cron examples; verify that the cron entries and log retention meet your privacy policies.

If you want higher assurance, request the upstream repository or maintainer information (homepage/source) and verify the source of the scrapling dependency.
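For the "inspect network activity" suggestion, one lightweight approach is to wrap Python's socket.create_connection before importing and running the scraper, so every outbound connection attempt is logged. This is an illustrative sketch, not part of the skill; the scraper invocation is a placeholder.

```python
import socket

# Record every (host, port) the audited script tries to reach, so you can
# confirm it only contacts the expected job boards. Audit sketch only; the
# skill's own code is not modified.
observed = []

_real_create_connection = socket.create_connection

def audited_create_connection(address, *args, **kwargs):
    observed.append(address)  # (host, port) exactly as the caller requested
    return _real_create_connection(address, *args, **kwargs)

socket.create_connection = audited_create_connection

# ... import and run the skill's scraper here, then review `observed` ...
# Any destination that is not a job board warrants a closer look.
```

Reviewing `observed` after a run gives a quick sanity check that the script talks only to the job boards it claims to scrape.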

Review Dimensions

Purpose & Capability
ok · The name/description (job lead scraping) matches the included script and SKILL.md. The script fetches public job board pages and extracts titles/companies/links; no unrelated services, credentials, or system resources are requested.
Instruction Scope
ok · SKILL.md instructs running the provided Python script and optionally scheduling it via cron; outputs are written to a local job_leads.json in the skill directory. The instructions do not ask the agent to read arbitrary user files or environment variables, or to transmit data to endpoints other than the job boards.
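The flow described above (fetch public pages, extract title/company/link, write job_leads.json locally) can be sketched with the standard library alone. The URL handling, regex, and field names below are illustrative placeholders; the actual skill uses the scrapling package and its own per-board selectors.

```python
import json
import re
import urllib.request
from pathlib import Path

# Placeholder pattern: the real skill's selectors differ per job board.
JOB_RE = re.compile(
    r'<a[^>]+href="(?P<link>[^"]+)"[^>]*class="job"[^>]*>'
    r'(?P<title>[^<]+) at (?P<company>[^<]+)</a>'
)

def fetch(url: str) -> str:
    """Fetch one public job-board page (no credentials, no other endpoints)."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read().decode("utf-8", errors="replace")

def extract_jobs(html: str) -> list:
    """Pull title/company/link records out of one page of HTML."""
    return [m.groupdict() for m in JOB_RE.finditer(html)]

def save_leads(leads: list, path: Path) -> None:
    """Write results locally, as the skill does with job_leads.json."""
    path.write_text(json.dumps(leads, indent=2))
```

Nothing in this pattern requires secrets or non-local writes, which is consistent with the scan's findings.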
Install Mechanism
note · There is no install spec (instruction-only). The script depends on a third-party pip package ('scrapling'); installation is left to the user (pip install scrapling). This is common, but you should vet the scrapling package (source, trustworthiness) before installing.
Credentials
ok · No environment variables, credentials, or config paths are requested. The script does not attempt to read secrets or other system config. This is proportionate to a scraper that targets public pages.
Persistence & Privilege
ok · 'always' is false, the skill is user-invocable, and it does not modify agent- or system-wide settings. It only writes its own job_leads.json file in its script directory.