Skill v1.0.0
ClawScan security
Cold Email Generator · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Benign · Mar 5, 2026, 8:57 PM
- Verdict: benign
- Confidence: high
- Model: gpt-5-mini
- Summary: The skill's code and instructions are consistent with its stated purpose (scrape sites, run a local Ollama model, and generate and save cold emails). There are no unexpected external endpoints, credential requests, or install steps, but review the local scrape script and output directory before use.
- Guidance: This skill appears to do what it says: it scrapes websites, calls your local Ollama model, and writes emails to ~/StudioBrain/30_INTERNAL/WLC-Services/OUTREACH. Before installing or running:
  1. Confirm that ollama and the llama3.2 model are installed locally and that you accept any license or usage implications.
  2. Inspect the scrape script at /Users/wlc-studio/StudioBrain/00_SYSTEM/skills/scrapling/scrape.py to ensure it is trusted; it will be executed for each URL.
  3. Check MASTER_LEAD_LIST.md for any sensitive data you don't want processed.
  4. Be aware that generated emails include scraped site text; ensure compliance with the privacy policies and terms of the target sites and with your own outreach policies.
  5. Note that the script writes files in your home directory and uses a hardcoded path for the scrape script, so adapt paths if your environment differs.
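The first two guidance steps can be automated with a small pre-flight check run before the skill itself. This is a sketch, not part of the skill; the scrape-script path mirrors the one reported by the scan, and everything else is assumed.

```python
import shutil
from pathlib import Path

# Path reported by the scan; adjust if your environment differs.
SCRAPE_SCRIPT = Path("/Users/wlc-studio/StudioBrain/00_SYSTEM/skills/scrapling/scrape.py")

def preflight(scrape_script: Path = SCRAPE_SCRIPT) -> list[str]:
    """Return a list of problems to resolve before running the skill."""
    problems = []
    if shutil.which("ollama") is None:
        problems.append("ollama binary not found on PATH")
    if not scrape_script.is_file():
        problems.append(f"scrape script missing: {scrape_script}")
    # A further check could parse `ollama list` output for the llama3.2 model.
    return problems
```

An empty return list means both local dependencies were found; anything else should be resolved before the skill is run.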
Review Dimensions
- Purpose & Capability
- ok — Name and description match the implementation: generator.py scrapes business websites, sends the scraped text to a local Ollama model (llama3.2), and produces short cold emails. The dependencies declared in SKILL.md (a local scrape script and Ollama) are the resources the code actually calls. The only minor mismatch is the hardcoded absolute path for the scrape script (SCRAPE_SCRIPT = "/Users/wlc-studio/StudioBrain/00_SYSTEM/skills/scrapling/scrape.py") versus the tilde path used in SKILL.md; this is brittle but not malicious.
- Instruction Scope
- note — SKILL.md and the script instruct scraping target websites, reading a local leads file (MASTER_LEAD_LIST.md), and saving outputs to ~/StudioBrain/30_INTERNAL/..., all of which the code does. These actions are within the stated scope, but they entail scraping arbitrary external sites and reading and writing files in your home directory, so verify that the leads file and the scrape script are trusted before running.
- Install Mechanism
- ok — There is no install spec; this skill is instructions plus a script only. The script uses subprocess to call an external local script and the local Ollama binary. The skill itself performs no network downloads or archive extraction. The highest risk is its dependence on locally present binaries and scripts (scrape.py and ollama).
- Credentials
- ok — The skill requests no environment variables or credentials. It reads a local leads file and writes to a local OUTREACH directory in the user's home, which is consistent with an outreach tool. There are no hidden credential accesses. Confirm that the hardcoded SCRAPE_SCRIPT path points to an expected, trusted script on your system.
- Persistence & Privilege
- ok — The skill is not always-enabled and does not request persistent elevated privileges. It creates and writes files under the user's ~/StudioBrain directory (its own data), which is normal for this kind of tool. Autonomous model invocation is allowed (the default) but does not by itself indicate risk.
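The behavior described across the dimensions above (run a local scrape script once per URL, pipe the text to a local Ollama model, write results under the user's home directory) can be sketched as follows. This is a hedged reconstruction, not the skill's actual generator.py: the scrape script's command-line shape, the prompt wording, and the output filename scheme are all assumptions; only the paths and the model name come from the scan report.

```python
import subprocess
from pathlib import Path

# Paths and model name taken from the scan report; everything else is assumed.
SCRAPE_SCRIPT = Path("/Users/wlc-studio/StudioBrain/00_SYSTEM/skills/scrapling/scrape.py")
OUT_DIR = Path("~/StudioBrain/30_INTERNAL/WLC-Services/OUTREACH").expanduser()
MODEL = "llama3.2"

def scrape(url: str, script: Path = SCRAPE_SCRIPT) -> str:
    # The report says scrape.py is executed for each URL; its argument
    # shape is an assumption.
    result = subprocess.run(
        ["python3", str(script), url],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def draft_email(site_text: str, model: str = MODEL,
                runner: tuple[str, ...] = ("ollama", "run")) -> str:
    # `ollama run MODEL` accepts a prompt on stdin; the prompt wording
    # in generator.py is not shown in the report, so this is a stand-in.
    prompt = f"Write a short cold email based on this site text:\n{site_text}"
    result = subprocess.run(
        [*runner, model], input=prompt,
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

def save_email(slug: str, body: str, out_dir: Path = OUT_DIR) -> Path:
    # The skill creates its own directory under the user's home; the
    # one-markdown-file-per-lead naming here is hypothetical.
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / f"{slug}.md"
    path.write_text(body)
    return path
```

Keeping the three steps as separate functions makes the reviewer's concerns auditable in isolation: which script runs, what leaves the machine (nothing; Ollama is local), and exactly which directory is written to.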
