Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

SEO Content Engine

v1.0.0

Research competitors, analyze top-ranking content, and generate a fully SEO-optimized 2000+ word blog post with headings, FAQ, meta description, and internal...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The code and SKILL.md implement SEO research (SERP scraping, PAA extraction, competitor heading analysis) and generation via Gemini, which matches the skill's stated purpose. However, the skill's registry metadata declares no required env vars or credentials, while both SKILL.md and engine.py require a GEMINI_API_KEY and a running Chrome with remote debugging. That registry omission is an inconsistency.
Instruction Scope
Runtime instructions and the script perform web scraping of Google and visit competitor pages (expected for research). But SKILL.md and engine.py point to a specific, hard-coded dotenv file path (/Users/edwin/.openclaw/workspace/dreams-arts/.env). engine.py calls load_dotenv on that path, which will load any environment variables contained there — not just GEMINI_API_KEY — and this is surprising and broad in scope. The script also connects to a local Chrome via CDP (localhost:9222), which exposes the full browser session to the tool; that can include cookies and logged-in sessions beyond what is necessary to fetch SERP results.
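To see why loading an entire .env file is broader than it looks, here is a minimal stand-in for what a load_dotenv call on that path does; the file contents and the UNRELATED_SECRET variable are hypothetical, chosen only to illustrate the side effect.

```python
import os
import tempfile

# Hypothetical .env contents: the key the script needs plus an
# unrelated secret that happens to live in the same file.
env_text = "GEMINI_API_KEY=abc123\nUNRELATED_SECRET=s3cret\n"
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write(env_text)
    path = f.name

# Minimal stand-in for dotenv.load_dotenv(path): every variable in
# the file lands in the process environment, not just the one the
# script references.
with open(path) as f:
    for line in f:
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
os.remove(path)

print("UNRELATED_SECRET" in os.environ)  # True -- the extra secret leaked in
```

This is the sense in which the hard-coded load_dotenv call is "surprising and broad": the script never mentions UNRELATED_SECRET, yet after the call it is readable by any code running in the process.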
Install Mechanism
No install spec included; SKILL.md asks for standard Python packages (google-generativeai, playwright) and Chrome with remote debugging. These dependencies are proportionate to scraping + using Gemini. There are no external download URLs or archive extraction steps in the skill bundle.
Credentials
The skill requires GEMINI_API_KEY (used to configure google.generativeai) but the registry metadata does not list any required env vars — a discrepancy. More importantly, engine.py explicitly loads a hard-coded .env file from a specific user path, which could contain other secrets; even though the script only references GEMINI_API_KEY, loading that file has side effects (it populates the process environment) and is disproportionate and surprising. Requiring Chrome CDP access also broadens required privileges (access to browser session state).
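A narrower alternative, sketched here under the assumption that the caller exports the key themselves, is to read only the one credential from the process environment and fail fast if it is absent, instead of sourcing a whole user-specific .env file. The helper name is illustrative, not part of the skill.

```python
import os

def get_gemini_key() -> str:
    """Fetch the single credential this skill needs, and nothing more."""
    key = os.environ.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError(
            "GEMINI_API_KEY is not set; export it explicitly instead of "
            "relying on a hard-coded .env path"
        )
    return key
```

With this shape, no other variables enter the process environment as a side effect, and a missing key produces an immediate, descriptive error rather than a failure deep inside the Gemini call.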
Persistence & Privilege
The skill does not request persistent installation flags (always: false) and does not appear to modify other skills or system-wide configurations. It runs on-demand and needs no special platform privileges beyond network and local browser access.
What to consider before installing
This skill mostly does what its description says (scrape competitors and call Gemini to generate copy), but there are three things to consider before installing or running it:

1) GEMINI_API_KEY requirement. The code requires a Gemini API key, but the registry metadata omits any required env vars. Confirm where you should provide the API key and avoid placing other secrets in the same .env file. Prefer passing only GEMINI_API_KEY via a secure, explicit mechanism rather than relying on a hard-coded file path.

2) Hard-coded .env path. engine.py loads /Users/edwin/.openclaw/workspace/dreams-arts/.env. That is a user-specific path and will pull any variables from that file into the process environment. Either change the code to accept a configurable path or ensure that file contains no secrets you don't want the script to access.

3) Local Chrome CDP exposure. The script connects to Chrome on localhost:9222 to reuse an active Google session for scraping. This gives the script access to your browser context (open tabs, cookies, session state). Only run this in an environment where you consent to that access, ideally a disposable or isolated profile/browser instance with no sensitive accounts logged in.

Additional suggestions: review the rest of the script (the generation calls are truncated in the provided file) to confirm it does not transmit scraped content or cookies to any unexpected remote endpoints beyond the Gemini API. If you need lower risk, run the research step separately in a controlled environment (or use skip-research) and keep the generation step limited to supplying only the minimal required inputs (keyword and a dedicated API key). If the author can update the skill to remove the hard-coded .env path and declare GEMINI_API_KEY in metadata, my concerns would be reduced.
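One way to follow the isolation advice above is to point the skill at a throwaway Chrome profile rather than your daily browser. The sketch below only builds the launch command; the "google-chrome" binary name is an assumption (it differs per platform), and the actual launch is left commented out.

```python
import tempfile

# Build the command for a disposable Chrome instance so the skill's CDP
# connection never touches your main profile's cookies or sessions.
# "google-chrome" is an assumed binary name; adjust for your platform
# (e.g. "chromium" on some Linux distros).
profile_dir = tempfile.mkdtemp(prefix="seo-skill-profile-")
chrome_cmd = [
    "google-chrome",
    f"--user-data-dir={profile_dir}",   # fresh, isolated profile
    "--remote-debugging-port=9222",     # the port engine.py expects
    "--no-first-run",
]
# import subprocess; subprocess.Popen(chrome_cmd)  # launch when ready
```

Because the profile directory is empty, anything the script can reach over CDP is limited to what you deliberately log into during that session.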


latest: vk977km9h704we4jpcnfr1gr86584hasn

