Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

SEO Content Engine

v1.0.0

Research competitors, analyze top-ranking content, and generate a fully SEO-optimized 2000+ word blog post with headings, FAQ, meta description, and internal...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for dreamsarts/openclaw-seo-content-engine.

Prompt Preview: Install & Setup
Install the skill "SEO Content Engine" (dreamsarts/openclaw-seo-content-engine) from ClawHub.
Skill page: https://clawhub.ai/dreamsarts/openclaw-seo-content-engine
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install openclaw-seo-content-engine

ClawHub CLI

Package manager switcher

npx clawhub@latest install openclaw-seo-content-engine
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The code and SKILL.md implement SEO research (SERP scraping, PAA extraction, competitor heading analysis) and generation via Gemini, which matches the skill's stated purpose. However, the skill's registry metadata declares no required env vars or credentials, while both SKILL.md and engine.py require a GEMINI_API_KEY and a running Chrome instance with remote debugging. That registry omission is an inconsistency.
Instruction Scope
Runtime instructions and the script perform web scraping of Google and visit competitor pages (expected for research). But SKILL.md and engine.py point to a specific, hard-coded dotenv file path (/Users/edwin/.openclaw/workspace/dreams-arts/.env). engine.py calls load_dotenv on that path, which will load any environment variables contained there — not just GEMINI_API_KEY — and this is surprising and broad in scope. The script also connects to a local Chrome via CDP (localhost:9222), which exposes the full browser session to the tool; that can include cookies and logged-in sessions beyond what is necessary to fetch SERP results.
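The side effect described above can be illustrated with a minimal stdlib reimplementation of what `load_dotenv` does (the file contents below are hypothetical; python-dotenv by default does not override variables that are already set, which `setdefault` mirrors):

```python
import os
import tempfile

def load_env_file(path):
    """Parse KEY=VALUE lines and inject every entry into os.environ,
    mirroring what load_dotenv does when pointed at a hard-coded path."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())

# A .env file containing more than the one key the script needs:
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("GEMINI_API_KEY=abc123\nSTRIPE_SECRET=sk_live_example\n")
    env_path = f.name

load_env_file(env_path)
# Both variables are now visible to the whole process, not just the one
# the skill documents:
print("STRIPE_SECRET" in os.environ)
```

This is why loading an entire user-specific .env file is broader in scope than reading the single documented credential.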
Install Mechanism
No install spec included; SKILL.md asks for standard Python packages (google-generativeai, playwright) and Chrome with remote debugging. These dependencies are proportionate to scraping + using Gemini. There are no external download URLs or archive extraction steps in the skill bundle.
Credentials
The skill requires GEMINI_API_KEY (used to configure google.generativeai) but the registry metadata does not list any required env vars — a discrepancy. More importantly, engine.py explicitly loads a hard-coded .env file from a specific user path, which could contain other secrets; even though the script only references GEMINI_API_KEY, loading that file has side effects (it populates the process environment) and is disproportionate and surprising. Requiring Chrome CDP access also broadens required privileges (access to browser session state).
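A narrower alternative, as a sketch: read only the one documented credential from the process environment rather than loading an entire .env file (the error message and function name here are illustrative, not the engine's actual code):

```python
import os

def get_gemini_key():
    """Read only the one documented credential from the environment,
    instead of loading an entire .env file into the process."""
    key = os.environ.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError(
            "GEMINI_API_KEY is not set; export it explicitly rather than "
            "relying on a hard-coded .env path."
        )
    return key

os.environ["GEMINI_API_KEY"] = "example-key"  # stand-in for a real key
print(get_gemini_key())
```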
Persistence & Privilege
The skill does not request persistent installation flags (always: false) and does not appear to modify other skills or system-wide configurations. It runs on-demand and needs no special platform privileges beyond network and local browser access.
What to consider before installing
This skill mostly does what its description says (scrape competitors and call Gemini to generate copy), but there are three things to consider before installing or running it:

1. GEMINI_API_KEY requirement: The code requires a Gemini API key, but the registry metadata omitted any required env vars. Confirm where you should provide the API key and avoid placing other secrets in the same .env file. Prefer passing only the GEMINI_API_KEY via a secure, explicit mechanism rather than relying on a hard-coded file path.
2. Hard-coded .env path: engine.py loads /Users/edwin/.openclaw/workspace/dreams-arts/.env. That is a user-specific path and will pull any variables from that file into the process environment. Either change the code to accept a configurable path or ensure that file contains no secrets you don't want the script to access.
3. Local Chrome CDP exposure: The script connects to Chrome on localhost:9222 to reuse an active Google session for scraping. This gives the script access to your browser context (open tabs, cookies, session state). Only run this in an environment where you consent to that access, ideally in a disposable or isolated profile/browser instance with no sensitive accounts logged in.

Additional suggestions: review the rest of the script (generation calls truncated in the provided file) to confirm it does not transmit scraped content or cookies to any unexpected remote endpoints beyond the Gemini API. If you need lower risk, run the research step separately in a controlled environment (or use --skip-research) and keep the generation step limited to the minimal required inputs (keyword and a dedicated API key). If the author updates the skill to remove the hard-coded .env path and declare GEMINI_API_KEY in the metadata, these concerns would be reduced.
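One way to get the isolated browser recommended above, sketched as a command builder (the binary name `google-chrome` is an assumption; substitute `chromium` or the macOS app path as appropriate):

```python
import tempfile

def isolated_chrome_command(port=9222):
    """Build a Chrome launch command that uses a throwaway profile, so
    the CDP session exposes no cookies or logged-in accounts from your
    everyday browser."""
    profile_dir = tempfile.mkdtemp(prefix="seo-engine-profile-")
    return [
        "google-chrome",  # assumption: adjust for your platform
        f"--remote-debugging-port={port}",
        f"--user-data-dir={profile_dir}",
        "--no-first-run",
    ]

cmd = isolated_chrome_command()
print(" ".join(cmd))
```

Launching Chrome with a fresh `--user-data-dir` means the skill's scraper sees an empty profile rather than your live sessions.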

Like a lobster shell, security has layers — review code before you run it.

latest: vk977km9h704we4jpcnfr1gr86584hasn
114 downloads
0 stars
1 version
Updated 2w ago
v1.0.0
MIT-0

SEO Content Engine

Purpose

End-to-end SEO content generation. Takes a target keyword, researches the top competing articles via web search, analyzes their structure and topics, then generates a fully optimized 2000+ word blog post in Markdown — ready to publish.

Requirements

  • Python 3.10+
  • google-generativeai package (pip install google-generativeai)
  • Playwright (pip install playwright) for competitor research
  • Chrome running with remote debugging on port 9222
  • GEMINI_API_KEY in /Users/edwin/.openclaw/workspace/dreams-arts/.env

Usage

From Command Line

python engine.py "best custom printing services near me"

Optional Flags

--tone "professional, authoritative"   # Writing style (default: "informative, engaging")
--word-count 3000                      # Target word count (default: 2000)
--brand "Dream's Arts Evolution"       # Brand to weave in naturally
--location "Caguas, PR"               # Local SEO focus
--output article.md                    # Save to file (default: stdout)
--skip-research                        # Skip web scraping, use keyword only
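The flags above could be parsed roughly like this (a sketch with defaults taken from the flag descriptions; engine.py's actual parser may differ):

```python
import argparse

def build_parser():
    """Mirror the documented CLI flags. This is a sketch, not the
    engine's actual argument parser."""
    p = argparse.ArgumentParser(description="SEO Content Engine")
    p.add_argument("keyword", help="Target keyword to optimize for")
    p.add_argument("--tone", default="informative, engaging")
    p.add_argument("--word-count", type=int, default=2000)
    p.add_argument("--brand", default=None)
    p.add_argument("--location", default=None)
    p.add_argument("--output", default=None, help="File path; stdout if omitted")
    p.add_argument("--skip-research", action="store_true")
    return p

args = build_parser().parse_args(
    ["best custom printing services near me", "--word-count", "3000"]
)
print(args.word_count)  # → 3000
```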

From Python

import asyncio

from engine import SEOContentEngine

async def main():
    engine = SEOContentEngine()
    article = await engine.generate(
        keyword="custom t-shirt printing Puerto Rico",
        tone="professional",
        brand="Dream's Arts Evolution",
        location="Caguas, PR",
        word_count=2500,
    )
    print(article)

asyncio.run(main())

Output Format

The script outputs a complete Markdown article with YAML frontmatter:

---
title: "Custom T-Shirt Printing in Puerto Rico: The Ultimate 2026 Guide"
meta_description: "Looking for custom t-shirt printing in Puerto Rico? Compare top services, prices, and turnaround times. Free quotes from local shops in Caguas, San Juan & more."
target_keyword: "custom t-shirt printing Puerto Rico"
secondary_keywords: ["screen printing PR", "custom apparel Caguas"]
word_count: 2450
reading_time: "10 min"
internal_links_suggested:
  - anchor: "our custom printing services"
    target: "/services/custom-printing"
  - anchor: "request a free quote"
    target: "/contact"
---

# Custom T-Shirt Printing in Puerto Rico: The Ultimate 2026 Guide

## Introduction
...

## What to Look for in a Custom Printing Service
### Quality of Materials
...
### Turnaround Time
...

## Top Custom Printing Methods Compared
...

## Frequently Asked Questions

### How much does custom t-shirt printing cost in Puerto Rico?
...

### What is the minimum order for custom printing?
...

How Claude Should Use This Skill

  1. Identify the keyword: Extract the target keyword or topic from the user's request.
  2. Run research phase: Execute python engine.py "keyword" — this scrapes Google results and analyzes competitors.
  3. Review the output: Check that the article is coherent, accurate, and properly optimized.
  4. Customize if needed: Add brand-specific details, local references, or adjust tone.
  5. Publish: Copy to CMS, blog platform, or save as file.

SEO Optimization Checklist (Built-in)

The engine automatically ensures:

  • Target keyword in title (H1), first paragraph, and 2-3 H2 headings
  • Keyword density between 1-2% (natural, not stuffed)
  • LSI (Latent Semantic Indexing) keywords woven throughout
  • Meta description under 160 characters with keyword and CTA
  • H2/H3 heading hierarchy (no skipping levels)
  • FAQ section based on "People Also Ask" data
  • Internal linking suggestions with anchor text
  • Readability: short paragraphs (2-4 sentences), bullet lists, bold key phrases
  • Word count 2000+ (configurable)
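The keyword-density target in the checklist can be checked with a simple counter (a sketch; the counting method is an assumption, not necessarily how the engine measures it):

```python
import re

def keyword_density(text, keyword):
    """Rough keyword-density estimate: words contributed by keyword-phrase
    occurrences divided by total word count. The 1-2% target comes from
    the checklist above."""
    words = re.findall(r"\b\w+\b", text.lower())
    if not words:
        return 0.0
    hits = text.lower().count(keyword.lower())
    return hits * len(keyword.split()) / len(words)

sample = ("Custom printing is popular. Our custom printing shop "
          "offers custom printing daily.")
print(round(keyword_density(sample, "custom printing"), 3))  # → 0.5
```

On a real 2000-word article, a value between 0.01 and 0.02 would match the stated 1-2% range.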

Competitor Research Phase

When --skip-research is NOT set, the engine:

  1. Searches Google for the target keyword
  2. Extracts the top 10 organic results (titles, URLs, snippets)
  3. Visits the top 5 articles and extracts their heading structure
  4. Identifies content gaps and unique angles
  5. Feeds all this context to Gemini for article generation
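Step 3 above (extracting each competitor's heading structure) can be sketched with the stdlib HTML parser; this is an illustration of the technique, not the engine's actual scraper:

```python
from html.parser import HTMLParser

class HeadingExtractor(HTMLParser):
    """Collect (level, text) pairs for h1-h3 tags, the kind of outline
    the research phase records for each competitor page."""
    def __init__(self):
        super().__init__()
        self.headings = []
        self._current = None

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self._current = tag

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

    def handle_data(self, data):
        if self._current and data.strip():
            self.headings.append((self._current, data.strip()))

parser = HeadingExtractor()
parser.feed("<h1>Guide</h1><p>intro</p><h2>Methods</h2><h3>Screen Printing</h3>")
print(parser.headings)  # → [('h1', 'Guide'), ('h2', 'Methods'), ('h3', 'Screen Printing')]
```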

Notes

  • The research phase requires Chrome running on port 9222 with an active Google session.
  • If research fails (blocked, timeout), falls back to keyword-only generation.
  • Articles are generated in English by default; add --language es for Spanish.
  • Never plagiarizes — all content is original, informed by competitor analysis.
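The fallback behavior in the second note could look roughly like this (a sketch with stand-in functions; the engine's actual error handling is not shown in the bundle):

```python
def generate_with_fallback(keyword, research_fn, generate_fn):
    """Try the scraping phase; on any failure (blocked, timeout), fall
    back to keyword-only generation as the notes above describe."""
    try:
        context = research_fn(keyword)
    except Exception:
        context = None  # research blocked or timed out
    return generate_fn(keyword, context)

def failing_research(keyword):
    raise TimeoutError("Google blocked the scrape")

def stub_generate(keyword, context):
    mode = "research-backed" if context else "keyword-only"
    return f"{mode} article for {keyword!r}"

print(generate_with_fallback("custom printing", failing_research, stub_generate))
# → keyword-only article for 'custom printing'
```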
