Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

SEO Audit Bot

v1.0.0

Perform a comprehensive SEO audit of any website. Analyzes technical SEO, on-page factors, content quality, performance, and generates an actionable report w...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for eyensama/seo-audit-bot.

Prompt Preview: Install & Setup
Install the skill "SEO Audit Bot" (eyensama/seo-audit-bot) from ClawHub.
Skill page: https://clawhub.ai/eyensama/seo-audit-bot
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install seo-audit-bot

ClawHub CLI


npx clawhub@latest install seo-audit-bot
Security Scan
VirusTotal: Suspicious
OpenClaw: Benign (high confidence)
Purpose & Capability
The skill's name, description, SKILL.md, README, and scripts all align with an SEO auditing purpose. However, the package metadata claims no required binaries, while the README and scripts clearly rely on web_fetch/exec and on-system tools (curl, grep, sed, wc). This is a minor inconsistency: the tool genuinely needs HTTP fetching and basic Unix text utilities, but they are not declared in the registry metadata.
Instruction Scope
SKILL.md instructs fetching the target URL, robots.txt, sitemap and analyzing HTML — all appropriate for SEO. The included script performs these fetches and writes temporary files to /tmp. A security-relevant behavior: the skill will fetch arbitrary URLs supplied by the user (including intranet/private IPs), which is expected for this purpose but introduces SSRF-like risks if run in an environment with access to internal services. The instructions do not attempt to read unrelated local files or exfiltrate data to external endpoints.
Install Mechanism
There is no install spec (instruction-only with an included helper script). Nothing downloads or extracts remote archives; the code consists of plain files and a shell script. No high-risk install mechanisms are present.
Credentials
The skill declares no environment variables or credentials and does not request broad secrets. The runtime behavior uses network fetches only, which is proportionate to the stated purpose. The only resource access is writing temporary files under /tmp for analysis (normal for a shell helper).
Persistence & Privilege
The skill is not marked always:true and does not request permanent agent-wide privileges. It does not modify other skills or system-wide configurations. Autonomous invocation is allowed by default but not combined with other concerning factors.
Assessment
This skill appears to do what it says: fetch a URL and analyze its HTML for SEO signals. Before installing or running it, note:

1. The README and script expect HTTP-fetch capability (web_fetch or curl) and typical Unix text tools (grep, sed, wc), even though the registry metadata doesn't list them; make sure your runtime provides them.
2. The skill will fetch any user-supplied URL, so in environments that can reach internal network services it can be used to probe intranet endpoints (SSRF risk); restrict allowed domains or run it in an isolated environment if that matters.
3. Review the included script (scripts/audit.sh) before execution. It writes temporary files to /tmp and uses standard command-line parsing with no obfuscation, which appears benign.

If you need stronger guarantees, run the skill in a sandboxed agent, or review/modify the script to enforce allowed hostnames.

Like a lobster shell, security has layers — review code before you run it.

latest: vk978eg3ymg16z1wy9rzft0sbb983r16g
97 downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

SEO Audit Bot

A comprehensive SEO auditing skill that analyzes any website and produces a detailed, actionable report.

What It Does

When a user provides a URL, this skill performs a full SEO audit covering:

  1. Technical SEO — robots.txt, sitemap, HTTPS, mobile-friendliness, page speed signals
  2. On-Page SEO — title tags, meta descriptions, headings, URL structure, internal linking
  3. Content Analysis — word count, keyword density, readability, duplicate content signals
  4. Performance — page load indicators, Core Web Vitals signals
  5. Social & Sharing — Open Graph tags, Twitter Cards, structured data
  6. Competitor Comparison (optional) — compare against a competitor URL

How to Use

Basic Audit

User says: "Audit the SEO of https://example.com"

Agent should:

  1. Fetch the target URL using web_fetch
  2. Fetch the robots.txt (/robots.txt)
  3. Fetch the sitemap (/sitemap.xml or from robots.txt)
  4. Analyze the HTML content for all SEO factors
  5. Generate a structured report
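
For runtimes without web_fetch, the fetch phase of this flow can be approximated directly in shell. The sketch below is illustrative only and is not the skill's bundled scripts/audit.sh; it assumes curl is available and uses /tmp paths purely as examples.

```bash
#!/usr/bin/env bash
# Sketch of the fetch phase only; Steps 2-7 below analyze the saved files.
set -euo pipefail
url="${1:?usage: seo-fetch.sh <url>}"
host="${url#*://}"; host="${host%%/*}"

curl -sL "$url"                      -o /tmp/page.html
curl -sL "https://$host/robots.txt"  -o /tmp/robots.txt  || true
curl -sL "https://$host/sitemap.xml" -o /tmp/sitemap.xml || true

echo "Fetched $url (plus robots.txt and sitemap.xml) into /tmp for analysis."
```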

Competitor Comparison

User says: "Compare SEO of https://example.com vs https://competitor.com"

Agent should:

  1. Audit both URLs
  2. Generate a side-by-side comparison
  3. Highlight advantages and gaps
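
A minimal sketch of the comparison flow, assuming a hypothetical run_audit command that performs Steps 1-7 for a single URL (not part of the skill; shown only to illustrate the loop):

```bash
# Audit both sites, then compare the two generated reports side by side.
for site in "https://example.com" "https://competitor.com"; do
  run_audit "$site" > "/tmp/report-${site#*://}.md"   # run_audit is hypothetical
done
diff -y /tmp/report-example.com.md /tmp/report-competitor.com.md
```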

Audit Process

Step 1: Fetch the Page

web_fetch(url="<target_url>", maxChars=50000, extractMode="text")
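
If web_fetch isn't available in the runtime, a curl equivalent would be (an assumption, since the registry metadata doesn't declare curl):

```bash
# Fetch the page as delivered; -L follows redirects, -s keeps output quiet.
curl -sL "https://example.com" -o /tmp/page.html
```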

Step 2: Check Technical Signals

  • Fetch robots.txt → check if exists, what's blocked
  • Fetch sitemap.xml → check if exists, last modified
  • Check HTTPS redirect
  • Check canonical tag presence
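
A hedged shell sketch of these checks, assuming curl and grep are available, the page from Step 1 was saved to /tmp/page.html, and https://example.com stands in for the target:

```bash
url="https://example.com"; host="${url#*://}"; host="${host%%/*}"

# robots.txt / sitemap.xml: report the HTTP status code for each
curl -s -o /tmp/robots.txt  -w "robots.txt: %{http_code}\n"  "https://$host/robots.txt"
curl -s -o /tmp/sitemap.xml -w "sitemap.xml: %{http_code}\n" "https://$host/sitemap.xml"

# HTTPS redirect: does plain HTTP redirect to an https:// location?
curl -sI "http://$host/" | grep -i '^location: https' && echo "HTTPS redirect: yes"

# Canonical tag presence (simple line-based heuristic)
grep -qi 'rel="canonical"' /tmp/page.html && echo "canonical: found" || echo "canonical: missing"
```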

Step 3: Analyze On-Page Elements

Extract and evaluate:

  • <title> — length (50-60 chars ideal), keyword presence
  • <meta name="description"> — length (150-160 chars ideal), keyword presence
  • <h1> — single h1, contains primary keyword
  • <h2> through <h6> — proper hierarchy
  • URL structure — short, descriptive, keyword-rich
  • Image alt tags — descriptive, not keyword-stuffed
  • Internal links — count, quality, anchor text
  • External links — count, quality, nofollow usage
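
These extractions can be approximated with grep/sed heuristics over the saved HTML. This is a sketch only; it assumes the page is at /tmp/page.html and that each relevant tag sits on a single line:

```bash
html=/tmp/page.html

title=$(grep -o '<title>[^<]*</title>' "$html" | sed 's/<[^>]*>//g')
echo "title (${#title} chars): $title"

desc=$(grep -o '<meta name="description" content="[^"]*"' "$html" | sed 's/.*content="//; s/"$//')
echo "meta description (${#desc} chars): $desc"

echo "h1 tags:            $(grep -o '<h1[ >]' "$html" | wc -l)"
echo "images missing alt: $(grep -o '<img [^>]*>' "$html" | grep -vc 'alt=')"
echo "internal links:     $(grep -o 'href="/[^"]*"' "$html" | wc -l)"
echo "nofollow links:     $(grep -c 'rel="nofollow"' "$html")"
```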

Step 4: Content Analysis

  • Word count (minimum 300 for pages, 1000+ for blog posts)
  • Keyword density (1-3% for primary keyword)
  • Heading structure (logical hierarchy)
  • Readability (sentence length, paragraph size)
  • Duplicate content risk
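
A naive word-count and keyword-density calculation with wc, grep, and awk; it assumes the page's extracted text is at /tmp/page.txt and uses "example" as a stand-in for the primary keyword:

```bash
text=/tmp/page.txt     # extracted text of the page (e.g. from web_fetch extractMode="text")
keyword="example"      # hypothetical primary keyword

words=$(wc -w < "$text")
hits=$(grep -oiw "$keyword" "$text" | wc -l)
awk -v h="$hits" -v w="$words" -v k="$keyword" \
  'BEGIN { if (w > 0) printf "words: %d, \"%s\": %d hits, density: %.1f%%\n", w, k, h, 100*h/w }'
```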

Step 5: Performance Indicators

  • Check for <meta name="viewport"> (mobile-friendly)
  • Check for lazy loading on images
  • Check for minified CSS/JS references
  • Check for CDN usage
  • Check for excessive inline styles
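
Each of these indicators can be checked with a simple pattern match against the raw HTML. The patterns below are heuristics, not real performance measurements; /tmp/page.html is assumed:

```bash
html=/tmp/page.html
grep -qi 'name="viewport"' "$html"  && echo "viewport meta: yes"     || echo "viewport meta: no"
grep -qi 'loading="lazy"'  "$html"  && echo "lazy loading: detected" || echo "lazy loading: none"
grep -Eqi '\.min\.(css|js)' "$html" && echo "minified assets: yes"   || echo "minified assets: no"
grep -Eqi 'cdn\.|cloudfront|cloudflare|fastly|akamai' "$html" \
  && echo "CDN: likely" || echo "CDN: not detected"
echo "inline style attributes: $(grep -o 'style="' "$html" | wc -l)"
```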

Step 6: Social & Schema

  • Open Graph tags (og:title, og:description, og:image)
  • Twitter Card tags
  • JSON-LD structured data
  • Schema.org markup
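
Presence checks for these tags follow the same pattern; a sketch over the saved HTML (/tmp/page.html assumed):

```bash
html=/tmp/page.html
for marker in 'property="og:title"' 'property="og:description"' 'property="og:image"' \
              'name="twitter:card"' 'application/ld+json' 'itemtype="https://schema.org'; do
  grep -qi "$marker" "$html" && echo "found:   $marker" || echo "missing: $marker"
done
```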

Step 7: Generate Report

Report Format

# SEO Audit Report: [URL]
Date: [date]

## Overall Score: XX/100

### 🔧 Technical SEO: XX/100
- ✅ HTTPS enabled
- ✅ robots.txt found
- ❌ No sitemap.xml found
- ✅ Mobile viewport configured
- ⚠️ Missing canonical tag

Recommendations:
1. Create and submit a sitemap.xml
2. Add canonical tags to prevent duplicate content

### 📄 On-Page SEO: XX/100
- ✅ Title tag (52 chars) — Good
- ⚠️ Meta description (180 chars) — Too long, aim for 150-160
- ❌ No H1 tag found
- ✅ URL structure is clean
- ⚠️ 3 images missing alt tags

Recommendations:
1. Add a clear H1 tag with primary keyword
2. Shorten meta description to 150-160 characters
3. Add alt tags to all images

### 📝 Content: XX/100
- Word count: 450 words
- Primary keyword density: 1.2%
- Heading structure: Proper H2/H3 hierarchy
- Readability: Good (avg 15 words/sentence)

Recommendations:
1. Expand content to 800+ words for better ranking potential

### ⚡ Performance: XX/100
- Viewport meta: ✅
- Lazy loading: ⚠️ Partial
- Minified assets: ✅
- CDN: ❌ Not detected

### 📱 Social & Schema: XX/100
- Open Graph: ✅ Complete
- Twitter Cards: ⚠️ Missing
- JSON-LD: ❌ Not found

Recommendations:
1. Add Twitter Card meta tags
2. Implement JSON-LD structured data for rich snippets

## 🎯 Priority Actions (Do These First)
1. [HIGH] Add H1 tag with primary keyword
2. [HIGH] Create sitemap.xml
3. [MEDIUM] Implement JSON-LD structured data
4. [LOW] Add Twitter Card tags

Scoring Rubric

Each section is scored 0-100:

Technical SEO (25% weight)

  • HTTPS: 15 points
  • robots.txt: 10 points
  • sitemap.xml: 15 points
  • Mobile viewport: 15 points
  • Canonical tag: 10 points
  • Clean URL structure: 10 points
  • Page speed indicators: 15 points
  • No broken links: 10 points

On-Page SEO (30% weight)

  • Title tag (exists, length, keyword): 20 points
  • Meta description (exists, length, keyword): 20 points
  • H1 tag (exists, unique, keyword): 20 points
  • Heading hierarchy: 10 points
  • Image alt tags: 15 points
  • Internal linking: 15 points

Content (25% weight)

  • Word count: 25 points
  • Keyword presence & density: 25 points
  • Readability: 25 points
  • Content structure: 25 points

Performance (10% weight)

  • Mobile-friendly: 30 points
  • Asset optimization: 30 points
  • Loading indicators: 40 points

Social & Schema (10% weight)

  • Open Graph: 40 points
  • Twitter Cards: 30 points
  • Structured data: 30 points

Overall Score = weighted average of all sections.
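
For example, with hypothetical section scores of 72 (Technical), 65 (On-Page), 81 (Content), 60 (Performance), and 40 (Social & Schema), the weighted overall score works out as follows:

```bash
# 0.25*72 + 0.30*65 + 0.25*81 + 0.10*60 + 0.10*40 = 67.75, reported as 68/100
awk 'BEGIN { printf "Overall Score: %.0f/100\n", 0.25*72 + 0.30*65 + 0.25*81 + 0.10*60 + 0.10*40 }'
```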

Tips for the Agent

  1. Be specific — don't just say "improve SEO", say exactly what to change
  2. Prioritize — label recommendations as HIGH/MEDIUM/LOW
  3. Show before/after — when suggesting changes, show the current state and the ideal state
  4. Be honest about limitations — you can't check page speed directly, only indicators
  5. Offer follow-up — suggest re-audit after changes are made
