PLS SEO Audit
Scan websites and content to identify SEO gaps, analyze meta tags, technical factors, keyword use, and provide competitor comparison insights.
by Matt Valenta (@mattvalenta)
Security Scan
OpenClaw verdict: Suspicious (medium confidence)
Purpose & Capability
The skill's name/description (SEO audit) aligns with the actions in SKILL.md: fetching pages, checking meta tags, running PageSpeed/Lighthouse, SSL checks, and content analysis. However, the instructions expect tools and libraries (curl, openssl, xmllint, npx/lighthouse, Python packages like requests, bs4, textstat) and API credentials (Google API key / Bearer token placeholders) while the registry metadata declares no required binaries, env vars, or dependencies. That mismatch is unexpected and reduces coherence.
Instruction Scope
SKILL.md tells the agent to make network requests to target sites and to Google APIs, run local CLI tools (openssl, xmllint, npx lighthouse), and execute Python snippets that use third-party libraries. It includes placeholders for YOUR_API_KEY and YOUR_TOKEN but gives no guidance on where those should come from. While the listed actions are appropriate for an SEO audit, the instructions rely on running external commands and executing code without describing how dependencies or credentials are provided — this could lead the agent to run unfamiliar tooling (e.g., npx) or attempt to fetch/install packages at runtime.
Install Mechanism
There is no install spec (instruction-only), which is low risk in itself. But because SKILL.md expects specific CLI tools and Python libraries, the absence of an install section or a declared dependency list is a gap: users/agents may need to install npm packages, system binaries, or pip packages manually. Calling 'npx' runs remote npm packages transiently, which can execute arbitrary code — this should be explicitly declared and justified.
Credentials
The registry metadata declares no required environment variables or primary credential, yet the instructions use placeholders for API keys and an Authorization Bearer token when calling Google APIs. That discrepancy means the skill may attempt to use credentials not listed by the skill metadata, increasing the chance of accidental credential exposure or misconfiguration. Otherwise, the skill does not request unrelated secrets.
Persistence & Privilege
The skill is not marked always: true and uses default invocation settings. It does not request persistent system-wide changes or modify other skills' configs in SKILL.md, so its requested level of presence appears appropriate.
What to consider before installing
This instruction-only SEO skill appears to do what it says, but several practical and security gaps are present. Before installing or using it:
1. Confirm and supply missing dependencies and credentials explicitly: add the required binaries (curl, openssl, xmllint, node/npx) and a pip requirements list (requests, beautifulsoup4, textstat), or document install steps.
2. Provide a secure place for API keys and tokens (do not paste them into chat) and ensure the skill metadata lists any required env vars; see the sketch below.
3. Be cautious about running 'npx lighthouse', because npx fetches and executes packages from npm at runtime; prefer a pinned, audited install or a vendor-provided binary.
4. Verify network behavior: the skill fetches arbitrary websites and calls Google APIs, so make sure you trust the agent/runtime environment's network access and that no sensitive credentials will be sent to untrusted endpoints.
5. Ask the publisher (or update the SKILL.md) to reconcile the declared requirements with the actual instructions (dependencies and credential needs); that would make the skill coherent and safer to run.
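A minimal sketch of recommendation (2), assuming the key is supplied through a GOOGLE_API_KEY environment variable; the variable name is an assumption, since the skill metadata declares none:

import os
import sys

# Hypothetical variable name; SKILL.md only shows a YOUR_API_KEY placeholder.
api_key = os.environ.get("GOOGLE_API_KEY")
if not api_key:
    # Fail fast rather than pasting keys into chat or hard-coding them.
    sys.exit("GOOGLE_API_KEY is not set; export it before running API-backed checks.")

# Append the key to API calls without echoing it into logs or the conversation.
url = f"https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=https://example.com&key={api_key}"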
Current version: v1.0.0
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
SKILL.md
SEO Audit
Comprehensive SEO analysis for content and websites.
Quick Audit Checklist
On-Page SEO
- [ ] Title tag (50-60 chars, keyword near front)
- [ ] Meta description (150-160 chars, compelling)
- [ ] H1 tag (one per page, includes target keyword)
- [ ] H2-H6 hierarchy (logical structure)
- [ ] Image alt text (descriptive, keyword-relevant)
- [ ] Internal links (3-5 per page minimum)
- [ ] URL structure (short, descriptive, hyphens)
- [ ] Canonical tags (prevent duplicate content)
Technical SEO
- [ ] Page speed (<3s load time)
- [ ] Mobile-friendly (responsive design)
- [ ] HTTPS (SSL certificate valid)
- [ ] XML sitemap (submitted to Search Console)
- [ ] Robots.txt (properly configured)
- [ ] Structured data (Schema.org markup)
- [ ] Core Web Vitals (LCP, CLS, and INP, which replaced FID in March 2024)
Content Quality
- [ ] Keyword in first 100 words
- [ ] Content length matches intent
- [ ] No keyword stuffing (<2% density)
- [ ] Readable (Flesch-Kincaid score)
- [ ] Unique value (not duplicate content)
- [ ] Fresh content (updated regularly)
Technical SEO Commands
Page Speed Analysis
# Using PageSpeed Insights API
curl "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=https://example.com&strategy=mobile"
# Using Lighthouse locally (npx downloads and runs the package; pin a version you've audited, e.g. npx lighthouse@<version>)
npx lighthouse https://example.com --view
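If the agent runs the PageSpeed call above, a hedged sketch of pulling the headline score out of the v5 JSON response (field names follow the documented v5 response shape; the URL is illustrative):

import requests

PSI = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
resp = requests.get(PSI, params={"url": "https://example.com", "strategy": "mobile"}, timeout=60)
resp.raise_for_status()
data = resp.json()

# Lighthouse reports category scores on a 0-1 scale.
score = data["lighthouseResult"]["categories"]["performance"]["score"]
print(f"Mobile performance score: {score * 100:.0f}/100")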
Check Robots.txt
curl https://example.com/robots.txt
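Beyond eyeballing the file, Python's standard library can answer the practical question (is a given path crawlable?); a small sketch using urllib.robotparser, with illustrative URLs:

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# True if the named user agent is allowed to fetch the path.
print(rp.can_fetch("Googlebot", "https://example.com/some-page"))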
Check Sitemap
curl https://example.com/sitemap.xml | xmllint --format -
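To go beyond pretty-printing, a sketch that counts sitemap entries with the standard library; it assumes a plain urlset sitemap, not a sitemap index:

import xml.etree.ElementTree as ET
import requests

resp = requests.get("https://example.com/sitemap.xml", timeout=30)
root = ET.fromstring(resp.content)

# <loc> entries live in the standard sitemap namespace.
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
urls = [loc.text for loc in root.findall(".//sm:loc", ns)]
print(f"{len(urls)} URLs listed in sitemap")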
SSL Certificate Check
openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -dates
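The same expiry check can be done without shelling out to openssl; a sketch using Python's ssl module (the host is illustrative):

import socket
import ssl
from datetime import datetime

host = "example.com"
ctx = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

# notAfter is formatted like 'Jun  1 12:00:00 2026 GMT'.
expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
print(f"Certificate for {host} expires in {(expires - datetime.utcnow()).days} days")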
Mobile-Friendly Test
# Google's Mobile-Friendly Test API (Google retired this tool and API in December 2023, so the call below may fail; Lighthouse's mobile audits are the usual replacement)
curl "https://searchconsole.googleapis.com/v1/urlTestingTools/mobileFriendlyTest:run?key=YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"url":"https://example.com"}'
Content Analysis
Keyword Density
import re

def keyword_density(text, keyword):
    words = re.findall(r'\b\w+\b', text.lower())
    if not words:
        return {"keyword": keyword, "count": 0, "total_words": 0, "density": "0.00%"}
    # Note: count() matches substrings, so the keyword inside longer words is also counted.
    keyword_count = text.lower().count(keyword.lower())
    density = (keyword_count / len(words)) * 100
    return {
        "keyword": keyword,
        "count": keyword_count,
        "total_words": len(words),
        "density": f"{density:.2f}%",
    }

# Target: 1-2% density
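A quick usage example (the sample text is illustrative):

print(keyword_density("SEO basics: good SEO starts with search intent.", "seo"))
# {'keyword': 'seo', 'count': 2, 'total_words': 8, 'density': '25.00%'}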
Readability Score
import textstat
text = "Your content here..."
flesch = textstat.flesch_reading_ease(text)
# 90-100: Very Easy
# 60-70: Standard
# 0-30: Very Difficult
grade = textstat.flesch_kincaid_grade(text)
# Target: 8-9 for general audience
Content Structure Analysis
from bs4 import BeautifulSoup
def analyze_headings(html):
    soup = BeautifulSoup(html, 'html.parser')
    headings = {
        'h1': soup.find_all('h1'),
        'h2': soup.find_all('h2'),
        'h3': soup.find_all('h3'),
    }
    issues = []
    if len(headings['h1']) == 0:
        issues.append("Missing H1 tag")
    elif len(headings['h1']) > 1:
        issues.append("Multiple H1 tags (should be one)")
    return {
        "counts": {k: len(v) for k, v in headings.items()},
        "issues": issues,
    }
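For example, the function can be run against a fetched page (the URL is illustrative):

import requests

html = requests.get("https://example.com", timeout=30).text
print(analyze_headings(html))
# output shape: {'counts': {'h1': ..., 'h2': ..., 'h3': ...}, 'issues': [...]}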
Meta Tag Analysis
Extract Meta Tags
from bs4 import BeautifulSoup
import requests
def audit_meta_tags(url):
    response = requests.get(url, timeout=30)
    soup = BeautifulSoup(response.text, 'html.parser')
    title = soup.find('title')
    description = soup.find('meta', attrs={'name': 'description'})
    keywords = soup.find('meta', attrs={'name': 'keywords'})  # largely ignored by modern engines
    issues = []
    if not title or len(title.text) < 30:
        issues.append("Title too short or missing")
    elif len(title.text) > 60:
        issues.append("Title too long (>60 chars)")
    if not description:
        issues.append("Meta description missing")
    elif len(description.get('content', '')) < 120:
        issues.append("Meta description too short")
    elif len(description.get('content', '')) > 160:
        issues.append("Meta description too long")
    return {
        "title": title.text if title else None,
        "description": description.get('content') if description else None,
        "issues": issues,
    }
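Usage is a one-liner (the URL is illustrative):

print(audit_meta_tags("https://example.com"))
# output shape: {'title': ..., 'description': ..., 'issues': [...]}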
Structured Data Check
Validate Schema Markup
# Using Google's Rich Results Test (no public API endpoint for this test appears in Google's documentation; treat this call as unverified)
curl "https://searchconsole.googleapis.com/v1/urlTestingTools/richResultsTest:run" \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{"url":"https://example.com"}'
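Because that endpoint requires a bearer token (and may not exist, per the note above), a token-free fallback is to extract a page's JSON-LD blocks locally and confirm they at least parse; a sketch, not a substitute for Google's validators:

import json
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com", timeout=30).text
soup = BeautifulSoup(html, 'html.parser')

# JSON-LD structured data is embedded in script tags of this type.
for tag in soup.find_all('script', type='application/ld+json'):
    try:
        data = json.loads(tag.string or "")
        # A block may hold a single object or a list of them.
        items = data if isinstance(data, list) else [data]
        for item in items:
            print("Schema type:", item.get("@type"))
    except json.JSONDecodeError:
        print("Invalid JSON-LD block found")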
Common Schema Types
// Article
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Title",
  "author": {"@type": "Person", "name": "Author"},
  "datePublished": "2026-01-01"
}
// Local Business
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Business Name",
  "address": {"@type": "PostalAddress", "streetAddress": "123 Main"},
  "telephone": "+1-555-555-5555"
}
// FAQ
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Question?",
    "acceptedAnswer": {"@type": "Answer", "text": "Answer"}
  }]
}
Competitor Analysis
Compare Page Metrics
import requests
from bs4 import BeautifulSoup
def compare_seo(target_url, competitor_url):
    def get_metrics(url):
        r = requests.get(url, timeout=30)
        soup = BeautifulSoup(r.text, 'html.parser')
        title = soup.find('title')
        return {
            "title_len": len(title.text) if title else 0,
            "h1_count": len(soup.find_all('h1')),
            "h2_count": len(soup.find_all('h2')),
            "word_count": len(soup.get_text().split()),
            "images": len(soup.find_all('img')),
            "images_no_alt": len([i for i in soup.find_all('img') if not i.get('alt')]),
        }
    return {
        "target": get_metrics(target_url),
        "competitor": get_metrics(competitor_url),
    }
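A usage sketch that prints the comparison side by side (both URLs are placeholders):

results = compare_seo("https://example.com", "https://competitor.example")
for metric, value in results["target"].items():
    print(f"{metric}: yours={value} competitor={results['competitor'][metric]}")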
SEO Audit Report Template
# SEO Audit Report
## Summary
- **Score:** X/100
- **Critical Issues:** X
- **Warnings:** X
- **Passed:** X
## Critical Issues
1. [Issue description]
- Impact: [High/Medium/Low]
- Fix: [Recommended action]
## Technical SEO
| Factor | Status | Notes |
|--------|--------|-------|
| Page Speed | ⚠️ | 4.2s load time |
| Mobile | ✅ | Responsive |
| HTTPS | ✅ | Valid SSL |
| Sitemap | ✅ | Submitted |
## On-Page SEO
| Factor | Status | Notes |
|--------|--------|-------|
| Title | ✅ | 55 chars |
| Meta Desc | ⚠️ | Too short |
| H1 | ✅ | Present |
| Images | ⚠️ | 3 missing alt |
## Recommendations
1. [Priority 1]
2. [Priority 2]
3. [Priority 3]
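A minimal sketch of filling in the Summary block from collected findings; the input lists and the score weighting are assumptions, since the skill defines neither:

def render_summary(critical, warnings, passed):
    # Assumed weighting: each critical issue costs 10 points, each warning 3.
    score = max(0, 100 - 10 * len(critical) - 3 * len(warnings))
    return "\n".join([
        "# SEO Audit Report",
        "## Summary",
        f"- **Score:** {score}/100",
        f"- **Critical Issues:** {len(critical)}",
        f"- **Warnings:** {len(warnings)}",
        f"- **Passed:** {len(passed)}",
    ])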