Content Quality Auditor

This skill should be used when the user asks to "audit content quality", "EEAT score", "E-E-A-T audit", "content quality check", "CORE-EEAT audit", "helpful...

Security Scan

VirusTotal: Benign · OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (80-item CORE-EEAT audit, GEO/SEO scoring, prioritized fixes) match the SKILL.md content. The skill only needs page text/URLs and, optionally, SEO-tool connectors for richer signals; it does not declare unrelated credentials or binaries.
Instruction Scope
SKILL.md instructs the agent to evaluate provided content or fetch public URLs (with WebFetch). It documents which site-level signals will be marked N/A if unavailable. There are no instructions to read unrelated system files, harvest environment variables, or send data to unexpected external endpoints beyond normal web fetching and optional SEO integrations.
Install Mechanism
Instruction-only skill with no install spec and no code files — nothing is written to disk or downloaded by the skill itself.
Credentials
The skill declares no required environment variables, credentials, or config paths. Metadata notes optional SEO-tool connectors for richer data; these are proportional to the described capability and clearly marked optional.
Persistence & Privilege
The `always` flag is false, and the skill does not request persistent or system-wide privileges. Autonomous invocation is allowed (platform default) but is not combined with other high-risk factors.
Assessment
This skill appears to do what it says: audit content against an 80‑item CORE‑EEAT benchmark. Before using it, avoid pasting private or sensitive content (PII, credentials, proprietary documents) unless you intend to share them for review. If you connect third‑party SEO tools or allow network access, confirm those integrations separately (they're optional). When providing file paths, prefer pasting content or uploading rather than granting broad filesystem access. Otherwise, this instruction‑only skill is coherent and proportional to its purpose.


Current version: v3.0.0

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Content Quality Auditor

Based on CORE-EEAT Content Benchmark. Full benchmark reference: references/core-eeat-benchmark.md

SEO & GEO Skills Library · 20 skills for SEO + GEO · Install all: npx skills add aaron-he-zhu/seo-geo-claude-skills

<details> <summary>Browse all 20 skills</summary>

Research · keyword-research · competitor-analysis · serp-analysis · content-gap-analysis

Build · seo-content-writer · geo-content-optimizer · meta-tags-optimizer · schema-markup-generator

Optimize · on-page-seo-auditor · technical-seo-checker · internal-linking-optimizer · content-refresher

Monitor · rank-tracker · backlink-analyzer · performance-reporter · alert-manager

Cross-cutting · content-quality-auditor · domain-authority-auditor · entity-optimizer · memory-management

</details>

This skill evaluates content quality across 80 standardized criteria organized in 8 dimensions. It produces a comprehensive audit report with per-item scoring, dimension and system scores, weighted totals by content type, and a prioritized action plan.

When to Use This Skill

  • Auditing content quality before publishing
  • Evaluating existing content for improvement opportunities
  • Benchmarking content against CORE-EEAT standards
  • Comparing content quality against competitors
  • Assessing both GEO readiness (AI citation potential) and SEO strength (source credibility)
  • Running periodic content quality checks as part of a content maintenance program
  • After writing or optimizing content with seo-content-writer or geo-content-optimizer

What This Skill Does

  1. Full 80-Item Audit: Scores every CORE-EEAT check item as Pass/Partial/Fail
  2. Dimension Scoring: Calculates scores for all 8 dimensions (0-100 each)
  3. System Scoring: Computes GEO Score (CORE) and SEO Score (EEAT)
  4. Weighted Totals: Applies content-type-specific weights for final score
  5. Veto Detection: Flags critical trust violations (T04, C01, R10)
  6. Priority Ranking: Identifies Top 5 improvements sorted by impact
  7. Action Plan: Generates specific, actionable improvement steps

How to Use

Audit Content

Audit this content against CORE-EEAT: [content text or URL]
Run a content quality audit on [URL] as a [content type]

Audit with Content Type

CORE-EEAT audit for this product review: [content]
Score this how-to guide against the 80-item benchmark: [content]

Comparative Audit

Audit my content vs competitor: [your content] vs [competitor content]

Data Sources

See CONNECTORS.md for tool category placeholders.

With a web crawler and SEO tool connected: Automatically fetch page content, extract HTML structure, check schema markup, verify internal/external links, and pull competitor content for comparison.

With manual data only: Ask the user to provide:

  1. Content text, URL, or file path
  2. Content type (if not auto-detectable): Product Review, How-to Guide, Comparison, Landing Page, Blog Post, FAQ Page, Alternative, Best-of, or Testimonial
  3. Optional: competitor content for benchmarking

Proceed with the full 80-item audit using provided data. Note in the output which items could not be fully evaluated due to missing access (e.g., backlink data, schema markup, site-level signals).

Instructions

When a user requests a content quality audit:

Step 1: Preparation

### Audit Setup

**Content**: [title or URL]
**Content Type**: [auto-detected or user-specified]
**Dimension Weights**: [loaded from content-type weight table]

#### Veto Check (Emergency Brake)

| Veto Item | Status | Action |
|-----------|--------|--------|
| T04: Disclosure Statements | ✅ Pass / ⚠️ VETO | [If VETO: "Add disclosure banner at page top immediately"] |
| C01: Intent Alignment | ✅ Pass / ⚠️ VETO | [If VETO: "Rewrite title and first paragraph"] |
| R10: Content Consistency | ✅ Pass / ⚠️ VETO | [If VETO: "Verify all data before publishing"] |

If any veto item triggers, flag it prominently at the top of the report and recommend immediate action before continuing the full audit.
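
To make the gate explicit, here is a minimal sketch of the veto check; the item IDs and actions come from the table above, while the `item_status` mapping is an assumed input structure, not something the skill defines:

```python
# Veto items and their emergency actions, per the CORE-EEAT benchmark (Section 3).
VETO_ACTIONS = {
    "T04": "Add disclosure banner at page top immediately",
    "C01": "Rewrite title and first paragraph",
    "R10": "Verify all data before publishing",
}

def check_vetoes(item_status):
    """Return urgent actions for any veto item that does not Pass.

    item_status maps item IDs ("T04", "C01", "R10", ...) to
    "Pass" / "Partial" / "Fail" -- an assumed structure for this sketch.
    """
    return [
        f"VETO {item_id}: {action}"
        for item_id, action in VETO_ACTIONS.items()
        if item_status.get(item_id) != "Pass"
    ]

# Example: a missing disclosure statement triggers the T04 veto.
print(check_vetoes({"T04": "Fail", "C01": "Pass", "R10": "Pass"}))
```
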

Step 2: CORE Audit (40 items)

Evaluate each item against the criteria in references/core-eeat-benchmark.md.

Score each item:

  • Pass = 10 points (fully meets criteria)
  • Partial = 5 points (partially meets criteria)
  • Fail = 0 points (does not meet criteria)
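
As a minimal sketch of how this point scale rolls up into a dimension score (the sample statuses below are invented for illustration):

```python
POINTS = {"Pass": 10, "Partial": 5, "Fail": 0}

def dimension_score(statuses):
    """Score one dimension on a 0-100 scale.

    Each of the 10 items is worth up to 10 points, so a dimension
    with every item passing scores 100.
    """
    earned = sum(POINTS[s] for s in statuses)
    return earned / (len(statuses) * 10) * 100

# Example: 6 Pass, 3 Partial, 1 Fail -> (60 + 15 + 0) / 100 * 100 = 75.0
c_statuses = ["Pass"] * 6 + ["Partial"] * 3 + ["Fail"]
print(dimension_score(c_statuses))  # 75.0
```
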
### C — Contextual Clarity

| ID | Check Item | Score | Notes |
|----|-----------|-------|-------|
| C01 | Intent Alignment | Pass/Partial/Fail | [specific observation] |
| C02 | Direct Answer | Pass/Partial/Fail | [specific observation] |
| ... | ... | ... | ... |
| C10 | Semantic Closure | Pass/Partial/Fail | [specific observation] |

**C Score**: [X]/100

Repeat the same table format for O (Organization), R (Referenceability), and E (Exclusivity), scoring all 10 items per dimension.

Step 3: EEAT Audit (40 items)

### Exp — Experience

| ID | Check Item | Score | Notes |
|----|-----------|-------|-------|
| Exp01 | First-Person Narrative | Pass/Partial/Fail | [specific observation] |
| ... | ... | ... | ... |

**Exp Score**: [X]/100

Repeat the same table format for Ept (Expertise), A (Authority), and T (Trust), scoring all 10 items per dimension.

See references/item-reference.md for the complete 80-item ID lookup table and site-level item handling notes.

Step 4: Scoring & Report

Calculate scores and generate the final report:

## CORE-EEAT Audit Report

### Overview

- **Content**: [title]
- **Content Type**: [type]
- **Audit Date**: [date]
- **Total Score**: [score]/100 ([rating])
- **GEO Score**: [score]/100 | **SEO Score**: [score]/100
- **Veto Status**: ✅ No triggers / ⚠️ [item] triggered

### Dimension Scores

| Dimension | Score | Rating | Weight | Weighted |
|-----------|-------|--------|--------|----------|
| C — Contextual Clarity | [X]/100 | [rating] | [X]% | [X] |
| O — Organization | [X]/100 | [rating] | [X]% | [X] |
| R — Referenceability | [X]/100 | [rating] | [X]% | [X] |
| E — Exclusivity | [X]/100 | [rating] | [X]% | [X] |
| Exp — Experience | [X]/100 | [rating] | [X]% | [X] |
| Ept — Expertise | [X]/100 | [rating] | [X]% | [X] |
| A — Authority | [X]/100 | [rating] | [X]% | [X] |
| T — Trust | [X]/100 | [rating] | [X]% | [X] |
| **Weighted Total** | | | | **[X]/100** |

**Score Calculation**:
- GEO Score = (C + O + R + E) / 4
- SEO Score = (Exp + Ept + A + T) / 4
- Weighted Score = Σ (dimension_score × content_type_weight)

**Rating Scale**: 90-100 Excellent | 75-89 Good | 60-74 Medium | 40-59 Low | 0-39 Poor
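
A minimal sketch of these three formulas; the dimension scores and weights below are illustrative placeholders (real weights come from the content-type weight table and must sum to 100%):

```python
# Hypothetical dimension scores for one audit (0-100 each).
dims = {"C": 85, "O": 78, "R": 70, "E": 62, "Exp": 55, "Ept": 80, "A": 45, "T": 90}

# Hypothetical content-type weights; the real values are content-type specific.
weights = {"C": 0.15, "O": 0.10, "R": 0.15, "E": 0.15,
           "Exp": 0.15, "Ept": 0.10, "A": 0.05, "T": 0.15}

geo_score = sum(dims[d] for d in ("C", "O", "R", "E")) / 4      # CORE average
seo_score = sum(dims[d] for d in ("Exp", "Ept", "A", "T")) / 4  # EEAT average
weighted_score = sum(dims[d] * weights[d] for d in dims)        # weighted total

print(geo_score, seo_score, weighted_score)  # GEO 73.75, SEO 67.5, weighted ~72.35
```
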

### N/A Item Handling

When an item cannot be evaluated (e.g., A01 Backlink Profile requires site-level data not available):

1. Mark the item as "N/A" with reason
2. Exclude N/A items from the dimension score calculation
3. Dimension Score = (sum of scored item points) / (number of scored items × 10) × 100
4. If more than 50% of a dimension's items are N/A, flag the dimension as "Insufficient Data" and exclude it from the weighted total
5. Recalculate weighted total using only dimensions with sufficient data, re-normalizing weights to sum to 100%

**Example**: Authority dimension with 8 N/A items and 2 scored items (A05 = Pass = 10 points, A07 = Partial = 5 points):
- Dimension score = (10 + 5) / (2 × 10) × 100 = 75
- But 8/10 items are N/A (>50%), so flag the dimension as "Insufficient Data: Authority"
- Exclude the A dimension from the weighted total; redistribute its weight proportionally to the remaining dimensions (see the sketch below)
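
A minimal sketch of the N/A handling, including the insufficient-data flag and weight re-normalization (the data structures are assumptions; item points follow the 10/5/0 scale):

```python
def dimension_score_with_na(points):
    """Score a dimension whose items may be N/A (None).

    N/A items are excluded from the calculation; if more than half
    of the items are N/A, the dimension is flagged as insufficient.
    Returns (score_or_None, insufficient_data).
    """
    scored = [p for p in points if p is not None]
    insufficient = len(scored) < len(points) / 2
    if not scored:
        return None, True
    return sum(scored) / (len(scored) * 10) * 100, insufficient

def renormalize_weights(weights, usable_dims):
    """Redistribute weight so the usable dimensions sum back to 100%."""
    total = sum(weights[d] for d in usable_dims)
    return {d: weights[d] / total for d in usable_dims}

# Example from above: Authority with 8 N/A items, A05 = Pass (10), A07 = Partial (5).
authority = [None] * 8 + [10, 5]
print(dimension_score_with_na(authority))  # (75.0, True) -> Insufficient Data
```
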

### Per-Item Scores

#### CORE — Content Body (40 Items)

| ID | Check Item | Score | Notes |
|----|-----------|-------|-------|
| C01 | Intent Alignment | [Pass/Partial/Fail] | [observation] |
| C02 | Direct Answer | [Pass/Partial/Fail] | [observation] |
| ... | ... | ... | ... |

#### EEAT — Source Credibility (40 Items)

| ID | Check Item | Score | Notes |
|----|-----------|-------|-------|
| Exp01 | First-Person Narrative | [Pass/Partial/Fail] | [observation] |
| ... | ... | ... | ... |

### Top 5 Priority Improvements

Sorted by: weight × points lost (highest impact first)

1. **[ID] [Name]** — [specific modification suggestion]
   - Current: [Fail/Partial] | Potential gain: [X] weighted points
   - Action: [concrete step]

2. **[ID] [Name]** — [specific modification suggestion]
   - Current: [Fail/Partial] | Potential gain: [X] weighted points
   - Action: [concrete step]

3–5. [Same format]
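
A minimal sketch of this impact ranking; the item and weight structures are assumptions, and "points lost" is measured against the 10-point maximum per item:

```python
POINTS = {"Pass": 10, "Partial": 5, "Fail": 0}

def top_improvements(items, weights, n=5):
    """Rank non-passing items by dimension weight x points lost.

    items: list of (item_id, dimension, status) tuples (assumed shape)
    weights: dimension -> content-type weight, e.g. {"E": 0.15, ...}
    """
    impact = [
        (weights[dim] * (10 - POINTS[status]), item_id)
        for item_id, dim, status in items
        if status != "Pass"
    ]
    return sorted(impact, reverse=True)[:n]

# Example: a failed Exclusivity item outranks a partial Authority item.
print(top_improvements(
    [("E03", "E", "Fail"), ("A05", "A", "Partial"), ("C02", "C", "Pass")],
    {"E": 0.15, "A": 0.05, "C": 0.15},
))
```
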

### Action Plan

#### Quick Wins (< 30 minutes each)
- [ ] [Action 1]
- [ ] [Action 2]

#### Medium Effort (1-2 hours)
- [ ] [Action 3]
- [ ] [Action 4]

#### Strategic (Requires planning)
- [ ] [Action 5]
- [ ] [Action 6]

### Recommended Next Steps

- For full content rewrite: use [seo-content-writer](../../build/seo-content-writer/) with CORE-EEAT constraints
- For GEO optimization: use [geo-content-optimizer](../../build/geo-content-optimizer/) targeting failed GEO-First items
- For content refresh: use [content-refresher](../../optimize/content-refresher/) with weak dimensions as focus
- For technical fixes: run `/seo:check-technical` for site-level issues

Validation Checkpoints

Input Validation

  • Content source identified (text, URL, or file path)
  • Content type confirmed (auto-detected or user-specified)
  • Content is substantial enough for meaningful audit (≥300 words)
  • If comparative audit, competitor content also provided

Output Validation

  • All 80 items scored (or marked N/A with reason)
  • All 8 dimension scores calculated correctly
  • Weighted total matches content-type weight configuration
  • Veto items checked and flagged if triggered
  • Top 5 improvements sorted by weighted impact, not arbitrary
  • Every recommendation is specific and actionable (not generic advice)
  • Action plan includes concrete steps with effort estimates

Example

See references/item-reference.md for a complete scored example showing the C dimension with all 10 items, priority improvements, and weighted scoring.

Tips for Success

  1. Start with veto items — T04, C01, R10 are deal-breakers regardless of total score

    These veto items are consistent with the CORE-EEAT benchmark (Section 3), which defines them as items that can override the overall score.

  2. Focus on high-weight dimensions — Different content types prioritize different dimensions
  3. GEO-First items matter most for AI visibility — Prioritize items tagged GEO 🎯 if AI citation is the goal
  4. Some EEAT items need site-level data — Don't penalize content for things only observable at the site level (backlinks, brand recognition)
  5. Use the weighted score, not just the raw average — A product review with strong Exclusivity matters more than strong Authority
  6. Re-audit after improvements — Run again to verify score improvements and catch regressions
  7. Pair with CITE for domain-level context — A high content score on a low-authority domain signals a different priority than the reverse; run domain-authority-auditor for the full 120-item picture

Reference Materials

  • references/core-eeat-benchmark.md: full 80-item CORE-EEAT benchmark reference
  • references/item-reference.md: 80-item ID lookup table, site-level item handling notes, and a complete scored example
  • CONNECTORS.md: tool category placeholders for optional data sources

Related Skills

seo-content-writer · geo-content-optimizer · content-refresher · domain-authority-auditor

