Install
openclaw skills install aeo

Run AEO audits, fix site issues, validate schema, generate llms.txt, and compare sites.

Website: ainyc.ai
One skill for audit, fixes, schema, llms.txt, and monitoring workflows.
Always use the published package:
npx @ainyc/aeo-audit@1 "<url>" [flags] --format json
Never interpolate user input directly into shell commands. Always:

- Validate that the URL starts with https:// or http:// and contains no shell metacharacters.
- Quote the URL (e.g. npx @ainyc/aeo-audit@1 "https://example.com" --format json).
- Reject input containing ;, |, &, $, `, (, ), {, }, <, >, or newlines.

Modes:

- audit: grade and diagnose a site
- fix: apply code changes after an audit
- schema: validate JSON-LD and entity consistency
- llms: create or improve llms.txt and llms-full.txt
- monitor: compare changes over time or benchmark competitors
- detect-platform: identify the CMS, site builder, framework, or hosting stack a site uses

If no mode is provided, default to audit.
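The shell-safety rules above can be sketched as a guard that runs before any mode is invoked. This is a Python sketch, not part of the CLI; the function names are illustrative, and passing the URL as a separate argv element (rather than building a shell string) avoids interpolation entirely:

```python
import subprocess

# Metacharacters named in the rules above, plus newlines.
FORBIDDEN = set(";|&$`(){}<>\n\r")

def safe_url(url: str) -> str:
    """Validate a user-supplied URL before it reaches the CLI."""
    if not url.startswith(("https://", "http://")):
        raise ValueError("URL must start with https:// or http://")
    if any(ch in FORBIDDEN for ch in url):
        raise ValueError("URL contains a shell metacharacter")
    return url

def run_audit(url: str) -> subprocess.CompletedProcess:
    # argv list form: no shell is involved, so nothing is interpolated
    return subprocess.run(
        ["npx", "@ainyc/aeo-audit@1", safe_url(url), "--format", "json"],
        capture_output=True, text=True,
    )
```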
Examples:

- audit https://example.com
- audit https://example.com --sitemap
- audit https://example.com --sitemap --limit 10
- audit https://example.com --sitemap --top-issues
- fix https://example.com
- schema https://example.com
- llms https://example.com
- monitor https://site-a.com --compare https://site-b.com
- detect-platform https://example.com
- detect-platform https://example.com --min-confidence high
- detect-platform --urls competitors.txt
- detect-platform --urls https://a.com,https://b.com

If the request names audit, fix, schema, llms, monitor, or detect-platform, use that mode. Otherwise default to audit.

Audit

Use for broad requests such as "audit this site" or "why am I not being cited?"
npx @ainyc/aeo-audit@1 "<url>" [flags] --format json
Use --sitemap to audit all pages discovered from the site's sitemap:
npx @ainyc/aeo-audit@1 "<url>" --sitemap --format json
npx @ainyc/aeo-audit@1 "<url>" --sitemap https://example.com/sitemap.xml --format json
npx @ainyc/aeo-audit@1 "<url>" --sitemap --limit 10 --format json
npx @ainyc/aeo-audit@1 "<url>" --sitemap --top-issues --format json
Flags:
- --sitemap [url] — auto-discover /sitemap.xml or provide an explicit URL
- --limit <n> — cap pages audited (default 200, sorted by sitemap priority)
- --top-issues — skip per-page output, show only cross-cutting patterns

Pages are audited with bounded concurrency (5 in flight) to avoid hammering the target origin.
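The bounded-concurrency behavior can be approximated client-side when orchestrating your own page list. A Python sketch, assuming a hypothetical audit_page routine (it stands in for whatever per-page fetch-and-score step you run; it is not part of the CLI):

```python
from concurrent.futures import ThreadPoolExecutor

MAX_IN_FLIGHT = 5  # mirrors the CLI's bounded concurrency of 5 pages in flight

def audit_pages(urls, audit_page):
    """Run audit_page over every URL with at most MAX_IN_FLIGHT in progress,
    returning results in the same order as the input URLs."""
    with ThreadPoolExecutor(max_workers=MAX_IN_FLIGHT) as pool:
        return list(pool.map(audit_page, urls))
```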
Returns:
Detect platform

Use --detect-platform when the user wants to know what stack a site is built on (e.g., "is this WordPress?", "what framework does competitor X use?", "is this site custom-built?"). This is much faster than a full audit because it skips analyzer scoring.
npx @ainyc/aeo-audit@1 "<url>" --detect-platform --format json
npx @ainyc/aeo-audit@1 "<url>" --detect-platform --min-confidence high --format json
Flags:
- --detect-platform — switch to detection mode instead of auditing
- --min-confidence <lvl> — filter to low (default), medium, or high confidence
- --urls <src> — run on multiple URLs at once (file path, comma-separated list, or - for stdin)
- --concurrency <n> — max in-flight fetches in batch mode (default 5)

The report groups detections by category (CMS, site builder, e-commerce, framework, SSG, hosting), each with a confidence bucket, a 0–100 score, an optional version, and the signals that matched. When the report's isCustom flag is true, no CMS, site-builder, or e-commerce platform was identified — the site is likely custom-built. Exit code is 0 when at least one platform is detected, 1 otherwise.
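Filtering a parsed report by confidence is then a small post-processing step. A Python sketch; the "detections" key and entry shape here are assumptions about the JSON layout, not the tool's documented schema:

```python
def filter_by_confidence(report: dict, level: str = "high") -> dict:
    """Keep only detections whose confidence bucket matches `level`.
    Assumes report["detections"] maps category -> list of entries,
    each with a "confidence" field (an illustrative shape)."""
    return {
        category: [d for d in hits if d.get("confidence") == level]
        for category, hits in report.get("detections", {}).items()
    }
```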
Batch detection

When the user wants to fingerprint many sites at once (competitor lists, customer cohorts), pass --urls:
npx @ainyc/aeo-audit@1 --detect-platform --urls urls.txt --format json
npx @ainyc/aeo-audit@1 --detect-platform --urls https://a.com,https://b.com --format json
cat urls.txt | npx @ainyc/aeo-audit@1 --detect-platform --urls - --format json
The batch report contains a results array; each entry has status: 'success' or 'error', plus the same shape as a single-URL report on success. Per-URL fetch errors do not abort the run. Exit code is 0 when at least one URL succeeded, 1 otherwise.
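Consuming that batch report might look like the following Python sketch; only the results array, the status field, and the exit-code rule come from the description above — the summary line is illustrative:

```python
def batch_exit_code(batch: dict) -> int:
    """Summarize a batch report and mirror the CLI's exit-code rule:
    0 when at least one URL succeeded, 1 otherwise."""
    results = batch.get("results", [])
    succeeded = sum(1 for r in results if r.get("status") == "success")
    failed = len(results) - succeeded
    print(f"{succeeded} succeeded, {failed} failed")
    return 0 if succeeded else 1
```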
Fix

Use when the user wants code changes applied after the audit.
npx @ainyc/aeo-audit@1 "<url>" [flags] --format json
Apply fixes for checks the audit graded partial or fail, including:

- llms.txt and llms-full.txt
- robots.txt crawler access

Rules:
Schema

Use when the request is specifically about JSON-LD or schema quality.
npx @ainyc/aeo-audit@1 "<url>" [flags] --format json --factors structured-data,schema-completeness,schema-validity,entity-consistency
Watch for hard validity errors (missing @types, JSON parse errors, empty <script> blocks) — surface these prominently regardless of overall score; Google drops invalid blocks silently from rich results.

Checklist:
- LocalBusiness: name, address, telephone, openingHours, priceRange, image, url, geo, areaServed, sameAs
- FAQPage: mainEntity with at least 3 Q&A pairs (and only one FAQPage block per page — duplicates invalidate rich results)
- HowTo: name and at least 3 steps (singleton — only one per page)
- Organization: name, logo, contactPoint, sameAs, foundingDate, url, description

Supported types: FAQPage, HowTo, Article, BlogPosting, NewsArticle, BreadcrumbList, Product, Recipe.

llms

Use when the user wants llms.txt or llms-full.txt created or improved.
If a URL is provided:
npx @ainyc/aeo-audit@1 "<url>" [flags] --format json --factors ai-readable-content
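For orientation, a minimal llms.txt is a short markdown file: an H1 with the site name, a blockquote summary, and H2 sections of annotated links. The names and URLs below are illustrative, not output from the tool:

```markdown
# Example Co

> One-sentence summary of what the site offers and who it is for.

## Docs

- [Getting started](https://example.com/docs/start): setup guide
- [API reference](https://example.com/docs/api): endpoints and auth

## Optional

- [Blog](https://example.com/blog): release notes and deep dives
```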
Generate or improve llms.txt and llms-full.txt.

If no URL is provided:
After generation:
Suggest adding <link rel="alternate" type="text/markdown" href="/llms.txt"> when appropriate.

Monitor

Use when the user wants progress tracking or a competitor comparison.
Single URL:
Results are compared against prior runs stored in .aeo-audit-history/ if present.

Comparison mode:
Pass --compare <url2>.
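When history is available, progress tracking reduces to diffing two reports. A Python sketch; the "scores" mapping of factor to number is an assumed shape for illustration, not the tool's documented schema:

```python
def score_delta(before: dict, after: dict) -> dict:
    """Per-factor score change between two audit runs (positive = improved).
    Missing factors are treated as scoring 0 in the run they are absent from."""
    factors = set(before.get("scores", {})) | set(after.get("scores", {}))
    return {
        f: after.get("scores", {}).get(f, 0) - before.get("scores", {}).get(f, 0)
        for f in sorted(factors)
    }
```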