Aeo

v1.4.0

Run AEO audits, fix site issues, validate schema, generate llms.txt, and compare sites.

by Arber X (@arberx)

Security Scan

VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name and description (audit, fix, schema, llms.txt, monitor) align with the stated instructions. The SKILL.md consistently directs the agent to run the @ainyc/aeo-audit package and to read and modify site/project files as part of fixes and llms.txt generation, which is expected for this functionality.
Instruction Scope
Instructions are generally scoped to the stated tasks: running the audit, reading the project when no URL is provided, generating llms.txt/llms-full.txt, and applying fixes with user confirmation. The document explicitly warns about shell-injection risks and instructs safe argument quoting. However, it permits reading the current project and applying targeted fixes (local file edits), which is a higher-scope operation than read-only auditing and should be acknowledged by the user.
Install Mechanism
There is no install spec, but the skill instructs the agent to run 'npx @ainyc/aeo-audit@1'. npx will fetch and execute code from the public npm registry at runtime. This is a moderate risk vector: remote code execution is expected for this type of tooling but warrants caution (the SKILL.md uses the '@1' semver range rather than a fully pinned version, which allows package behavior to change over time).
Credentials
The skill declares no required environment variables, credentials, or config paths. The operations it describes (auditing, reading local files, writing llms/robots files, making local fixes) do not require additional secrets as declared, so the requested scope is proportionate.
Persistence & Privilege
always:false and no system-level persistence are appropriate. The skill does instruct writing specific files (llms.txt, llms-full.txt, robots.txt) and to apply fixes to the current codebase when asked; the SKILL.md says to get user confirmation before edits. Writing and editing local project files is a meaningful privilege but is proportionate to a 'fix' capability — just ensure the agent asks before making changes and that you have backups/CI checks.
Scan Findings in Context
[no_regex_findings] expected: The static regex scanner found no matches because this is an instruction-only skill with no code files. The absence of findings is expected but does not imply safety; the runtime step (npx) pulls code from npm at execution time and was not analyzable here.
Assessment
This skill appears to do what it says, but it runs a remote npm package at runtime (npx @ainyc/aeo-audit@1) and can edit your project when asked. Before installing or allowing autonomous use: verify the package source (check the GitHub repo and npm publisher), consider pinning to a specific release instead of @1, run the tool manually in a sandbox or CI to inspect its behavior, ensure you have backups and code review enabled for any automatic edits, and require explicit confirmation from the agent before it modifies files. If you don't trust the remote package, refuse execution and consider running an audited local copy instead.


Latest: vk9721g1ye1n0wxgnenhn3q9z9d853an3
615 downloads · 1 star · 12 versions · Updated 1d ago
v1.4.0 · MIT-0

AEO

Website: ainyc.ai

One skill for audit, fixes, schema, llms.txt, and monitoring workflows.

Command

Always use the published package:

npx @ainyc/aeo-audit@1 "<url>" [flags] --format json

Argument Safety

Never interpolate user input directly into shell commands. Always:

  1. Validate that URLs match https:// or http:// and contain no shell metacharacters.
  2. Quote every argument individually (e.g., npx @ainyc/aeo-audit@1 "https://example.com" --format json).
  3. Pass flags as separate, literal tokens — never construct command strings from raw user text.
  4. Reject arguments containing characters like ;, |, &, $, `, (, ), {, }, <, >, or newlines.
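The four rules above can be sketched as a small validator. This is an illustrative Python sketch, not part of the skill (which is instruction-only); the function names are hypothetical:

```python
import re
import subprocess

# Shell metacharacters the skill tells the agent to reject outright.
FORBIDDEN = set(';|&$`(){}<>\n')

def validate_url(url: str) -> str:
    """Accept only plain http(s) URLs with no shell metacharacters."""
    if not re.match(r'^https?://', url):
        raise ValueError(f"not an http(s) URL: {url!r}")
    if any(ch in FORBIDDEN for ch in url):
        raise ValueError(f"shell metacharacter in URL: {url!r}")
    return url

def run_audit(url: str, *flags: str) -> subprocess.CompletedProcess:
    # Arguments are passed as a list of literal tokens, never a single
    # shell string, so nothing in url or flags reaches a shell parser.
    cmd = ["npx", "@ainyc/aeo-audit@1", validate_url(url), *flags, "--format", "json"]
    return subprocess.run(cmd, capture_output=True, text=True)
```

Because `subprocess.run` receives a list (and `shell=True` is never used), quoting is handled by the OS argument vector rather than by string interpolation.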

Modes

  • audit: grade and diagnose a site
  • fix: apply code changes after an audit
  • schema: validate JSON-LD and entity consistency
  • llms: create or improve llms.txt and llms-full.txt
  • monitor: compare changes over time or benchmark competitors

If no mode is provided, default to audit.

Examples

  • audit https://example.com
  • audit https://example.com --sitemap
  • audit https://example.com --sitemap --limit 10
  • audit https://example.com --sitemap --top-issues
  • fix https://example.com
  • schema https://example.com
  • llms https://example.com
  • monitor https://site-a.com --compare https://site-b.com

Mode Selection

  • If the first argument is one of audit, fix, schema, llms, or monitor, use that mode.
  • If no explicit mode is given, infer the intent from the request and default to audit.
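A minimal sketch of this selection logic, with a hypothetical `pick_mode` helper (the skill describes the behavior in prose only):

```python
MODES = {"audit", "fix", "schema", "llms", "monitor"}

def pick_mode(args: list[str]) -> tuple[str, list[str]]:
    """Return (mode, remaining args); default to audit when no mode is given."""
    if args and args[0] in MODES:
        return args[0], args[1:]
    return "audit", args
```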

Audit

Use for broad requests such as "audit this site" or "why am I not being cited?"

  1. Run:
    npx @ainyc/aeo-audit@1 "<url>" [flags] --format json
    
  2. Return:
    • Overall grade and score
    • Short summary
    • Factor breakdown
    • Top strengths
    • Top fixes
    • Metadata such as fetch time and auxiliary file availability
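One way an agent might render those fields from the parsed JSON output. The key names (`grade`, `score`, `summary`, `factors`, `fixes`) are assumptions about the tool's output shape, which is not documented here:

```python
def summarize_audit(result: dict) -> str:
    """Render the fields listed above from a parsed audit result.

    Key names are assumed, not documented behavior of @ainyc/aeo-audit.
    """
    lines = [
        f"Grade: {result['grade']}  Score: {result['score']}",
        result.get("summary", ""),
        "Factors:",
    ]
    for f in result.get("factors", []):
        lines.append(f"  {f['name']}: {f['status']} ({f['score']})")
    lines.append("Top fixes: " + ", ".join(result.get("fixes", [])[:3]))
    return "\n".join(lines)
```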

Sitemap Mode

Use --sitemap to audit all pages discovered from the site's sitemap:

npx @ainyc/aeo-audit@1 "<url>" --sitemap --format json
npx @ainyc/aeo-audit@1 "<url>" --sitemap https://example.com/sitemap.xml --format json
npx @ainyc/aeo-audit@1 "<url>" --sitemap --limit 10 --format json
npx @ainyc/aeo-audit@1 "<url>" --sitemap --top-issues --format json

Flags:

  • --sitemap [url] — auto-discover /sitemap.xml or provide an explicit URL
  • --limit <n> — cap pages audited (sorted by sitemap priority)
  • --top-issues — skip per-page output, show only cross-cutting patterns

Returns:

  • Per-page scores and grades
  • Cross-cutting issues (factors failing across multiple pages)
  • Aggregate score and grade
  • Prioritized fixes ranked by site-wide impact
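The cross-cutting aggregation can be sketched as follows. The per-page `{factor: status}` mapping is an assumed shape for what a sitemap audit yields, not documented output:

```python
from collections import Counter

def cross_cutting_issues(pages: dict[str, dict[str, str]], threshold: int = 2) -> list[str]:
    """Find factors failing (or partial) on at least `threshold` pages.

    `pages` maps URL -> {factor name: status}.
    """
    counts = Counter(
        factor
        for statuses in pages.values()
        for factor, status in statuses.items()
        if status in ("fail", "partial")
    )
    # Rank by how many pages each factor affects (site-wide impact).
    return [f for f, n in counts.most_common() if n >= threshold]
```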

Fix

Use when the user wants code changes applied after the audit.

  1. Run:
    npx @ainyc/aeo-audit@1 "<url>" [flags] --format json
    
  2. Find factors with status partial or fail.
  3. Apply targeted fixes in the current codebase.
  4. Prioritize:
    • Structured data and schema completeness
    • llms.txt and llms-full.txt
    • robots.txt crawler access
    • E-E-A-T signals
    • FAQ markup
    • Freshness metadata
  5. Re-run the audit and report the score delta.
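Step 5's score delta might be computed like this, assuming each run yields an overall `score` plus a per-factor score map (illustrative key names, not documented output):

```python
def score_delta(before: dict, after: dict) -> dict:
    """Per-factor and overall deltas between two audit runs."""
    factors = {
        name: after["factors"].get(name, 0) - score
        for name, score in before["factors"].items()
    }
    return {"overall": after["score"] - before["score"], "factors": factors}
```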

Rules:

  • Always explain proposed changes and get user confirmation before editing files.
  • Do not remove existing schema or content unless the user asks.
  • Preserve existing code style and patterns.
  • If a fix is ambiguous or high-risk, explain the tradeoff before editing.

Schema

Use when the request is specifically about JSON-LD or schema quality.

  1. Run:
    npx @ainyc/aeo-audit@1 "<url>" [flags] --format json --factors structured-data,schema-completeness,entity-consistency
    
  2. Report:
    • Schema types found
    • Property completeness by type
    • Missing recommended properties
    • Entity consistency issues
  3. Provide corrected JSON-LD examples when useful.

Checklist:

  • LocalBusiness: name, address, telephone, openingHours, priceRange, image, url, geo, areaServed, sameAs
  • FAQPage: mainEntity with at least 3 Q&A pairs
  • HowTo: name and at least 3 steps
  • Organization: name, logo, contactPoint, sameAs, foundingDate, url, description
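The checklist lends itself to a simple completeness check. This sketch covers two of the types above, with the property lists copied from the checklist; the helper itself is hypothetical:

```python
# Recommended properties per schema.org type, from the checklist above.
RECOMMENDED = {
    "LocalBusiness": ["name", "address", "telephone", "openingHours", "priceRange",
                      "image", "url", "geo", "areaServed", "sameAs"],
    "Organization": ["name", "logo", "contactPoint", "sameAs", "foundingDate",
                     "url", "description"],
}

def missing_properties(jsonld: dict) -> list[str]:
    """Return checklist properties absent from a parsed JSON-LD object."""
    wanted = RECOMMENDED.get(jsonld.get("@type", ""), [])
    return [p for p in wanted if p not in jsonld]
```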

llms.txt

Use when the user wants llms.txt or llms-full.txt created or improved.

If a URL is provided:

  1. Run:
    npx @ainyc/aeo-audit@1 "<url>" [flags] --format json --factors ai-readable-content
    
  2. Inspect existing AI-readable files if present.
  3. Extract key content from the site.
  4. Generate improved llms.txt and llms-full.txt.

If no URL is provided:

  1. Inspect the current project.
  2. Extract business name, services, FAQs, contact info, and metadata.
  3. Generate both files from local sources.

After generation:

  • Add <link rel="alternate" type="text/markdown" href="/llms.txt"> when appropriate.
  • Suggest adding the files to the sitemap.
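A minimal sketch of assembling llms.txt from locally extracted fields. The section layout follows the common llms.txt convention (H1, blockquote summary, sections); the field names are illustrative:

```python
def build_llms_txt(name: str, summary: str, services: list[str], contact: str) -> str:
    """Assemble a minimal llms.txt from fields extracted from the project."""
    lines = [f"# {name}", "", f"> {summary}", "", "## Services"]
    lines += [f"- {s}" for s in services]
    lines += ["", "## Contact", contact, ""]
    return "\n".join(lines)
```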

Monitor

Use when the user wants progress tracking or a competitor comparison.

Single URL:

  1. Run the audit.
  2. Compare against prior results in .aeo-audit-history/ if present.
  3. Show overall and per-factor deltas.
  4. Save the current result.

Comparison mode:

  1. Parse --compare <url2>.
  2. Audit both URLs.
  3. Show side-by-side factor deltas.
  4. Highlight advantages, weaknesses, and priority gaps.

Behavior

  • If the task needs a deployed site and no URL is provided, ask for the URL.
  • If the task is diagnosis only, do not edit files.
  • If the task is a fix request, make edits and verify with a rerun when possible.
  • If the URL is unreachable or not HTML, report the exact failure.
  • Prefer concise, evidence-based recommendations over generic SEO advice.
