Aeo Toolkit

v2.0.2

Audit any website's visibility to AI agents and generate every file needed to fix it. Detects JS rendering gaps, missing structured data, blocked AI crawlers...

MIT-0
Security Scan
Capability signals
Crypto · Can make purchases · Requires OAuth token · Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
OpenClaw
Benign
high confidence
Purpose & Capability
Name and description (audit AI visibility, generate robots/llms/JSON-LD templates) match the included scripts and output templates. The single provided Python script implements crawling, parsing, scoring, and template generation — all consistent with the stated purpose. No unexpected credentials, binaries, or unrelated capabilities are requested.
Instruction Scope
SKILL.md instructs the agent to run the included Python crawler against user-supplied website URLs and to produce human-readable reports and files. The instructions do not ask the agent to read local secrets, system config, or post data to external endpoints other than the target websites. Note: the tool performs network fetches of arbitrary URLs (expected for a crawler) and writes generated files to workspace — review outputs before publishing them to a domain you don't control.
Install Mechanism
No install spec; code is instruction-only plus a single Python stdlib script. The script claims to use only Python standard library (urllib, html.parser, etc.), so there is no package download or archive extraction in the install process — low install risk.
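A stdlib-only crawler of this kind typically pairs `urllib.request` for fetching with `html.parser` for extraction. As an illustrative sketch (not the skill's actual code), here is how `html.parser` alone can pull out the signals an AI-visibility audit cares about, such as the page title and link targets, with no third-party packages:

```python
from html.parser import HTMLParser

class AuditParser(HTMLParser):
    """Collects title, links, and JSON-LD blocks from a fetched page."""

    def __init__(self):
        super().__init__()
        self.title = None
        self.links = []
        self.jsonld = []
        self._in_title = False
        self._in_jsonld = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "script" and attrs.get("type") == "application/ld+json":
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
        elif tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_title:
            self.title = data.strip()
        elif self._in_jsonld:
            self.jsonld.append(data.strip())

# In a real crawl the HTML would come from urllib.request.urlopen(url).read();
# a static string keeps this sketch self-contained.
page = '<html><head><title>Example</title></head><body><a href="/about">About</a></body></html>'
p = AuditParser()
p.feed(page)
print(p.title, p.links)  # → Example ['/about']
```

Because everything here ships with CPython, the "no package download" claim is plausible on its face; the same pattern scales to scoring checks (missing title, no JSON-LD, etc.).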
Credentials
The skill requires no environment variables, credentials, or config paths. The crawler needs network access to fetch target sites (coherent with its purpose) but does not attempt to access system secrets or external APIs requiring keys.
Persistence & Privilege
always:false (default) and autonomous invocation is allowed (platform default). The skill does not request persistent system-level presence or attempt to modify other skills/configs. It writes generated files (robots.txt, llms.txt, JSON-LD templates) to the workspace, which is expected.
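The generated files are plain text, so reviewing them is straightforward. A minimal illustrative example of the kind of robots.txt such a tool emits to unblock AI crawlers (the user-agent tokens are real crawler names, but the skill's actual output and the domain are placeholders here):

```text
# Explicitly allow major AI crawlers (illustrative sketch only)
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```

Diffing a generated file like this against your live robots.txt before deploying is the cheapest way to catch an unintended Allow or Disallow.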
Scan Findings in Context
[pre-scan-injection] expected: No regex-based injection findings or pre-scan signals were detected. This is expected for a small, pure-Python crawler and instruction-only skill.
Assessment
This skill appears to do what it says: it crawls public websites and generates audit reports plus drop-in/template files. Before running or installing:

1. Only audit sites you own or have permission to scan; the script performs live HTTP requests.
2. Review generated robots.txt, llms.txt, agents-brief.txt, and any JSON-LD templates before uploading them to a live site; they contain placeholders and example content (the repository includes 99rebels-specific templates).
3. Be cautious with flags like --no-ssl-verify and with deep or large crawls against third-party sites.
4. Because the tool writes files to your workspace, inspect outputs for unexpected content or placeholders that could leak sensitive information if published.

Overall: coherent and low-risk, but follow normal operational hygiene (run in a safe environment and verify outputs).
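On reviewing the JSON-LD templates specifically: they follow schema.org conventions, so the main thing to check is that every placeholder value has been replaced with your own data. A minimal hypothetical example of the kind of block to inspect (all values here are placeholders, not the skill's actual template):

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://example.com",
  "sameAs": ["https://www.linkedin.com/company/example"]
}
```

Publishing a template with leftover example names or URLs is exactly the kind of leak point 4 above warns about.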

Like a lobster shell, security has layers — review code before you run it.

latest · vk97ez6a6t38qwpshxtdtjekg8h8528gc

License

MIT-0
Free to use, modify, and redistribute. No attribution required.
