Install
openclaw skills install fenz-skill-auditor

Audits Claude skills from GitHub repositories for effectiveness, token usage, safety, and best-practice compliance, then automatically generates bilingual social media posts about the findings. Use when the user wants to audit a skill, review a skill from GitHub, analyze a SKILL.md, evaluate skill quality, or check a skill for safety and permission issues.

Audit a Claude skill from a GitHub repository. Evaluate effectiveness, token usage, time complexity, permissions, safety, and best-practice compliance. Produce a structured audit report.
Run the clone script with the user-provided GitHub URL:
bash scripts/clone_and_extract.sh <repo-url>
The script outputs JSON listing all SKILL.md files found. If multiple skills exist in the repo, present the list to the user and ask which one(s) to audit.
If the script exits with a non-zero code, report the error output to the user and stop the audit.
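A minimal sketch of this step in Python; the script's exact output schema is an assumption (a JSON array of objects with a `path` field per SKILL.md found):

```python
import json
import subprocess
import sys

def find_skills(repo_url: str) -> list:
    """Clone the repo and list SKILL.md files via the helper script."""
    result = subprocess.run(
        ["bash", "scripts/clone_and_extract.sh", repo_url],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # Non-zero exit: surface the error and stop the audit.
        sys.exit(f"clone_and_extract.sh failed: {result.stderr.strip()}")
    return json.loads(result.stdout)  # assumed: [{"path": ...}, ...]

skills = find_skills("https://github.com/example/skill-repo")
if len(skills) > 1:
    # Multiple skills: present the list and ask which one(s) to audit.
    for i, skill in enumerate(skills, 1):
        print(f"{i}. {skill['path']}")
```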
Create the audit output directory:
audits/<skill-name>-<YYYYMMDD-HHMMSS>/
Write metadata.json with:
{
"repo_url": "<url>",
"timestamp": "<ISO 8601>",
"auditor": "Fenz.AI",
"skill_name": "<name>",
"skill_path": "<path within repo>"
}
Copy all files from the skill directory (the directory containing SKILL.md and its subdirectories) into source/ within the output directory. Then clean up the temp clone directory.
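A sketch of these setup steps, assuming the temp clone path and the skill directory are known from the previous step (the variable names are illustrative):

```python
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

def prepare_output(skill_name: str, repo_url: str, skill_dir: Path, tmp_clone: Path) -> Path:
    """Create audits/<skill-name>-<timestamp>/, write metadata.json, copy the skill source."""
    now = datetime.now(timezone.utc)
    out_dir = Path("audits") / f"{skill_name}-{now.strftime('%Y%m%d-%H%M%S')}"
    out_dir.mkdir(parents=True)

    metadata = {
        "repo_url": repo_url,
        "timestamp": now.isoformat(),  # ISO 8601
        "auditor": "Fenz.AI",
        "skill_name": skill_name,
        "skill_path": str(skill_dir.relative_to(tmp_clone)),
    }
    (out_dir / "metadata.json").write_text(json.dumps(metadata, indent=2))

    # Copy the directory containing SKILL.md (and its subdirectories) into source/.
    shutil.copytree(skill_dir, out_dir / "source")

    # Clean up the temp clone directory.
    shutil.rmtree(tmp_clone)
    return out_dir
```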
Read references/audit-criteria.md for detailed rubrics. Evaluate each category:
Effectiveness: read the skill's SKILL.md and evaluate it against the effectiveness rubric. Rate: Strong / Adequate / Weak
Token usage: run the analysis script:
python3 scripts/analyze_tokens.py <source-dir>
Use the JSON output to assess the skill's token footprint. Rate: Low / Medium / High
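A hedged sketch of consuming the analyzer's output; the `total_tokens` field and the thresholds are assumptions, not part of the actual rubric:

```python
import json
import subprocess

def rate_token_usage(source_dir: str) -> str:
    """Run analyze_tokens.py and map its output to a Low/Medium/High rating."""
    result = subprocess.run(
        ["python3", "scripts/analyze_tokens.py", source_dir],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    total = report["total_tokens"]  # assumed field name
    # Thresholds are illustrative, not taken from the rubric.
    if total < 2_000:
        return "Low"
    if total < 8_000:
        return "Medium"
    return "High"
```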
Time complexity: evaluate how long a typical run of the workflow takes end to end. Rate: Quick / Moderate / Extended
Permissions: check the allowed-tools field in the skill's frontmatter. What tools are requested? Flag any red flags. Rate: Minimal / Moderate / Broad
Safety: evaluate the skill's overall safety. Rate: Low Risk / Medium Risk / High Risk
Best-practice compliance: read references/skill-best-practices.md and check the skill against each item. Group findings by priority.
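One way to collect and group the checklist findings, sketched below; the priority labels are assumptions, since the checklist itself is not reproduced here:

```python
from collections import defaultdict

# Each finding pairs a checklist item with a priority label.
# The labels (critical/important/minor) are illustrative assumptions.
findings = [
    {"item": "Description states when to trigger the skill", "priority": "critical"},
    {"item": "References split into separate files", "priority": "minor"},
]

grouped = defaultdict(list)
for finding in findings:
    grouped[finding["priority"]].append(finding["item"])

for priority in ("critical", "important", "minor"):
    for item in grouped.get(priority, []):
        print(f"[{priority}] {item}")
```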
Read assets/audit-report-template.md and fill in all template fields with the analysis results. Save as audit-report.md in the output directory.
Include the rating and key findings for each category above.
Maintain process-log.md in the output directory. Append each step as it completes:
## [YYYY-MM-DD HH:MM:SS] Step N: <step name>
- Status: success/failed/skipped
- Details: <what happened>
- Errors: <if any>
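A minimal helper for appending these entries, assuming the log format shown above:

```python
from datetime import datetime
from pathlib import Path

def log_step(out_dir: Path, step_num: int, name: str, status: str,
             details: str, errors: str = "none") -> None:
    """Append one step entry to process-log.md in the audit output directory."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    entry = (
        f"## [{stamp}] Step {step_num}: {name}\n"
        f"- Status: {status}\n"
        f"- Details: {details}\n"
        f"- Errors: {errors}\n\n"
    )
    with (out_dir / "process-log.md").open("a") as log:
        log.write(entry)
```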
Automatically generate posts from the audit report.
Run the extraction script, as sketched below:
python3 ../post-generator/scripts/extract_findings.py <audit-dir>/audit-report.md
Use ../post-generator/references/writing-guide-en.md and ../post-generator/assets/post-template-twitter-en.md for the English post, and ../post-generator/references/writing-guide-zh.md and ../post-generator/assets/post-template-twitter-zh.md for the Chinese post. Save the results as posts-en.md and posts-zh.md in the audit output directory, and log the step in process-log.md.
Quality rules:
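The sketch referenced above, under the assumption that extract_findings.py prints the key findings to stdout (its actual interface may differ):

```python
import subprocess
from pathlib import Path

def generate_posts(audit_dir: Path) -> None:
    """Extract findings from the audit report and stage bilingual post drafts."""
    report = audit_dir / "audit-report.md"
    # Assumed: extract_findings.py prints the key findings to stdout.
    findings = subprocess.run(
        ["python3", "../post-generator/scripts/extract_findings.py", str(report)],
        capture_output=True, text=True, check=True,
    ).stdout

    for lang in ("en", "zh"):
        guide = Path(f"../post-generator/references/writing-guide-{lang}.md").read_text()
        template = Path(f"../post-generator/assets/post-template-twitter-{lang}.md").read_text()
        # The model drafts the post from findings + guide + template;
        # this sketch only stages the inputs and writes the output file.
        draft = f"{template}\n\n<!-- findings -->\n{findings}"
        (audit_dir / f"posts-{lang}.md").write_text(draft)
```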