Writing Credibility Auditor

v1.0.0

Audit any piece of writing for missing citations, unsupported claims, logical fallacies, weasel words, and misleading statistics — then produce a structured...

Security Scan

VirusTotal: Pending
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (credibility auditing) match the SKILL.md and README: the skill performs language- and reasoning-based scans for fallacies, unsupported claims, weasel words, and misleading statistics. It requests no unrelated credentials, binaries, or configuration.
Instruction Scope
SKILL.md contains explicit, bounded instructions for analyzing user-supplied text and specifies it does not perform live web searches or other external data collection. It does not instruct the agent to read system files, environment variables, or transmit data to external endpoints.
Install Mechanism
No install spec and no code files — the skill is instruction-only, so nothing is downloaded or written to disk. This is low-risk and proportionate for a reasoning-only auditor.
Credentials
The skill declares no required env vars, credentials, or config paths. That is appropriate for a pure-reasoning text audit and there are no hidden requests in SKILL.md or README.
Persistence & Privilege
always: false (the default) and model invocation is allowed (disable-model-invocation: false). Autonomous invocation is the platform default and is not in itself a security concern here; just be aware that the agent can call the skill whenever user intent matches its triggers.
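For readers unfamiliar with these settings, the two flags above would live in the skill's SKILL.md frontmatter, roughly as sketched below. This is a hypothetical illustration, not the actual file: the `name` and `description` values are paraphrased from this listing, and only `always` and `disable-model-invocation` are fields the review explicitly mentions.

```yaml
---
# Illustrative SKILL.md frontmatter (values paraphrased from this listing)
name: writing-credibility-auditor
description: Audit writing for unsupported claims, fallacies, and weasel words

# Not loaded into every session; activated only when relevant (default)
always: false

# The agent may invoke the skill on its own when user intent matches
disable-model-invocation: false
---
```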
Assessment
This skill is internally consistent and low-risk because it only contains written instructions for analyzing text and requests no installs, files, or credentials.

Before installing, note these practical limitations. The skill does not perform live fact-checking or verify external sources (it audits reasoning and language only), so outputs that appear to cite studies or assert factual corrections should be double-checked against primary sources. Purely model-driven detection can also produce false positives and negatives and may sometimes label passages overconfidently, so treat the Credibility Report as guidance, not authoritative proof. If you need authoritative fact verification, prefer a skill that explicitly uses vetted web APIs or human expert review.

Finally, if you later see this skill ask for environment variables, external URLs, or to run system commands, stop and re-evaluate: those would be unexpected for this skill and would raise concern.

Like a lobster shell, security has layers — review code before you run it.

latest: vk975xfme7bneqc0t90kf7vxzj98477yh

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🔍 Clawdis
