Install

openclaw skills install fact-checker

Verify claims, numbers, and facts in markdown drafts against source data. Use when: reviewing blog posts, reports, or documentation for accuracy before publishing.
Given a markdown draft file, this skill extracts every verifiable claim (numbers, dates, model names, scores, causal statements) and cross-references them against available source data to produce a verification report.
Usage:

python3 skills/fact-checker/scripts/fact_check.py <draft.md>
python3 skills/fact-checker/scripts/fact_check.py <draft.md> --output report.md
Claim types recognized:

- model/task (phi4/classify) and model:tag (phi4:latest)
- dates in YYYY-MM-DD format
- decimal scores (0.923, 1.000)
- percentages (42%, 95.3%)

Data sources checked:

- projects/hybrid-control-plane/FINDINGS.md — primary source of truth
- /status API at http://localhost:8765/status — live scored run data
- projects/hybrid-control-plane/data/scores/*.json — raw scored run files on disk
- memory/*.md — daily logs with timestamps and decisions
- git log in projects/hybrid-control-plane/ — commit hashes, dates, authorship
- projects/hybrid-control-plane/CHANGELOG.md — sprint history

Each claim produces one line:
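The claim types above could be matched with simple regexes. A minimal sketch follows; the pattern names and the `extract_claims` helper are illustrative assumptions, not the skill's actual implementation:

```python
import re

# Hypothetical patterns mirroring the claim types listed above;
# the real fact_check.py may tokenize drafts differently.
CLAIM_PATTERNS = {
    "model_task": re.compile(r"\b[a-z0-9]+/[a-z0-9_]+\b"),  # e.g. phi4/classify
    "model_tag":  re.compile(r"\b[a-z0-9]+:[a-z0-9.]+\b"),  # e.g. phi4:latest
    "date":       re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),     # YYYY-MM-DD
    "score":      re.compile(r"\b\d\.\d{3}\b"),             # 0.923, 1.000
    "percent":    re.compile(r"\b\d+(?:\.\d+)?%"),          # 42%, 95.3%
}

def extract_claims(text):
    """Return (kind, matched_text) pairs for every verifiable token."""
    claims = []
    for kind, pattern in CLAIM_PATTERNS.items():
        for match in pattern.finditer(text):
            claims.append((kind, match.group(0)))
    return claims
```

Each extracted pair would then be looked up against the data sources listed above.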
✅ CONFIRMED: "phi4/classify scored 1.000" → /status API: phi4_latest_classify mean=1.000 n=23
⚠️ UNVERIFIABLE: "this took about a day" → no timestamp correlation found in logs
❌ CONTRADICTED: "909 runs" → /status API shows 958 total runs (stale number?)
Followed by a summary count of confirmed / unverifiable / contradicted claims.
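The per-claim line and the trailing summary could be assembled as sketched below; the status names and field layout are assumptions inferred from the example output, not the script's internals:

```python
from collections import Counter

# Icons match the three verdicts shown in the example output above.
STATUS_ICONS = {"CONFIRMED": "✅", "UNVERIFIABLE": "⚠️", "CONTRADICTED": "❌"}

def report_line(status, claim, evidence):
    """One output line per claim, in the format shown above."""
    return f'{STATUS_ICONS[status]} {status}: "{claim}" → {evidence}'

def summary(results):
    """Trailing count of confirmed / unverifiable / contradicted claims."""
    counts = Counter(status for status, _, _ in results)
    return (f"{counts['CONFIRMED']} confirmed, "
            f"{counts['UNVERIFIABLE']} unverifiable, "
            f"{counts['CONTRADICTED']} contradicted")
```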
When asked to "fact-check" or "verify" a draft blog post, report, or documentation file — run this skill and present the report to the user. If any claims are ❌ CONTRADICTED, flag them prominently and suggest corrections.
If the /status API is unavailable, note it and rely on FINDINGS.md + score files.
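That fallback could look like the following sketch; the timeout value and the `load_run_data` helper are assumptions, with paths taken from the data-source list above:

```python
import glob
import json
import urllib.request
import urllib.error

def load_run_data(status_url="http://localhost:8765/status",
                  scores_glob="projects/hybrid-control-plane/data/scores/*.json"):
    """Prefer the live /status API; fall back to raw score files on disk."""
    try:
        with urllib.request.urlopen(status_url, timeout=2) as resp:
            return {"source": "status_api", "data": json.load(resp)}
    except (urllib.error.URLError, OSError):
        # API unavailable: note it and rely on FINDINGS.md + score files.
        runs = []
        for path in sorted(glob.glob(scores_glob)):
            with open(path) as f:
                runs.append(json.load(f))
        return {"source": "score_files", "data": runs}
```

The returned `source` field lets the report note which data backed each verdict.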