Fact Checker

v1.0.4

Verify claims, numbers, and facts in markdown drafts against source data. Use when: reviewing blog posts, reports, or documentation for accuracy before publishing.

1 · 564 · 12 current · 13 all-time
by Nissan Dookeran (@nissan)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The SKILL.md and scripts both state that verification comes from FINDINGS.md, local score JSON files, memory/*.md, git history under projects/hybrid-control-plane, and a localhost /status API. The code reads exactly those paths and calls http://localhost:8765/status and git log; these requirements match the stated purpose.
Instruction Scope
Runtime instructions are narrow: run the bundled Python script on a draft file, parse its output, summarize contradictions, and suggest corrections. The script only reads local workspace files, score JSONs, and memory logs, and runs a local git command. It does not attempt to read unrelated system config or contact external endpoints.
Install Mechanism
There is no install spec (this is an instruction-only skill plus bundled scripts). The SKILL.md lists python3 as a required binary, which is appropriate. Nothing is downloaded from the network or installed to disk by the skill itself.
Credentials
The skill does not request environment variables or external credentials. It reads local project files and a localhost API which is proportionate to fact-checking claims against local test/run data. Note: those local files and memory logs may contain sensitive project information — access is justified for the skill's purpose but should be considered by the user.
Persistence & Privilege
The skill does not request always:true or any elevated persistence. It runs as-needed and uses subprocess exec (git) and a local HTTP request; these are normal for this function.
Assessment
This skill appears to do what it claims: it reads files under projects/hybrid-control-plane (FINDINGS.md, data/scores/*.json, CHANGELOG.md) and memory/*.md, runs git log in that project, and queries a local service at http://localhost:8765/status. Before installing or using it:

  1. Confirm you trust the local /status service and the project files it will read — they may contain sensitive data.
  2. Review the bundled scripts yourself (they are included); the code is readable and only performs local reads, a git log, and a localhost HTTP GET.
  3. Be aware the script uses the requests library and a user-specific shebang path in the file header (non-portable but not harmful).
  4. Run the script on non-sensitive drafts or in an isolated environment if you are unsure.

Minor notes: the SKILL.md version (1.0.3) differs from the registry version (1.0.4), and the script's shebang points to a local /Users/loki pyenv path — these are portability and information-hygiene issues, not security blockers.

Like a lobster shell, security has layers — review code before you run it.

latest: vk973740ypw07w465n58169qhsn83rksq


Runtime requirements

🔍 Clawdis

SKILL.md

Last used: 2026-03-24 · Memory references: 1 · Status: Active

Fact-Checker: Verify Markdown Claims Against Source Data

Given a markdown draft file, this skill extracts every verifiable claim (numbers, dates, model names, scores, causal statements) and cross-references them against available source data to produce a verification report.

Usage

python3 skills/fact-checker/scripts/fact_check.py <draft.md>
python3 skills/fact-checker/scripts/fact_check.py <draft.md> --output report.md

What It Checks

Claim types extracted

  • Numeric claims — integers and floats with surrounding context
  • Model references — model/task (phi4/classify) and model:tag (phi4:latest)
  • Dates — YYYY-MM-DD format
  • Score values — decimal scores like 0.923, 1.000
  • Percentages — 42%, 95.3%
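As a rough illustration of how extraction for these claim types might work — the bundled script's actual patterns are not shown on this page, so these regexes are assumptions — a minimal sketch:

```python
import re

# Hypothetical patterns approximating the claim types listed above;
# the real fact_check.py may use different regexes.
CLAIM_PATTERNS = {
    "model_ref": re.compile(r"\b([a-z][\w.-]*)[/:]([a-z][\w.-]*)\b"),  # phi4/classify, phi4:latest
    "date": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),                      # YYYY-MM-DD
    "percentage": re.compile(r"\b\d+(?:\.\d+)?%"),                     # 42%, 95.3%
    "score": re.compile(r"\b[01]\.\d{3}\b"),                           # 0.923, 1.000
    "number": re.compile(r"\b\d+(?:\.\d+)?\b"),                        # bare numerics
}

def extract_claims(text: str) -> list[tuple[str, str]]:
    """Return (claim_type, matched_text) pairs; one span may match several types."""
    claims = []
    for kind, pattern in CLAIM_PATTERNS.items():
        for match in pattern.finditer(text):
            claims.append((kind, match.group(0)))
    return claims
```

Each match would then be carried forward with its surrounding context for cross-referencing against the sources below.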

Source data consulted (in priority order)

  1. projects/hybrid-control-plane/FINDINGS.md — primary source of truth
  2. Control Plane /status API at http://localhost:8765/status — live scored run data
  3. projects/hybrid-control-plane/data/scores/*.json — raw scored run files on disk
  4. memory/*.md — daily logs with timestamps and decisions
  5. git log in projects/hybrid-control-plane/ — commit hashes, dates, authorship
  6. projects/hybrid-control-plane/CHANGELOG.md — sprint history
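The priority order implies a live-first, disk-fallback lookup, which matches step 5 of the agent instructions. A minimal sketch, assuming the URL and paths from the list above (the JSON shapes are illustrative, not the service's documented schema):

```python
import json
import pathlib
import urllib.request

STATUS_URL = "http://localhost:8765/status"  # source 2 above
SCORES_DIR = pathlib.Path("projects/hybrid-control-plane/data/scores")  # source 3

def load_run_data(timeout: float = 2.0) -> dict:
    """Prefer the live /status API; if the local service is unreachable
    (or returns malformed JSON), fall back to raw score files on disk."""
    try:
        with urllib.request.urlopen(STATUS_URL, timeout=timeout) as resp:
            return {"runs": json.load(resp), "source": "status_api"}
    except (OSError, ValueError):
        runs = [json.loads(p.read_text()) for p in sorted(SCORES_DIR.glob("*.json"))]
        return {"runs": runs, "source": "disk"}
```

Tagging the result with its source lets the report note when evidence came from disk rather than live data.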

Output Format

Each claim produces one line:

✅ CONFIRMED:    "phi4/classify scored 1.000" → /status API: phi4_latest_classify mean=1.000 n=23
⚠️ UNVERIFIABLE: "this took about a day" → no timestamp correlation found in logs
❌ CONTRADICTED: "909 runs" → /status API shows 958 total runs (stale number?)

Followed by a summary count of confirmed / unverifiable / contradicted claims.
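The per-claim lines and the closing tally can be produced in a few lines; this sketch mirrors the format shown above (the function name and verdict keys are illustrative):

```python
from collections import Counter

VERDICT_ICONS = {"confirmed": "✅", "unverifiable": "⚠️", "contradicted": "❌"}

def format_report(results: list[tuple[str, str, str]]) -> str:
    """results: (verdict, claim, evidence) triples. Emit one line per claim,
    then the confirmed / unverifiable / contradicted tally."""
    lines = [
        f"{VERDICT_ICONS[v]} {v.upper()}: \"{claim}\" → {evidence}"
        for v, claim, evidence in results
    ]
    tally = Counter(v for v, _, _ in results)
    lines.append(
        f"\n{tally['confirmed']} confirmed / "
        f"{tally['unverifiable']} unverifiable / "
        f"{tally['contradicted']} contradicted"
    )
    return "\n".join(lines)
```

Counter returns 0 for missing verdicts, so the tally line is always complete even when a category never occurs.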

When To Use This Skill

When asked to "fact-check" or "verify" a draft blog post, report, or documentation file — run this skill and present the report to the user. If any claims are ❌ CONTRADICTED, flag them prominently and suggest corrections.

Instructions for Agent

  1. Run the script with the path to the draft file.
  2. Parse the output report.
  3. Summarise key findings — especially any ❌ CONTRADICTED claims.
  4. Suggest specific corrections with the correct values from the evidence.
  5. If the /status API is unavailable, note it and rely on FINDINGS.md + score files.
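The agent steps above amount to a thin wrapper around the script (the script path comes from the Usage section; the helper names are hypothetical):

```python
import subprocess

SCRIPT = "skills/fact-checker/scripts/fact_check.py"  # path from the Usage section

def run_fact_check(draft_path: str) -> str:
    """Steps 1-2: run the bundled script on the draft and capture its report."""
    proc = subprocess.run(
        ["python3", SCRIPT, draft_path],
        capture_output=True, text=True, check=False,
    )
    return proc.stdout

def flag_contradictions(report: str) -> list[str]:
    """Step 3: surface any ❌ CONTRADICTED lines for prominent flagging."""
    return [line for line in report.splitlines() if "❌ CONTRADICTED" in line]
```

The flagged lines already carry the evidence values, which feeds directly into step 4's suggested corrections.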

Files

3 total
