Hallucination Guard — 4-Layer AI Fabrication Defense

Pass. Audited by ClawScan on Mar 3, 2026.

Overview

The skill's instructions, required capabilities, and scope are internally consistent for a hallucination-detection and audit tool, and it requests no unrelated credentials or installs. Operators should still weigh the data-exposure and privilege considerations below before enabling it.

This skill is coherent and appears to do what it claims: verify agent claims against tool output, spawn auditor subagents, and monitor session history. Before installing, consider the following:

1. Evidence can include file contents. Avoid feeding secrets or sensitive logs to spawned auditors, especially auditors backed by external models.
2. The skill expects platform privileges (sessions_history, sessions_spawn, subagent steering/killing, exec/read). Grant these only to trusted skills and agents.
3. Limit which models auditors may use, and log auditor outputs so that any unexpected disclosure can be detected.
4. In high-security environments, enforce a policy that strips or redacts sensitive data before running L2 audits or sharing evidence with subagents.

Overall: functionally coherent, but pay attention to data-exposure and platform-privilege controls when you enable it.
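The redaction recommendation above can be sketched as a pre-audit filter. This is a minimal illustration, not part of the skill itself: the function name `redact_evidence` and the secret patterns are hypothetical, and a real deployment should rely on a maintained secret scanner rather than a handful of regexes.

```python
import re

# Illustrative patterns for common secret shapes (assumption: these cover
# only a few obvious formats; a production scanner would use a curated ruleset).
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*\S+"),
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),  # AWS access key ID shape
]

def redact_evidence(text: str, placeholder: str = "[REDACTED]") -> str:
    """Strip likely secrets from evidence before it is shared with an auditor subagent."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

# Example: a log line with an embedded credential is sanitized before the audit.
evidence = "deploy log: api_key=sk-abc123 finished in 2.4s"
print(redact_evidence(evidence))  # → deploy log: [REDACTED] finished in 2.4s
```

A filter like this would run between evidence collection and the point where evidence is handed to a spawned auditor, so secrets never leave the local session even when the auditor uses an external model.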