{
  "skill": {
    "slug": "dcl-semantic-drift-guard",
    "displayName": "DCL Semantic Drift Guard — Hallucination & Context Drift Detector",
    "summary": "Use this skill to detect semantic hallucinations and context drift in LLM outputs. Triggers when an agent or pipeline needs to verify that a generated respon...",
    "tags": { "latest": "1.0.0" },
    "stats": {
      "comments": 0,
      "downloads": 159,
      "installsAllTime": 0,
      "installsCurrent": 0,
      "stars": 0,
      "versions": 1
    },
    "createdAt": 1775720639704,
    "updatedAt": 1775721411733
  },
  "latestVersion": {
    "version": "1.0.0",
    "createdAt": 1775720639704,
    "changelog": "Initial release of DCL Semantic Drift Guard — Hallucination & Context Drift Detector\n\n- Compares LLM output with source documents or RAG-retrieved knowledge, detecting unsupported, fabricated, or omitted claims.\n- Supports both inline context and knowledge base source modes.\n- Provides configurable strictness levels for different use cases: strict, balanced, and lenient.\n- Outputs a tamper-evident audit record with drift details, verdict (IN_COMMIT or HALLUCINATION_DRIFT), and cryptographic hash.\n\nPart of the Leibniz Layer™ verification suite — designed to compose with DCL Policy Enforcer and DCL Sentinel Trace for end-to-end tamper-evident AI output verification.",
    "license": "MIT-0"
  },
  "metadata": null,
  "owner": {
    "handle": "daririnch",
    "userId": "s177qzsztsa0qx35zmdc3g65s184c673",
    "displayName": "Dari Rinch",
    "image": "https://avatars.githubusercontent.com/u/161933416?v=4"
  },
  "moderation": null
}