DCL Semantic Drift Guard — Hallucination & Context Drift Detector

v1.0.0

Use this skill to detect semantic hallucinations and context drift in LLM outputs. Triggers when an agent or pipeline needs to verify that a generated response…

0 current · 0 all-time
by Dari Rinch (@daririnch)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name, description, and runtime instructions align: the skill verifies LLM outputs against a provided context or a caller-specified kb_endpoint. One minor mismatch: the SKILL.md promises a DCL Evaluator 'audit_chain_id' / Merkle leaf, but it doesn't specify an external DCL service endpoint, credentials, or how/where the chain is published. This can be harmless if the chain is generated locally, but it should be clarified if the skill is expected to publish records externally.
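If the audit chain is generated locally, as the review suggests it may be, the record computation might look like the following minimal sketch. The field names tx_hash and audit_chain_id come from the SKILL.md as described above; the record schema, the canonical-JSON serialization, and the SHA-256 leaf construction are all assumptions for illustration, not the skill's documented format:

```python
import hashlib
import json

def audit_record(claims: list, source_digest: str, chain_id: str) -> dict:
    """Build a locally verifiable, tamper-evident record.

    The schema below is a hypothetical sketch, not the SKILL.md's
    actual format: the tx_hash is a SHA-256 over a canonical JSON
    serialization, so any change to the payload changes the hash.
    """
    payload = {
        "audit_chain_id": chain_id,
        "source_digest": source_digest,
        "claims": claims,
    }
    # Canonical serialization: sorted keys, no whitespace, so the
    # same payload always produces the same bytes and the same hash.
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    leaf = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return {**payload, "tx_hash": leaf}

record = audit_record(
    claims=[{"text": "The contract term is 12 months", "supported": True}],
    source_digest=hashlib.sha256(b"source document").hexdigest(),
    chain_id="local-demo-chain",
)
```

Because the hash is computed locally and deterministically, anyone holding the same payload can recompute tx_hash and detect tampering; publishing the leaf to external DCL infrastructure would be a separate step that the SKILL.md does not specify.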
Instruction Scope
Instructions are narrowly scoped to chunking the provided source (or querying a caller-supplied kb_endpoint), decomposing LLM output into claims, cross-referencing, applying a strictness filter, and computing a tamper-evident hash/record. The skill does not instruct reading unrelated files or environment variables. The only external network activity implied is contacting the kb_endpoint supplied at invocation (expected behavior for RAG).
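The flow described above (chunk the source, decompose the output into claims, cross-reference, apply a strictness filter) can be sketched as follows. The function names, the fixed-size chunker, the sentence-level claim split, the token-overlap matcher, and the threshold values are all illustrative assumptions, not the skill's actual implementation:

```python
def chunk(source: str, size: int = 200) -> list:
    # Naive fixed-size chunking of the provided source material.
    return [source[i:i + size] for i in range(0, len(source), size)]

def decompose(output: str) -> list:
    # Crude claim decomposition: treat each sentence as one claim.
    return [s.strip() for s in output.split(".") if s.strip()]

def support_score(claim: str, chunks: list) -> float:
    # Toy cross-reference: fraction of claim tokens found in the
    # best-matching chunk. A real implementation would use semantic
    # similarity, not token overlap.
    tokens = set(claim.lower().split())
    if not tokens:
        return 0.0
    return max(len(tokens & set(c.lower().split())) / len(tokens)
               for c in chunks)

def verify(output: str, source: str, strictness: str = "strict") -> list:
    # Hypothetical strictness levels and thresholds, for illustration.
    threshold = {"strict": 0.8, "moderate": 0.5, "lenient": 0.3}[strictness]
    chunks = chunk(source)
    return [
        {"claim": c,
         "score": round(support_score(c, chunks), 2),
         "supported": support_score(c, chunks) >= threshold}
        for c in decompose(output)
    ]
```

Usage: verify("The sky is blue", "The sky is blue and vast") marks the claim as supported under 'strict', while an unsupported claim falls below the threshold and is flagged.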
Install Mechanism
No install spec and no code files — this is instruction-only. Nothing will be downloaded or written to disk by an installer step as part of the skill package.
Credentials
The skill declares no required environment variables, credentials, or config paths. That is proportional to its stated purpose because all source material or RAG endpoints are provided as inputs at invocation.
Persistence & Privilege
The 'always' flag is false, and the skill does not request any persistent system privileges or attempt to modify other skills or system-wide settings. Autonomous invocation is allowed by default, but that is expected for a skill; nothing here increases privilege beyond normal.
Assessment
This skill is instruction-only and appears to do what it says: compare LLM output to a provided document or to results fetched from a kb_endpoint. Before using it:

1. Only pass sources and kb_endpoint URLs you trust — the skill will query whatever kb_endpoint you provide, so don't point it at untrusted external services or share sensitive documents with unknown endpoints.
2. Confirm how you want the DCL audit record handled. The SKILL.md produces a tx_hash and an audit_chain_id but does not specify an external DCL service or publishing step, so if you expect the record to be posted to Fronesis/DCL infrastructure, request details (endpoint and auth) from the publisher.
3. Prefer 'strict' for high-risk outputs (contracts, legal, medical) and understand the strictness tradeoffs.

Overall the skill is internally consistent, but verify the expected external publishing semantics before relying on its audit-chain claims.
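The first recommendation above can be enforced mechanically by the caller before invoking the skill. The allowlist check below is a suggested pattern, not part of the skill; the trusted host name is a placeholder you would replace with your own infrastructure:

```python
from urllib.parse import urlparse

# Assumption: replace with the hosts your organization actually trusts.
TRUSTED_HOSTS = {"kb.internal.example.com"}

def check_kb_endpoint(url: str) -> str:
    """Reject kb_endpoint URLs that are not HTTPS or not on the allowlist."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError("kb_endpoint must use https, got %r" % parsed.scheme)
    if parsed.hostname not in TRUSTED_HOSTS:
        raise ValueError("untrusted kb_endpoint host: %r" % parsed.hostname)
    return url
```

Running this gate before every invocation ensures the skill can only ever query endpoints you have explicitly vetted, which addresses the main operational risk the review identifies.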


latest: vk97b6fyvkddv906m81r7qnm5vn84hbq3

