CertainLogic Verifier - Hallucination Guard

v1.0.0

Install, configure, and use CertainLogic Verifier (hallucination‑guard) – deterministic AI verification middleware that catches hallucinations before they re...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for certainlogicai/certainlogic-verifier.

Prompt preview (Install & Setup):
Install the skill "CertainLogic Verifier - Hallucination Guard" (certainlogicai/certainlogic-verifier) from ClawHub.
Skill page: https://clawhub.ai/certainlogicai/certainlogic-verifier
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install certainlogic-verifier

ClawHub CLI


npx clawhub@latest install certainlogic-verifier
Security Scan
Capability signals

  • Crypto
  • Can make purchases
  • Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description match the provided artifacts: SKILL.md, API docs, sample facts, integration guides, docker-compose, and an installer script all describe a local verifier service that validates LLM responses against a facts DB and provides caching/audit logs. Required resources and integrations (local HTTP endpoints, optional OpenRouter fallback) are coherent with that purpose.
Instruction Scope
Instructions are narrowly scoped to installing and running a local web service, populating a facts DB, and calling local endpoints. They do not request access to unrelated system files or credentials. One minor scope note: the docs advertise a one‑line installer (curl | bash) pattern in the install.sh comment and provide a script that clones and pip‑installs the repo — this is normal for self‑hosted projects but gives the operator discretion to run arbitrary code from the referenced repository, so inspect before executing.
Install Mechanism
No platform installer declared in registry (instruction‑only), but the SKILL.md and scripts instruct git‑cloning a GitHub repo and running pip install -r requirements.txt (and include docker/docker‑compose options). GitHub is a common release host, so this is expected, but installing packages from an unreviewed requirements.txt and running the repo code executes remote code — review requirements.txt, Dockerfiles, and main application code before installing in production.
Credentials
Registry metadata listed no required env vars, but the documentation references a small set of environment variables (PRODUCT_MODE, LOG_LEVEL, CACHE_DIR) and an optional OPENROUTER_API_KEY used only for cache‑miss warmup/fallback. That key is proportional to the advertised 'warm‑up using OpenRouter' feature, but it is sensitive — only provide it if you intend to enable that functionality and run the service in a trusted network. Docker examples include a placeholder DB password ('changeme') — replace defaults in production.
Persistence & Privilege
The skill does not request always:true, does not require modifying other skills, and does not declare persistent platform‑level privileges. It is a self‑hosted service the operator runs; nothing in the package auto‑enables itself across other agent configs.
Assessment
This package appears coherent with a self‑hosted hallucination‑guard: it clones a GitHub repo, installs Python deps, and runs a local HTTP service that your agents call. Before installing: (1) inspect the repository (requirements.txt, Dockerfile, main app) and verify the GitHub project exists and is trustworthy; (2) avoid piping unknown curl output into bash — clone and review first; (3) run in an isolated environment (container or VM) and limit network access if you want an air‑gapped setup; (4) do not provide sensitive credentials unless you need the OpenRouter fallback (OPENROUTER_API_KEY) and trust that service; (5) replace default passwords (e.g., postgres changeme) and audit the audit_log.jsonl storage/rotation. If you want higher assurance, ask the maintainer for a signed release or a reproducible build and confirm the repo URL/ownership before deployment.

Like a lobster shell, security has layers — review code before you run it.

latest: vk978tjk6hejscqxzcqahp9bhpd85ckcf
69 downloads · 0 stars · 1 version · Updated 4d ago
v1.0.0 · MIT-0

Hallucination Guard – CertainLogic Verifier

Overview

CertainLogic Verifier is an open‑source, self‑hosted middleware layer that sits between your LLM calls and your application. It validates every AI response against a verified facts database, flags hallucinations, caches verified answers (bypassing the LLM), and provides cryptographic audit logs.

Key capabilities:

  • 99%+ hallucination block rate – rule‑based checks + TF‑IDF memory search against your facts_db
  • 85‑98% token savings – semantic cache hits skip the LLM entirely
  • Self‑hosted & air‑gapped – nothing leaves your infrastructure; ready for HIPAA/GDPR/SOC2/FedRAMP
  • MIT licensed – no proprietary lock‑in; inspect every validation rule
  • Deterministic grounding – same query → same verified answer, every time
  • Cryptographic audit logs – SHA‑256 chained JSONL for compliance

Quick Start (2‑Minute Install)

# Clone the repository
git clone https://github.com/CertainLogicAI/hallucination-guard
cd hallucination-guard

# Set up Python environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt

# Start the service
uvicorn main:app --host 0.0.0.0 --port 8000

Verify it's working:

curl -X POST http://localhost:8000/validate \
  -H "Content-Type: application/json" \
  -d '{"query": "What is the price of GPT-5?", "response": "$200/month"}'

Installation Options

1. Docker (Recommended for Production)

docker build -t hallucination-guard .
docker run -p 8000:8000 hallucination-guard

2. Kubernetes/Helm

See deploy/helm/ in the repository for production‑ready Helm charts.

3. Systemd Service

A sample systemd unit file is included at deploy/systemd/hallucination-guard.service.

Configuration

Facts Database

The verifier checks responses against facts_db.json. Populate it with your domain‑specific verified facts.

Example entry:

{
  "fact": "Python was created in 1991 by Guido van Rossum",
  "category": "programming",
  "source": "official Python history",
  "verified_at": "2026-04-20"
}

Adding facts:

  • Manually edit facts_db.json
  • Use the /facts/add endpoint (POST with JSON)
  • Bulk‑load from documents via the /warming/extract endpoint
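
The entry shape for `/facts/add` can be sketched from the example above. A minimal Python sketch for building such a payload (field names are taken from the sample entry; the exact request schema may differ, so check references/api-reference.md):

```python
import json

def make_fact(fact, category, source, verified_at):
    """Build a facts_db entry in the shape of the example above."""
    return {
        "fact": fact,
        "category": category,
        "source": source,
        "verified_at": verified_at,
    }

entry = make_fact(
    "Python was created in 1991 by Guido van Rossum",
    "programming",
    "official Python history",
    "2026-04-20",
)

# With the service running locally, the same payload could be posted:
#   requests.post("http://localhost:8000/facts/add", json=entry)
print(json.dumps(entry, indent=2))
```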

Environment Variables

Set these in .env or as environment variables:

PRODUCT_MODE=coder           # coder|agent (determines rate limits)
OPENROUTER_API_KEY=your_key  # Optional: enables cache-miss fallback and warm-up
LOG_LEVEL=INFO               # DEBUG|INFO|WARNING|ERROR
CACHE_DIR=./cache            # Persistent cache storage
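
A service reading these variables would typically fall back to sensible defaults when they are unset. A sketch using only the standard library (defaults mirror the documented values; this is not the verifier's actual config loader):

```python
import os

# Defaults mirror the documented values; OPENROUTER_API_KEY stays unset
# unless you opt into the cache-miss fallback.
PRODUCT_MODE = os.environ.get("PRODUCT_MODE", "coder")
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")
CACHE_DIR = os.environ.get("CACHE_DIR", "./cache")
OPENROUTER_API_KEY = os.environ.get("OPENROUTER_API_KEY")  # optional

print(PRODUCT_MODE, LOG_LEVEL, CACHE_DIR)
```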

Usage

Validating a Single Response

import requests

response = requests.post(
    "http://localhost:8000/validate",
    json={
        "query": "What year was Python created?",
        "response": "Python was created in 1991."
    }
)
print(response.json())

Integrating with AI Agent Pipelines

Place the verifier between your LLM call and your application logic:

import requests

VERIFIER = "http://localhost:8000"

def get_ai_response(query):
    # 1. Serve verified answers straight from the cache when possible
    cache_check = requests.post(f"{VERIFIER}/cache/check",
                                json={"query": query})
    if cache_check.json().get("cached"):
        return cache_check.json()["response"]

    # 2. Cache miss: call the LLM (call_llm is your own client wrapper)
    llm_response = call_llm(query)

    # 3. Validate the response against the facts DB
    validation = requests.post(f"{VERIFIER}/validate",
                               json={"query": query, "response": llm_response})

    if validation.json().get("valid"):
        return llm_response
    # Handle hallucination
    raise ValueError(f"Hallucination detected: {validation.json()}")

Cache Management

  • View cache stats: GET /cache/stats
  • Clear cache: POST /cache/clear
  • Warm cache: POST /warming/run (requires OpenRouter API key)

Advanced Features

Deterministic Memory Search

The verifier uses TF‑IDF similarity to match queries against known facts, even with paraphrasing.
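
The exact scoring code is not reproduced here, but the idea behind TF-IDF matching can be sketched in pure Python: weight terms by frequency and rarity, then rank known facts by cosine similarity against the query. (Illustrative only; the verifier's own tokenizer, smoothing, and thresholds may differ.)

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def tfidf_vectors(docs):
    """TF-IDF weights with smoothed IDF; one sparse vector (dict) per document."""
    tokenized = [tokenize(d) for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))
    n = len(docs)
    return [
        {t: (c / len(doc)) * math.log((1 + n) / (1 + df[t]))
         for t, c in Counter(doc).items()}
        for doc in tokenized
    ]

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

facts = [
    "Python was created in 1991 by Guido van Rossum",
    "The Eiffel Tower is 330 metres tall",
]
query = "What year was Python created?"

vectors = tfidf_vectors(facts + [query])
scores = [cosine(vectors[-1], v) for v in vectors[:-1]]
best_fact = facts[scores.index(max(scores))]
```

Note that the paraphrased query still lands on the right fact because "python" and "created" carry most of the weight.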

Uncertainty Detection

Responses containing "I think", "might be", "not sure" are penalized and flagged for review.
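
Detecting these hedge phrases reduces to a case-insensitive pattern scan. A minimal sketch using only the phrases listed above (a real deployment would tune and extend this list):

```python
import re

# Hedge phrases from the documentation above.
HEDGE_PATTERNS = [r"\bI think\b", r"\bmight be\b", r"\bnot sure\b"]

def uncertainty_flags(response):
    """Return the hedge patterns found in a response, case-insensitively."""
    return [p for p in HEDGE_PATTERNS if re.search(p, response, re.IGNORECASE)]

flags = uncertainty_flags("I think the answer might be 42.")
```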

Numeric‑Unit Matching

Checks that numeric values match known facts with correct units (e.g., "5 km" vs "5 miles").
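
One way such a check could work: extract every number-plus-unit pair from the response and require that each also appears in the matched fact. A sketch with a deliberately tiny unit vocabulary (the verifier's real matcher presumably normalises far more units and conversions):

```python
import re

# Toy unit list for illustration only.
PAIR_RE = re.compile(r"(\d+(?:\.\d+)?)\s*(km|miles|kg|lb)\b", re.IGNORECASE)

def number_unit_pairs(text):
    return {(float(n), u.lower()) for n, u in PAIR_RE.findall(text)}

def units_consistent(response, fact):
    """True only if every number+unit pair in the response also appears in the fact."""
    pairs = number_unit_pairs(response)
    return pairs <= number_unit_pairs(fact) if pairs else True
```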

Audit Logs

All validations are logged to audit_log.jsonl with SHA‑256 chaining for tamper evidence.
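
Hash chaining means each entry commits to the hash of the previous one, so editing or reordering any record invalidates everything after it. A minimal sketch of the pattern (entry layout and genesis value are assumptions, not the verifier's actual on-disk format):

```python
import hashlib
import json

GENESIS = "0" * 64  # assumed previous-hash value for the first entry

def entry_hash(record, prev):
    # sort_keys gives a canonical JSON serialisation to hash.
    payload = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_entry(log, record):
    """Append a record whose hash commits to the previous entry's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    log.append({"record": record, "prev": prev, "hash": entry_hash(record, prev)})

def verify_chain(log):
    """Recompute every hash; any edited or reordered entry breaks the chain."""
    prev = GENESIS
    for entry in log:
        if entry["prev"] != prev or entry["hash"] != entry_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"query": "q1", "valid": True})
append_entry(log, {"query": "q2", "valid": False})
```

Writing each entry as one JSON line yields exactly the JSONL shape described for audit_log.jsonl.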

Resources

scripts/

  • install.sh – One‑line installer for Linux/macOS
  • docker-compose.yml – Multi‑service setup with PostgreSQL for audit logs

references/

  • api-reference.md – Complete API documentation
  • facts-schema.md – Facts database schema and validation rules
  • integration-guide.md – Step‑by‑step integration with popular AI frameworks

assets/

  • sample-facts.json – Example facts database with 50+ verified entries
  • docker-compose.prod.yml – Production‑ready Docker Compose configuration

Support & Community

License

MIT – see LICENSE in the repository.
