Token Reduction Engine

Pass. Audited by VirusTotal on May 9, 2026.

Overview

Type: OpenClaw Skill
Name: certainlogic-tre
Version: 1.0.1

The Token Reduction Engine (TRE) is a utility that caches LLM responses locally to reduce API costs and latency. The bundle includes a 'Hallucination Guard' (hallucination_detector.py) that uses regular expressions to identify uncertain or speculative language, preventing low-confidence answers from being cached. There are discrepancies between the documentation and the implementation (e.g., the 'configure' function and intent filtering described in SKILL.md and API.md are missing from tre.py), but these appear to be unintentional omissions or development artifacts rather than signs of malicious intent. The code performs local file operations for persistence in a dedicated directory (~/.tre) and includes an optional integration with a local service at 127.0.0.1:8000, which is consistent with the stated purpose.
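To make the caching pattern described above concrete, here is a minimal sketch of a locally persisted answer cache keyed by SHA-256 of the query. The class and function names (AnswerCache, query_key) are illustrative assumptions, not the actual tre.py API; only the filename "answer_cache.json" and the SHA-256 keying come from the reviewed material.

```python
import hashlib
import json
import os

def query_key(query: str) -> str:
    """Derive a stable cache key from the query text (SHA-256 hex digest)."""
    return hashlib.sha256(query.encode("utf-8")).hexdigest()

class AnswerCache:
    """Illustrative sketch: JSON-persisted LLM answer cache in a local directory."""

    def __init__(self, cache_dir: str):
        os.makedirs(cache_dir, exist_ok=True)
        self.path = os.path.join(cache_dir, "answer_cache.json")
        self._cache = {}
        # Auto-load any previously persisted entries, as tre.py does on import.
        if os.path.exists(self.path):
            with open(self.path, "r", encoding="utf-8") as f:
                self._cache = json.load(f)

    def get(self, query: str):
        return self._cache.get(query_key(query))

    def put(self, query: str, answer: str) -> None:
        self._cache[query_key(query)] = answer
        with open(self.path, "w", encoding="utf-8") as f:
            json.dump(self._cache, f)
```

Note that, as with the reviewed code, nothing here validates loaded entries before trusting them; the later findings address that gap.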

Findings (3)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

Private or incorrect cached answers could persist and be returned later as if they were reliable.

Why it was flagged

Cached LLM answers are written to a local JSON file and reloaded automatically. This is purpose-aligned for caching, but it means sensitive, stale, or locally modified answers can be reused in future sessions, and the included code does not show integrity/source checks before trusting loaded entries.

Skill content

CACHE_PERSISTENCE_FILE = os.path.join(CACHE_DATA_DIR, "answer_cache.json")
...
_save_cache_to_disk()
...
# Auto-load persisted cache on module import
...
_load_cache_from_disk()
Recommendation

Treat the cache as persistent local data: avoid caching sensitive answers, inspect or delete the cache when needed, restrict local file access, and add actual integrity/source validation before using it as a trusted knowledge store.
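One way to follow the recommendation above is to add a per-entry authentication tag so that tampered or foreign cache entries are rejected on read. This is a minimal HMAC-based sketch of such a check; it is not present in tre.py, and the key handling shown (a module-level constant) is a placeholder assumption; a real deployment would keep the key outside the cache file.

```python
import hashlib
import hmac

# Placeholder key for illustration; store a real key outside the cache file.
SECRET_KEY = b"local-integrity-key"

def seal(answer: str) -> dict:
    """Wrap an answer with an HMAC-SHA256 tag before persisting it."""
    tag = hmac.new(SECRET_KEY, answer.encode("utf-8"), hashlib.sha256).hexdigest()
    return {"answer": answer, "hmac": tag}

def verify(entry: dict):
    """Return the answer only if its tag matches; otherwise return None."""
    expected = hmac.new(
        SECRET_KEY, entry.get("answer", "").encode("utf-8"), hashlib.sha256
    ).hexdigest()
    if hmac.compare_digest(expected, entry.get("hmac", "")):
        return entry["answer"]
    return None  # tampered or untrusted entry: do not reuse
```

With this shape, a locally modified answer_cache.json entry fails verification instead of being silently served back as a trusted answer.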

What this means

A user may rely on TRE as an agent safety gate or tamper-proof cache when those protections are not actually present in the reviewed code.

Why it was flagged

These user-facing safety claims are stronger than the provided implementation supports. The source uses SHA-256 as a query key and implements answer caching, but it does not show command/intent enforcement or tamper rejection for cache entries.

Skill content

Forbidden command list — "brain.delete_brain", "brain.purge" are blocked before execution
...
SHA-256 write verification — any tampered cache entry is rejected on read
Recommendation

Do not rely on this skill to block dangerous tool commands or provide tamper-proof facts unless those controls are implemented and tested; the documentation should be narrowed or the missing protections should be added.
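For reference, the command gate that the documentation claims (but the reviewed tre.py does not contain) could be as small as the sketch below. The function name and matching rule are assumptions for illustration; only the two forbidden command names come from the documentation excerpt.

```python
# Hypothetical sketch of the documented-but-missing forbidden-command gate.
# The command names below are quoted from the skill's documentation.
FORBIDDEN_COMMANDS = {"brain.delete_brain", "brain.purge"}

def is_allowed(command: str) -> bool:
    """Block a tool call whose base name is on the forbidden list.

    Assumes commands look like 'namespace.action' or 'namespace.action(args)'.
    """
    base = command.split("(", 1)[0].strip()
    return base not in FORBIDDEN_COMMANDS
```

Even a check this simple must actually be wired into the execution path before any documentation may claim that dangerous commands "are blocked before execution".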

What this means

If this helper is used, cached content may be persisted outside TRE in a local Brain service.

Why it was flagged

The included Brain API helper can send query snippets and answers to a local facts service. This is consistent with the advertised Company Brain integration and is not called by the main cache path shown, but users should be aware that it can copy cached content into another local knowledge store.

Skill content

requests.post('http://127.0.0.1:8000/facts', json={
    'key': query[:100],
    'type': 'string',
    'value': answer[:5000],
    'source': 'tre_answer_cache'
}, timeout=1)
Recommendation

Use the Brain integration only with a trusted local service, and avoid sending sensitive answers unless you understand the receiving service’s storage and access controls.
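A caller can reduce the risk noted above by screening answers before handing them to the Brain helper. The sketch below builds the same payload shape as the snippet quoted earlier but refuses answers that match a crude secret-like pattern; the build_fact_payload function and the SENSITIVE regex are assumptions for illustration, not part of TRE.

```python
import re

# Crude illustrative pattern for secret-like content; tune for real use.
SENSITIVE = re.compile(r"(api[_-]?key|password|secret)\s*[:=]\s*\S+", re.IGNORECASE)

def build_fact_payload(query: str, answer: str):
    """Build the Brain facts payload, or None if the answer looks sensitive.

    Field names and truncation lengths mirror the helper quoted above.
    """
    if SENSITIVE.search(answer):
        return None  # skip pushing secret-looking answers to the facts service
    return {
        "key": query[:100],
        "type": "string",
        "value": answer[:5000],
        "source": "tre_answer_cache",
    }
```

The actual POST (to http://127.0.0.1:8000/facts with timeout=1) would then only run when the payload is non-None, keeping secret-looking answers out of the second knowledge store.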