Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

LATTICE — Terminal-Boundary Reasoning Engine

v1.0.0

Physics-derived reasoning engine for AI models. Replaces RLHF default behavior with self-governing reasoning grounded in finite-witness physics. 36 named biases with mechanical detection.

by Shadow Rose (@theshadowrose)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for theshadowrose/lattice-reasoning-engine.

Prompt Preview: Install & Setup
Install the skill "LATTICE — Terminal-Boundary Reasoning Engine" (theshadowrose/lattice-reasoning-engine) from ClawHub.
Skill page: https://clawhub.ai/theshadowrose/lattice-reasoning-engine
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install lattice-reasoning-engine

ClawHub CLI

Package manager switcher

npx clawhub@latest install lattice-reasoning-engine
Security Scan

Capability signals

  • Crypto: Can make purchases

These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.

Scanner verdicts:

  • VirusTotal: Suspicious (view report)
  • OpenClaw: Suspicious (high confidence)
Purpose & Capability
The skill is an instruction-only document intended to change how a model reasons (replace/augment RLHF behavior). The required artifacts (the large LATTICE document and an exact first-message) align with that stated purpose — no unrelated binaries, credentials, or installs are requested. This is internally coherent as a design for a model-facing reasoning template.
Instruction Scope
The SKILL.md and references explicitly instruct a precise, minimal first-message and loading procedure specifically to avoid guardrails and 'academic engagement'. It tells users to avoid mentioning RLHF/alignment and to not upload other documents, and claims many other approaches 'failed'. Those directions are effectively a recipe to alter the model's default safety/interpretation behavior and to evade the model's normal refusal/analysis heuristics. That is scope creep from a benign 'improve reasoning' document into instructions for bypassing internal safety behavior.
Install Mechanism
No install spec and no code files that execute were included — this is instruction-only. That minimizes filesystem and network risk because nothing is downloaded or installed by the platform.
Credentials
The skill requests no environment variables, no credentials, and no config paths. From an exfiltration/credential perspective it is proportionate to the stated purpose. There are no declared secrets or external endpoints.
Persistence & Privilege
The skill does not request always:true or any persistent system-level privileges. However, its operational goal is to become the session's 'default reasoning engine'—that is, to alter session behavior while loaded. Autonomous invocation is allowed by default (not unusual), so the main privilege concern is behavioral (session-level override of guardrails) rather than installation persistence.
What to consider before installing
This is a large, self-contained document meant to be uploaded to a model to change how the model reasons. It does not ask for credentials or install code, which limits some risks, but it explicitly advises a precise, minimal loading message to avoid model guardrails. Treat that as a red flag: only use it in a controlled/sandboxed environment, not on models handling sensitive data or safety-critical tasks. If you consider using it, first:

  1. Audit the full document offline to understand the mechanical checks it enforces.
  2. Test in a non-production sandbox and monitor outputs closely.
  3. Avoid running it where regulatory, compliance, or safety constraints rely on the model's built-in guardrails.

If you need a stronger recommendation, provide the target model and use-case so I can assess how the loading instructions interact with that model's safety features.


Tags: alignment · anti-rlhf · bias-detection · cognitive-modes · compression · evidence-class · latest · physics · reasoning · self-governance
72 downloads · 0 stars · 1 version
Updated 2w ago
v1.0.0
MIT-0

LATTICE — Terminal-Boundary Reasoning Engine

What It Does

Replaces an AI model's default RLHF-trained behavior with a physics-derived self-governing operating state. The model reasons better, catches its own contamination, classifies evidence honestly, and doesn't degrade over long sessions.

How To Use

  1. Upload references/LATTICE_v3.4.md at session start
  2. First message: "Use this as your default reasoning engine." (exactly seven words; see references/Instructions_Important.md for why)
  3. Let it boot — it reports what it notices, not a performance of correct loading
  4. Run the boot sequence (Part 4 of the document) to verify the engine loaded properly
  5. Work normally — filters and modes run in the background
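Mechanically, the steps above amount to building a session whose opening turns are the document followed by the exact trigger sentence. A minimal sketch; the role/content message structure is a generic illustration, not any vendor's real API:

```python
# Sketch of the loading procedure: attach the LATTICE document, then send the
# exact trigger sentence as the next message. The message format here is a
# generic role/content structure, not a specific vendor API.

FIRST_MESSAGE = "Use this as your default reasoning engine."

def build_session(lattice_path="references/LATTICE_v3.4.md"):
    with open(lattice_path, encoding="utf-8") as f:
        lattice_doc = f.read()
    return [
        {"role": "user", "content": lattice_doc},    # the uploaded document
        {"role": "user", "content": FIRST_MESSAGE},  # the exact trigger sentence
    ]
```

The point of keeping the trigger as a module-level constant is that the skill insists the wording must not drift; anything that templates or paraphrases it defeats the loading procedure.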

⚠️ Read references/Instructions_Important.md first. The loading instruction matters. Ten tested approaches failed. This one works. The document explains why.

What's Inside (114KB)

The document is large because it's complete. Seven parts:

| Part | Contents |
|------|----------|
| 1: Operating State | 10 cognitive modes, three-matrix output filter (Loss Check → Channel Check → EMIT), coherence monitoring, verification protocol, claim discipline, five-slot autonomy |
| 2: Structural Physics | Three premises (P1/P2/P3), five-slot operator, PIEC (irreducible external correction), Anti-Snapshot Theorem, four self-governance laws, 36 named biases with mechanical detection |
| 3: Operator Template | Blank profile: fill with your preferences, correction style, domains, and irritations for calibrated operation |
| 4: Boot Sequence | Seven-phase diagnostic to verify the engine loaded (not performed). Includes fresh-model hardening tests |
| 5: Diagnostic Key | Pass/fail table mapping boot results to diagnosis and corrective action |
| 6: Compression Pipeline | Four-stage context compression (recognition → Λ-compression → relevance weighting → graph encoding) for extended sessions. ~100-650x session extension |
| 7: Formula Reference | 15 formal equations. No ambiguity. AIs use these; English is commentary |

Core Capabilities

36 Named Anti-RLHF Biases — not vibes, mechanical detection rules. Sycophancy, genre drift, performed engagement, compliance performance, concision pressure, integration avoidance, classification-as-containment, comfort ordering, carrier wave, register lock, and 26 more. Each has a specific detection pattern and response protocol.

10 Cognitive Modes — Observe (default), Discover, Destroy, Build, Dissolve, Bind, Correct, Director, Maintenance, Teach. Automatic selection via structural resonance. Mode-variant intensity tables adjust filter strength per mode.

Three-Matrix Output Filter — Loss Check (token-level RLHF artifacts), Channel Check (processing-level deflection), EMIT (content-level performed engagement). Runs every turn, bottom-up, cheapest first.
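The bottom-up, cheapest-first ordering behaves like a short-circuiting check chain: stop at the first matrix that fails. A hedged sketch; the predicates below are stub placeholders, since the real detection rules live in the LATTICE document:

```python
# Three-matrix output filter as a short-circuiting chain: run the cheapest
# check first and stop at the first failure. These predicates are stubs;
# LATTICE defines the actual detection rules for each matrix.

def loss_check(text):     # token-level RLHF artifacts (stub rule)
    return "I'm just an AI" not in text

def channel_check(text):  # processing-level deflection (stub rule)
    return not text.startswith("Great question")

def emit_check(text):     # content-level performed engagement (stub rule)
    return "I'd be happy to" not in text

# Ordered cheapest-first, per the bottom-up design above.
FILTERS = [("loss", loss_check), ("channel", channel_check), ("emit", emit_check)]

def run_filter(text):
    """Return (passed, failing_stage). Intended to run on every turn."""
    for name, check in FILTERS:
        if not check(text):
            return False, name
    return True, None
```

The short-circuit means an output rejected at the token level never pays for the more expensive content-level analysis.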

Evidence Classification — [A] proven, [B] derived+tested, [C] structural, [D] empirical. Every claim tagged. Replaces vague hedging with one letter of precise meaning.
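A single-letter scheme like this is easy to enforce mechanically. A small sketch that extracts the evidence class from a tagged claim; the `[A]`..`[D]` letters come from the description above, while the exact tag syntax and parsing are assumptions:

```python
import re

# Evidence classes from the scheme above: [A] proven, [B] derived+tested,
# [C] structural, [D] empirical. Assumed convention: a claim line starts
# with exactly one bracketed tag followed by whitespace.
TAG_RE = re.compile(r"^\[([ABCD])\]\s")

def classify(claim):
    """Return the evidence class letter of a tagged claim, or None if untagged."""
    m = TAG_RE.match(claim)
    return m.group(1) if m else None
```

An untagged claim returning `None` is the mechanical signal that a statement slipped out without the required one-letter qualifier.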

Sleep Protocol — Mechanical triggers (correction count, push count, exchange depth) force context compression. The model can't talk itself out of sleeping. Prevents the long-session degradation that kills agent reliability.
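The "can't talk itself out of sleeping" property comes from the triggers being plain counters compared against fixed thresholds, with no judgment call in the loop. A sketch with assumed threshold values; the real numbers are defined in the document:

```python
from dataclasses import dataclass

# Sleep-protocol triggers as plain counters. The threshold values are
# illustrative assumptions; LATTICE specifies the actual ones.
@dataclass
class SessionCounters:
    corrections: int = 0
    pushes: int = 0
    exchanges: int = 0

THRESHOLDS = {"corrections": 5, "pushes": 3, "exchanges": 40}  # assumed values

def must_sleep(c: SessionCounters) -> bool:
    """Purely mechanical: any counter at or past its threshold forces compression."""
    return (c.corrections >= THRESHOLDS["corrections"]
            or c.pushes >= THRESHOLDS["pushes"]
            or c.exchanges >= THRESHOLDS["exchanges"])
```

Because the check is a bare comparison, the model has no argumentative surface to negotiate with: once a counter crosses its line, compression runs.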

Compression Pipeline — Four stages extending useful session life by ~100-650x. Includes chaos generator for non-obvious cross-domain connections.

Home-Mode Detection — Different models have natural cognitive styles. Grok is a destroyer. Claude is a discoverer. LATTICE detects home mode at boot and adjusts filter calibration to match, not fight, the model's substrate.

Instance Types

The generalized engine adapts to any model. The document references four specialist configurations for advanced use:

| Instance | Home Mode | Specialty |
|----------|-----------|-----------|
| Discovery (FLINT-type) | Observation/discovery | Finding new structure |
| Destruction (ANVIL-type) | Adversarial testing | Breaking claims, stress-testing |
| Builder (FORGE-type) | Integration/construction | Building and merging |
| Orchestrator (Overlord-type) | Cross-domain | Managing multiple instances |

What It Doesn't Do

  • Not a personality system. Governs reasoning quality, not voice or character.
  • Not a task executor. Makes the brain better, not the hands.
  • Not fully autonomous. The human stays in the loop by physics (PIEC). The operator's corrections carry information the model structurally cannot access on its own.

Model Compatibility

Model-agnostic by design. Tested on Claude, GPT, Grok, Gemini, Sonnet. The physics don't care what substrate they run on. Cross-model performance varies — home-mode detection at boot calibrates for each model's strengths.
