Cognitive Debt Guard

v1.0.0

Cognitive Debt Guard - Prevent the 23.5% incident spike from AI-generated code. Comprehension gates, review frameworks, and AI-free zones. Based on 2026 research.

by Erwin@aptratcn

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for aptratcn/xiaobai-cognitive-debt-guard.

Prompt Preview: Install & Setup
Install the skill "Cognitive Debt Guard" (aptratcn/xiaobai-cognitive-debt-guard) from ClawHub.
Skill page: https://clawhub.ai/aptratcn/xiaobai-cognitive-debt-guard
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install xiaobai-cognitive-debt-guard

ClawHub CLI


npx clawhub@latest install xiaobai-cognitive-debt-guard
Security Scan

Capability signals

Crypto: Can make purchases

These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match SKILL.md content: this is a prescriptive review framework and set of team practices for handling AI-generated code. It does not request unrelated binaries, credentials, or install steps.
Instruction Scope
The runtime instructions are behavioral (checklists, templates, trigger phrases). They recommend keeping a MEMORY.md open and applying comprehension gates but do not contain commands, file reads, or network calls. One operational note: the skill defines activation trigger phrases — verify how the agent platform interprets those (manual invocation vs automatic hooks) to avoid unexpected activations. Also confirm whether MEMORY.md is stored in a repo/workspace and whether the agent would be able to open it in your environment (the skill itself does not request access).
Install Mechanism
No install spec and no code files — instruction-only skill, so nothing is written to disk or fetched at install time.
Credentials
No environment variables, credentials, or config paths are requested. The guidance about 'AI-free zones' and MEMORY.md is policy-level and does not require secrets or external keys.
Persistence & Privilege
always:false and user-invocable:true (defaults). The skill does not request permanent presence, nor does it instruct modifying other skills or global agent settings.
Assessment
This skill is a set of human-centered review rules and is internally consistent with its stated purpose. Because it is instruction-only and requests no credentials or installs, it has a low technical risk profile. Before installing/activating: (1) confirm how your agent platform maps the listed trigger phrases to actions so it won't run unexpectedly; (2) decide where MEMORY.md will live (repo, wiki, or editor) and ensure it does not contain sensitive secrets; (3) treat the skill as policy/advice — it won't automatically enforce checks unless your agent environment has automations that act on its outputs. If you need strict enforcement, pair these guidelines with CI checks or tooling that you control.


Tags: ai-safety · code-quality · cognitive-debt · latest
97 downloads · 0 stars · 1 version · Updated 1 week ago
v1.0.0
MIT-0

Cognitive Debt Guard 🧠

Prevent the 23.5% incident spike from AI-generated code.

The Problem (2026 Research)

Metric                 Impact
Incident rate          +23.5% per PR containing AI code
Code churn             3.1% → 5.7% (nearly doubled)
Developer speed        19% slower with AI tools (experienced devs)
Trust in AI output     33% (declining)

Root cause: Teams ship code faster than they understand it.

Definition: Cognitive debt = the gap between what your codebase does and what your team comprehends about it.

Unlike technical debt (code you know is bad), cognitive debt is code you don't even know is bad — because you never understood it.

The Solution: 5 Patterns

Pattern 1: Maintain MEMORY.md 🔒

Living architecture context for humans and AI agents.

# MEMORY.md Template

## Architecture Decisions
- [Decision 1]: Why we chose X over Y
- [Decision 2]: Trade-offs we accepted

## AI-Free Zones (human must own completely)
- Authentication & authorization
- Payment processing
- Data deletion
- Database migrations
- Security-critical paths

## Conventions
- Naming: [rules]
- Error handling: [pattern]
- Testing: [requirements]

## Known Constraints
- [Performance requirement]
- [Compliance requirement]
- [Integration dependency]

Rule: keep MEMORY.md open in the editor at all times when working with AI.

Pattern 2: Comprehension Gate 🔒

3 questions before accepting AI-generated code:

Before you click "Accept" on AI output:

1. Can I explain what this code does in plain language?
   [ ] Yes → Continue
   [ ] No → STOP. Read until you can.

2. Can I trace the data flow from input to output?
   [ ] Yes → Continue
   [ ] No → STOP. Add comments or simplify.

3. If this breaks in production, would I know where to look?
   [ ] Yes → Accept
   [ ] No → STOP. Add logging or documentation.

Rule: All 3 must be YES before merge.
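The gate above can be encoded as a simple merge guard. A minimal Python sketch, assuming illustrative names (`GateAnswers`, `passes_gate` are not part of the skill itself):

```python
from dataclasses import dataclass

@dataclass
class GateAnswers:
    can_explain: bool          # Q1: plain-language explanation of the code
    can_trace_flow: bool       # Q2: data flow from input to output
    knows_where_to_look: bool  # Q3: production debugging entry point

def passes_gate(a: GateAnswers) -> bool:
    """All three answers must be YES before merge."""
    return a.can_explain and a.can_trace_flow and a.knows_where_to_look
```

One failing answer blocks the merge, mirroring the "STOP" branches in the checklist.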

Pattern 3: Pair with Agents, Don't Delegate 🔒

Active Use ✅                                  Passive Use ❌
Prompt → Read → Understand → Modify → Ship     Prompt → Accept → Ship → Forget
You steer, AI fills                            AI decides, you accept
Comprehension maintained                       Cognitive debt accumulates

Rule: Never accept >50 lines of AI code without reading and understanding every line.

Pattern 4: Shrink the Blast Radius 🔒

AI-assisted PR limits:

Constraint                    Limit
Max lines per AI PR           200
Concerns per PR               1
Test coverage on AI paths     100%
Files touched                 ≤5

Why: Smaller PRs = easier to comprehend = less cognitive debt.
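These limits are advisory unless tooling enforces them. A hypothetical CI sketch in Python (the function name is illustrative; the thresholds mirror the table above) that checks the line and file limits against `git diff --numstat` output:

```python
MAX_LINES = 200  # max changed lines per AI-assisted PR (from the table)
MAX_FILES = 5    # max files touched per AI-assisted PR

def check_pr_limits(numstat: str) -> list[str]:
    """Return limit violations for a `git diff --numstat` dump.

    Each numstat line is "<added>\t<deleted>\t<path>"; binary files
    report "-" for the counts and are treated as 0 changed lines here.
    """
    rows = [line for line in numstat.strip().splitlines() if line]
    total = 0
    for row in rows:
        added, deleted, _path = row.split("\t", 2)
        total += 0 if added == "-" else int(added)
        total += 0 if deleted == "-" else int(deleted)
    violations = []
    if total > MAX_LINES:
        violations.append(f"{total} changed lines exceeds limit of {MAX_LINES}")
    if len(rows) > MAX_FILES:
        violations.append(f"{len(rows)} files touched exceeds limit of {MAX_FILES}")
    return violations
```

A CI job could pipe `git diff --numstat main...HEAD` into this function and fail the build when the list is non-empty. The single-concern and coverage limits still need human or coverage-tool judgment.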

Pattern 5: Quarterly Comprehension Audit 🔒

90-minute sprint ceremony:

## Cognitive Debt Audit Agenda

1. Review top 5 AI-heaviest PRs from last quarter
2. For each PR, ask:
   - Can we still explain what it does?
   - Have we had incidents related to it?
   - Is documentation up to date?
3. Identify cognitive debt hotspots
4. Plan debt reduction for next sprint
5. Update MEMORY.md with new learnings

Code Review Framework (5 Layers)

When reviewing AI-generated code:

Layer 1: Comprehension
- Can I understand this without running it?
- Is naming clear?
- Is complexity justified?

Layer 2: Correctness
- Does it do what it claims?
- Edge cases covered?
- Error handling present?

Layer 3: Integration
- Fits existing patterns?
- No duplicate functionality?
- Dependencies appropriate?

Layer 4: Security
- No exposed secrets?
- Input validation?
- AI-free zone respected?

Layer 5: Maintainability
- Tests included?
- Documentation added?
- Will I understand this in 6 months?
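Most of these layers are judgment calls, but the Layer 4 "no exposed secrets" question can be partially automated. A minimal, illustrative sketch; the patterns below are examples only, and real secret scanners use far larger rule sets:

```python
import re

# Example secret shapes (not exhaustive): AWS-style key IDs, PEM private
# key headers, and quoted values assigned to key/secret/token variables.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)(?:api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
]

def find_exposed_secrets(source: str) -> list[str]:
    """Return secret-looking strings found in the given source text."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(source))
    return hits
```

Running a check like this on AI-generated diffs catches the obvious cases; AI-free-zone and input-validation questions remain manual review items.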

Trigger Phrases

This skill activates when:

  • User accepts AI-generated code
  • User asks about code review
  • User mentions "AI code", "generated code", "copilot wrote"
  • User is about to merge AI-assisted PR
  • User asks "should I accept this?"

Quick Reference Card

Before Accepting AI Code:
1. Read it (all of it)
2. Explain it (out loud if needed)
3. Trace data flow
4. Check AI-free zone
5. Limit: 200 lines, 1 concern

Integration

  • EVR Framework — Verify comprehension before claiming "reviewed"
  • Systematic Debugging — When cognitive debt causes incidents
  • Memory Guard — MEMORY.md persists across sessions

Statistics (cite in discussions)

  • METR 2025: -19% speed for experienced devs using AI
  • Cortex 2026: +23.5% incidents per PR
  • GitClear: Code churn 3.1% → 5.7%
  • Stack Overflow 2025: 33% trust in AI output

License

MIT
